generated by bibbase.org
To embed this list in an existing web page, copy and paste any of the following snippets.

JavaScript (easiest):

<script src="https://bibbase.org/show?bib=https%3A%2F%2Fraw.githubusercontent.com%2FRoznn%2FEUSIPCO%2Fmain%2Feusipco2019url.bib&jsonp=1"></script>

PHP:

<?php
$contents = file_get_contents("https://bibbase.org/show?bib=https%3A%2F%2Fraw.githubusercontent.com%2FRoznn%2FEUSIPCO%2Fmain%2Feusipco2019url.bib&jsonp=1");
print_r($contents);
?>

iFrame (not recommended):

<iframe src="https://bibbase.org/show?bib=https%3A%2F%2Fraw.githubusercontent.com%2FRoznn%2FEUSIPCO%2Fmain%2Feusipco2019url.bib&jsonp=1"></iframe>

For more details see the documentation.
2019 (500)
Calibration of Antenna Array with Dual Channel Switched Receiver System.
Palanivelu, D. P.; and Oispuu, M.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
Paper: https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532588.pdf

@InProceedings{8902337,
  author = {D. P. Palanivelu and M. Oispuu},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Calibration of Antenna Array with Dual Channel Switched Receiver System},
  year = {2019},
  pages = {1-5},
  abstract = {A precise estimation of the Direction of Arrival (DOA) of a signal is the fundamental functional requirement of antenna technology. A switched receiver system results in substantial reduction in hardware components, measurement complexity and cost of the entire system. In this paper, three diverse calibration methods are investigated on an array antenna with a dual channel switched receiver system. The calibration matrices are generated from the real measurements and implemented on the simulated and the real measurements. Robustness and flexibility of the calibration methods are compared in this paper. A conventional beamformer technique is implemented to perform Direction Finding (DF) of the calibrated system.},
  keywords = {antenna arrays;array signal processing;calibration;direction-of-arrival estimation;matrix algebra;receivers;hardware components;measurement complexity;diverse calibration methods;calibration matrices;Direction Finding;calibrated system;antenna array calibration;dual channel switched receiver system;fundamental functional requirement;antenna technology;direction of arrival estimation;calibration methods;conventional beamformer technique;Calibration;Antenna arrays;Receivers;Direction-of-arrival estimation;Switches;Antenna measurements;Array signal processing;switched receiver system;antenna array;calibration;multichannel signal processing},
  doi = {10.23919/EUSIPCO.2019.8902337},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532588.pdf},
}
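The conventional beamformer named in the abstract is the standard Bartlett DOA estimator. A minimal NumPy sketch for a uniform linear array follows; the array geometry, source angle, and noise level are illustrative assumptions, not values from the paper:

```python
import numpy as np

def steering(theta_deg, n_sensors, d=0.5):
    """Steering vector of a uniform linear array; d is spacing in wavelengths."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def bartlett_spectrum(X, angles, d=0.5):
    """Conventional (Bartlett) beamformer power over candidate angles.
    X: (n_sensors, n_snapshots) complex snapshot matrix."""
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    p = []
    for th in angles:
        a = steering(th, X.shape[0], d)
        p.append(np.real(a.conj() @ R @ a) / X.shape[0])
    return np.array(p)

# Simulate one source at 20 degrees with light noise
rng = np.random.default_rng(0)
n_sensors, n_snap = 8, 200
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
X = np.outer(steering(20, n_sensors), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

angles = np.arange(-90, 91)
doa_est = angles[np.argmax(bartlett_spectrum(X, angles))]
print(doa_est)  # peak near 20 degrees
```

The spectrum is scanned on a 1-degree grid; the peak location is the DOA estimate.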
Multiple-Degradation Video Super-Resolution with Direct Inversion of the Low-Resolution Formation Model.
Lopez-Tapia, S.; Lucas, A.; Molina, R.; and Katsaggelos, A. K.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
Paper: https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533612.pdf

@InProceedings{8902338,
  author = {S. Lopez-Tapia and A. Lucas and R. Molina and A. K. Katsaggelos},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Multiple-Degradation Video Super-Resolution with Direct Inversion of the Low-Resolution Formation Model},
  year = {2019},
  pages = {1-5},
  abstract = {With the increase of popularity of high and ultra high definition displays, the need to improve the quality of content already obtained at much lower resolutions has grown. Since current video super-resolution methods are trained with a single degradation model (usually bicubic downsampling), they are not robust to mismatch between training and testing degradation models, in which case their performance deteriorates. In this work we propose a new Convolutional Neural Network for video super resolution which is robust to multiple degradation models and uses the pseudo-inverse image formation model as part of the network architecture during training. The experimental validation shows that our approach outperforms current state of the art methods.},
  keywords = {convolutional neural nets;image resolution;image sampling;video signal processing;multiple-degradation video super-resolution;direct inversion;low-resolution formation model;ultra high definition displays;single degradation model;bicubic downsampling;testing degradation models;Convolutional Neural Network;video super resolution;multiple degradation models;pseudoinverse image formation model;Degradation;Convolution;Training;Kernel;Adaptation models;Europe;Video Super-resolution;convolutional neuronal networks;image formation},
  doi = {10.23919/EUSIPCO.2019.8902338},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533612.pdf},
}
Filtering-based Analysis Comparing the DFA with the CDFA for Wide Sense Stationary Processes.
Berthelot, B.; Grivel, É.; Legrand, P.; André, J.-M.; Mazoyer, P.; and Ferreira, T.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
Paper: https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533530.pdf

@InProceedings{8902339,
  author = {B. Berthelot and É. Grivel and P. Legrand and J. -M. André and P. Mazoyer and T. Ferreira},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Filtering-based Analysis Comparing the DFA with the CDFA for Wide Sense Stationary Processes},
  year = {2019},
  pages = {1-5},
  abstract = {The detrended fluctuation analysis (DFA) is widely used to estimate the Hurst exponent. Although it can be outperformed by wavelet based approaches, it remains popular because it does not require a strong expertise in signal processing. Recently, some studies were dedicated to its theoretical analysis and its limits. More particularly, some authors focused on the so-called fluctuation function by searching a relation with an estimation of the normalized covariance function under some assumptions. This paper is complementary to these works. We first show that the square of the fluctuation function can be expressed in a similar matrix form for the DFA and the variant we propose, called Continuous-DFA (CDFA), where the global trend is constrained to be continuous. Then, using the above representation for wide-sense-stationary processes, the statistical mean of the square of the fluctuation function can be expressed from the correlation function of the signal and consequently from its power spectral density, without any approximation. The differences between both methods can be highlighted. It also confirms that they can be seen as ad hoc wavelet based techniques.},
  keywords = {covariance analysis;filtering theory;time series;wavelet transforms;CDFA;wide sense stationary processes;detrended fluctuation analysis;Hurst exponent;wavelet based approaches;strong expertise;signal processing;theoretical analysis;fluctuation function;normalized covariance function;wide-sense-stationary processes;correlation function;ad hoc wavelet based techniques;continuous-DFA;Market research;Correlation;Signal processing;Estimation;Europe;Time series analysis;Fourier transforms;filter;interpretation;Hurst;DFA;CDFA},
  doi = {10.23919/EUSIPCO.2019.8902339},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533530.pdf},
}
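The DFA pipeline analyzed above (integrate the mean-removed signal, detrend each window by a polynomial fit, take the RMS of the residuals as the fluctuation function F(n)) can be sketched as follows. The window sizes and the white-noise test signal are illustrative choices, not taken from the paper:

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis: returns F(n) for each window size n."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        n_win = len(y) // n
        sq = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

# White noise has Hurst exponent ~0.5; the log-log slope of F(n) estimates it.
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
hurst = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(hurst, 2))  # close to 0.5
```

The slope of log F(n) versus log n is the Hurst-exponent estimate the abstract refers to.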
Segmentation of Surface Cracks Based on a Fully Convolutional Neural Network and Gated Scale Pooling.
König, J.; Jenkins, M. D.; Barrie, P.; Mannion, M.; and Morison, G.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
Paper: https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532490.pdf

@InProceedings{8902341,
  author = {J. König and M. D. Jenkins and P. Barrie and M. Mannion and G. Morison},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Segmentation of Surface Cracks Based on a Fully Convolutional Neural Network and Gated Scale Pooling},
  year = {2019},
  pages = {1-5},
  abstract = {Continual use, as well as aging, allows cracks to develop on concrete surfaces. These cracks are early indications of surface degradation. Therefore, regular inspection of surfaces is an important step in preventive maintenance, allowing reactive measures in a timely manner when cracks may impair the integrity of a structure. Automating parts of this inspection process provides the potential for improved performance and more efficient resource usage, as these inspections are usually carried out manually by trained inspectors. In this work we propose a Fully Convolutional, U-Net based, Neural Network architecture to automatically segment cracks. Conventional pooling operations in Convolutional Neural Networks are static operations that reduce the spatial size of an input, which may lead to loss of information as features are discarded. In this work we introduce and incorporate a novel pooling function into our architecture, Gated Scale Pooling. This operation aims to retain features from multiple scales as well as adapt proactively to the feature map being pooled. Training and testing of our network architecture is conducted on three different public surface crack datasets. It is shown that employing Gated Scale Pooling instead of Max Pooling achieves superior results. Furthermore, our experiments also indicate strongly competitive results when compared with other crack segmentation techniques.},
  keywords = {convolutional neural nets;image segmentation;inspection;learning (artificial intelligence);preventive maintenance;structural engineering computing;surface cracks;surface degradation;inspection process;resource usage;inspections;neural network architecture;segment cracks;conventional pooling operations;convolutional neural networks;static operations;pooling function;crack segmentation techniques;gated scale pooling;max pooling;public surface crack datasets;Logic gates;Convolutional codes;Surface cracks;Training;Convolution;Decoding;Task analysis;Crack Segmentation;Deep Learning;CNN;Pooling},
  doi = {10.23919/EUSIPCO.2019.8902341},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532490.pdf},
}
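Gated Scale Pooling itself is this paper's contribution; as background, the static max pooling it replaces, which keeps only the largest value per window and discards the rest, can be sketched in a few lines of NumPy:

```python
import numpy as np

def max_pool2d(x, k=2):
    """Non-overlapping k-by-k max pooling of a 2-D feature map.
    Trailing rows/columns that do not fill a window are dropped."""
    h, w = x.shape
    return (x[:h // k * k, :w // k * k]
            .reshape(h // k, k, w // k, k)
            .max(axis=(1, 3)))

pooled = max_pool2d(np.arange(16).reshape(4, 4))
print(pooled)  # [[ 5  7] [13 15]]
```

Each 2x2 window collapses to its maximum, so three of every four values are discarded; this is the information loss the abstract points to.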
Second-order Time-Reassigned Synchrosqueezing Transform: Application to Draupner Wave Analysis.
Fourer, D.; and Auger, F.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
Paper: https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528364.pdf

@InProceedings{8902342,
  author = {D. Fourer and F. Auger},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Second-order Time-Reassigned Synchrosqueezing Transform: Application to Draupner Wave Analysis},
  year = {2019},
  pages = {1-5},
  abstract = {This paper addresses the problem of efficiently jointly representing a non-stationary multicomponent signal in time and frequency. We introduce a novel enhancement of the time-reassigned synchrosqueezing method designed to compute sharpened and reversible representations of impulsive or strongly modulated signals. After establishing theoretical relations of the new proposed method with our previous results, we illustrate in numerical experiments the improvement brought by our proposal when applied on both synthetic and real-world signals. Our experiments deal with an analysis of the Draupner wave record for which we provide pioneered time-frequency analysis results.},
  keywords = {signal representation;time-frequency analysis;wavelet transforms;Draupner wave analysis;nonstationary multicomponent signal;time-reassigned synchrosqueezing method;reversible representations;impulsive signals;strongly modulated signals;Draupner wave record;time-frequency analysis results;second-order time-reassigned synchrosqueezing transform;Time-frequency analysis;Transforms;Spectrogram;Frequency estimation;Microsoft Windows;Europe},
  doi = {10.23919/EUSIPCO.2019.8902342},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528364.pdf},
}
A Robust and Sequential Approach for Detecting Gait Asymmetry Based on Radar Micro-Doppler Signatures.
Seifert, A.-K.; Reinhard, D.; Zoubir, A. M.; and Amin, M. G.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
Paper: https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532661.pdf

@InProceedings{8902343,
  author = {A. -K. Seifert and D. Reinhard and A. M. Zoubir and M. G. Amin},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {A Robust and Sequential Approach for Detecting Gait Asymmetry Based on Radar Micro-Doppler Signatures},
  year = {2019},
  pages = {1-5},
  abstract = {Recently, radar has become of increased interest to serve as an unobtrusive sensor for human motion analysis. In particular, for gait analysis, radar could supplement existing technologies to enhance medical diagnostics. Quick turn-around medical evaluation and diagnosis requires reduced data acquisition time which is of interest to patients, doctors, and therapists alike. Hence, we present a robust and sequential approach for detecting gait asymmetry based on radar micro-Doppler signatures. The results obtained based on experimental radar data indicate that high detection rates can be achieved at reduced measurement times compared to conventional approaches.},
  keywords = {body sensor networks;data acquisition;Doppler radar;gait analysis;image motion analysis;medical signal processing;patient diagnosis;high detection rates;experimental radar data;sequential approach;robust approach;data acquisition time;turn-around medical evaluation;medical diagnostics;gait analysis;human motion analysis;unobtrusive sensor;radar microDoppler signatures;gait asymmetry;Legged locomotion;Radar detection;Doppler effect;Doppler radar;Uncertainty;Testing;sequential detection;robustness;gait analysis;Doppler radar;ambient assisted living},
  doi = {10.23919/EUSIPCO.2019.8902343},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532661.pdf},
}
Topology Inference and Signal Representation Using Dictionary Learning.
Ramezani-Mayiami, M.; and Skretting, K.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
Paper: https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532727.pdf

@InProceedings{8902344,
  author = {M. Ramezani-Mayiami and K. Skretting},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Topology Inference and Signal Representation Using Dictionary Learning},
  year = {2019},
  pages = {1-5},
  abstract = {This paper presents a Joint Graph Learning and Signal Representation algorithm, called JGLSR, for simultaneous topology learning and graph signal representation via a learned over-complete dictionary. The proposed algorithm alternates between three main steps: sparse coding, dictionary learning, and graph topology inference. We introduce the “transformed graph” which can be considered as a projected graph in the transform domain spanned by the dictionary atoms. Simulation results via synthetic and real data show that the proposed approach has a higher performance when compared to the well-known algorithms for joint undirected graph topology inference and signal representation, when there is no information about the transform domain. Five performance measures are used to compare JGLSR with two conventional algorithms and show its higher performance.},
  keywords = {graph theory;learning (artificial intelligence);signal representation;transform domain;dictionary atoms;joint undirected graph topology inference;dictionary learning;joint graph learning;simultaneous topology learning;graph signal representation;over-complete dictionary;algorithm alternates;transformed graph;projected graph;JGLSR;signal representation algorithm;topology inference;Dictionaries;Topology;Signal processing algorithms;Laplace equations;Signal processing;Signal representation;Machine learning;Graph signal processing;dictionary learning;topology inference;signal recovery;multi-variate signal},
  doi = {10.23919/EUSIPCO.2019.8902344},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532727.pdf},
}
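The sparse-coding step named in the abstract is commonly solved greedily; a minimal Orthogonal Matching Pursuit sketch over a fixed dictionary is shown below (the paper's JGLSR alternation is not reproduced here; dictionary size and test signal are illustrative):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of D to represent y."""
    r = y.copy()
    support, coef = [], np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))   # best-correlated atom
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef                       # orthogonalized residual
    return support, coef

# Orthonormal dictionary makes exact recovery easy to verify
rng = np.random.default_rng(3)
D, _ = np.linalg.qr(rng.standard_normal((30, 30)))
y = 2.0 * D[:, 3] - 1.0 * D[:, 11]
support, coef = omp(D, y, k=2)
print(sorted(support))  # [3, 11]
```

With an orthonormal dictionary the correlations are exact, so the two active atoms are recovered in two iterations; over-complete dictionaries, as used in the paper, need more care.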
A Modelling Approach to Generate Representative UAV Trajectories Using PSO.
Salamat, B.; and Tonello, A. M.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
Paper: https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533493.pdf

@InProceedings{8902345,
  author = {B. Salamat and A. M. Tonello},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {A Modelling Approach to Generate Representative UAV Trajectories Using PSO},
  year = {2019},
  pages = {1-5},
  abstract = {We propose a trajectory generation algorithm (STGA) that represents realistically and stochastically trajectories followed by unmanned air vehicles (UAVs), in particular quadrotors UAVs. It is meant to be a tool for testing localization, state estimation and control algorithms. We propose to firstly model a number of representative flight scenarios. For each scenario, stochastic trajectories are generated. They follow a parametric non-linear model whose parameters are determined using a multi-objective evolutionary optimization method called particle swarm optimization (PSO). Numerical results are reported to verify feasibility in comparison to pure random unconstrained trajectory algorithm.},
  keywords = {aerospace control;autonomous aerial vehicles;genetic algorithms;particle swarm optimisation;path planning;trajectory control;generate representative UAV trajectories;PSO;trajectory generation algorithm;unmanned air vehicles;state estimation;control algorithms;representative flight scenarios;stochastic trajectories;multiobjective evolutionary optimization method;pure random unconstrained trajectory algorithm;nonlinear model;quadrotors UAVs;particle swarm optimization;Trajectory;Stochastic processes;Unmanned aerial vehicles;Acceleration;Signal processing algorithms;Europe;Signal processing;Stochastic trajectory generation;unmanned air vehicles (UAVs);particle swarm optimization},
  doi = {10.23919/EUSIPCO.2019.8902345},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533493.pdf},
}
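PSO, the optimizer named in the abstract, maintains a swarm of candidate solutions whose velocities are pulled toward each particle's personal best and the global best. A minimal global-best sketch on a toy cost function follows; swarm size, coefficients, and the sphere cost are illustrative, not the paper's setup:

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                          # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Sphere function: global minimum 0 at the origin
best_x, best_f = pso(lambda p: np.sum(p ** 2), dim=3)
print(best_f)  # near 0
```

In the paper this optimizer tunes the parameters of a nonlinear trajectory model rather than a toy cost.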
2-D non-separable integer implementation of paraunitary filter bank based on the quaternionic multiplier block-lifting structure.
Rybenkov, E. V.; and Petrovsky, N. A.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
Paper: https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532500.pdf

@InProceedings{8902489,
  author = {E. V. Rybenkov and N. A. Petrovsky},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {2-D non-separable integer implementation of paraunitary filter bank based on the quaternionic multiplier block-lifting structure},
  year = {2019},
  pages = {1-5},
  abstract = {This paper presents a novel technique of factorization for 2-D non-separable quaternionic paraunitary filter banks (2-D NSQ-PUFB) based on the integer-to-integer invertible quaternionic multipliers. Two-dimensional factorization structures called "16in-16out" and "64in-64out" respectively for 4-channel and 8-channel Q-PUFB based on the proposed technique are shown. Comparison of the energy compaction level between the 2-D separable Q-PUFB based on the 1D Q-PUFB (8x24 Q-PUFB one-dimensional coding gain is CG1D = 9.38 dB) and 2-D non-separable Q-PUFB (8x24 2D-NSQ-PUFB, multidimensional coding gain is CGMD = 17.15 dB) for the Barbara image shows that the 2-D non-separable Q-PUFB generates a higher percentage of small-value coefficients, hence creates a significant increase in the number of zero trees. This holds the key to our coder's superior performance.},
  keywords = {channel bank filters;image coding;trees (mathematics);quaternionic multiplier block-lifting structure;integer-to-integer invertible quaternionic multipliers;two-dimensional factorization structures;8-channel Q-PUFB;2D nonseparable Q-PUFB;2D-NSQ-PUFB;2D nonseparable integer implementation;2D nonseparable quaternionic paraunitary filter banks;energy compaction level;8x24 Q-PUFB one-dimensional coding gain;Barbara image;zero trees;Quaternions;Two dimensional displays;Transforms;TV;Image coding;Europe;Signal processing;quaternionic paraunitary filter banks;two-dimensional;non-separable transform},
  doi = {10.23919/EUSIPCO.2019.8902489},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532500.pdf},
}
\n\n\n
\n \n\n \n \n \n \n \n \n Estimation of Measurement-Noise Variance for Variable-Step-Size NLMS Filters.\n \n \n \n \n\n\n \n Strutz, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EstimationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902491,\n  author = {T. Strutz},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimation of Measurement-Noise Variance for Variable-Step-Size NLMS Filters},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Least-mean-square (LMS) filters are a well-studied processing technique that adapts iteratively to an unknown process. It has been proven that the parameters of the LMS filter converge to the optimum (Wiener) solution. Unfortunately, this is only possible if the adaptation steps are infinitely small. Small steps, however, result in slow convergence. The challenge is to vary the step size such that they are large, when the LMS filter coefficients are far from their optimal values and to lower the step size, when the adaptive system is approaching the optimum. State-of-the-art approaches to variable-step-size determination take estimates of the measurement noise into account for optimal performance. This paper proposes a new technique for the estimation of the measurement-noise variance, which also can deal with sudden changes of the unknown system. Based on investigations with a broad range of experimental conditions in terms of test signals and different measurement-noise levels, it is shown that the proposed estimation technique is robust to changes of the unknown system and outperforms other methods.},\n  keywords = {estimation theory;filtering theory;least mean squares methods;adaptive system;least-mean-square filters;measurement-noise variance estimation;variable-step-size NLMS filter determination;optimum Wiener solution;Estimation;Noise measurement;Convergence;Adaptive systems;Proposals;Acoustic measurements;Europe;least mean squares;NLMS;measurement-noise estimation;adaptive systems;change detection},\n  doi = {10.23919/EUSIPCO.2019.8902491},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570524659.pdf},\n}\n\n
\n
\n\n\n
\n Least-mean-square (LMS) filters are a well-studied processing technique that adapts iteratively to an unknown process. It has been proven that the parameters of the LMS filter converge to the optimum (Wiener) solution. Unfortunately, this is only possible if the adaptation steps are infinitely small. Small steps, however, result in slow convergence. The challenge is to vary the step size such that it is large when the LMS filter coefficients are far from their optimal values, and to lower it when the adaptive system approaches the optimum. State-of-the-art approaches to variable-step-size determination take estimates of the measurement noise into account for optimal performance. This paper proposes a new technique for the estimation of the measurement-noise variance, which can also deal with sudden changes of the unknown system. Based on investigations with a broad range of experimental conditions in terms of test signals and different measurement-noise levels, it is shown that the proposed estimation technique is robust to changes of the unknown system and outperforms other methods.\n
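The step-size logic described in this abstract can be sketched in a few lines. A minimal illustration, assuming a known noise variance `sigma2_v`; the paper's own variance estimator is not reproduced here, and the shrinkage rule below is one common heuristic, not necessarily the author's:

```python
import numpy as np

def nlms_vss(x, d, M, sigma2_v=0.0, eps=1e-8):
    """NLMS identification of an M-tap FIR system with a noise-aware
    variable step size: the step shrinks toward zero as the error power
    approaches the measurement-noise floor sigma2_v."""
    w = np.zeros(M)
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]        # regressor, most recent sample first
        e = d[n] - w @ u                    # a-priori error
        mu = max(0.0, 1.0 - sigma2_v / (e * e + eps))
        w += mu * e * u / (u @ u + eps)     # normalized LMS update
    return w
```

With `sigma2_v = 0` this reduces to plain NLMS with unit step size; an over-estimated `sigma2_v` freezes adaptation, which is why a robust noise-variance estimate matters.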
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimization of Signal Processing Chains: Application to Cascaded Filters.\n \n \n \n \n\n\n \n Hugeat, A.; Bernard, J.; Friedt, J. -M.; Bourgeois, P. -Y.; and Goavec-Merou, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"optimizationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902492,\n  author = {A. Hugeat and J. Bernard and J. -M. Friedt and P. -Y. Bourgeois and G. Goavec-Merou},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Optimization of Signal Processing Chains: Application to Cascaded Filters},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The design of digital signal processing chains must meet competing requirements by maximizing performance (e.g. rejection in the case of a filter) while reducing resource consumption. In this paper, we explore a new methodology for designing chains assembled by cascading basic processing blocks. We apply this optimization strategy to the example of a cascade of Finite Impulse Response (FIR) filters. While the design of cascaded FIR filters generally focuses on low-level details, we provide a high-level model. This development strategy can be generalized for any signal processing chain made by assembling blocks whose resource consumption is qualified: a solver aims at meeting multiple objectives including minimizing resource consumption or optimizing performance. This result is then transformed into a synthesizable solution targeting a reconfigurable Field Programmable Gate Array (FPGA). 
The experiments show that this approach gives efficient results, both on the quality of the signal filtering and the processing resource used for the design.},\n  keywords = {field programmable gate arrays;FIR filters;optimisation;minimizing resource consumption;optimizing performance;signal filtering;processing resource;signal processing chain;cascaded filters;digital signal processing chains;competing requirements;basic processing blocks;optimization strategy;finite impulse response filters;cascaded FIR filters;assembling blocks;Finite impulse response filters;Mathematical model;Optimization;Field programmable gate arrays;Software;Europe;Field Programmable Gate Array;Finite Impulse Response filter;optimization},\n  doi = {10.23919/EUSIPCO.2019.8902492},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533706.pdf},\n}\n\n
\n
\n\n\n
\n The design of digital signal processing chains must meet competing requirements by maximizing performance (e.g. rejection in the case of a filter) while reducing resource consumption. In this paper, we explore a new methodology for designing chains assembled by cascading basic processing blocks. We apply this optimization strategy to the example of a cascade of Finite Impulse Response (FIR) filters. While the design of cascaded FIR filters generally focuses on low-level details, we provide a high-level model. This development strategy can be generalized for any signal processing chain made by assembling blocks whose resource consumption is qualified: a solver aims at meeting multiple objectives including minimizing resource consumption or optimizing performance. This result is then transformed into a synthesizable solution targeting a reconfigurable Field Programmable Gate Array (FPGA). The experiments show that this approach gives efficient results, both in the quality of the signal filtering and in the processing resources used for the design.\n
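As an illustrative aside (not taken from the paper): a cascade of FIR stages is itself an FIR filter whose impulse response is the convolution of the stage responses, and stopband rejections expressed in dB add across stages, which is what makes per-stage resource/rejection trade-offs composable. A minimal sketch:

```python
import numpy as np

def cascade_response(stages):
    """Equivalent impulse response of a cascade of FIR stages."""
    h = np.array([1.0])
    for taps in stages:
        h = np.convolve(h, taps)   # cascading filters == convolving impulse responses
    return h

def rejection_db(h, f_lo, f_hi, nfft=4096):
    """Worst-case attenuation (dB) over a normalized stopband [f_lo, f_hi]
    (frequencies in cycles/sample, 0..0.5), relative to the peak gain."""
    H = np.abs(np.fft.rfft(h, nfft))
    f = np.linspace(0.0, 0.5, len(H))
    band = (f >= f_lo) & (f <= f_hi)
    return -20.0 * np.log10(H[band].max() / H.max())
```

For example, two identical moving-average stages give exactly twice the dB rejection of a single stage over the same stopband.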
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Löwner-Based Tensor Decomposition for Blind Source Separation in Atrial Fibrillation ECGs.\n \n \n \n\n\n \n de Oliveira, P. M. R.; and Zarzoso, V.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902493,\n  author = {P. M. R. {de Oliveira} and V. Zarzoso},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Löwner-Based Tensor Decomposition for Blind Source Separation in Atrial Fibrillation ECGs},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The estimation of the atrial activity (AA) signal in electrocardiogram (ECG) recordings is an important step in the noninvasive analysis of atrial fibrillation (AF), the most common sustained cardiac arrhythmia in clinical practice. Recently, this blind source separation (BSS) problem has been formulated as a tensor factorization, based on the block term decomposition (BTD) of a data tensor built from Hankel matrices of the observed ECG. However, this tensor factorization technique was precisely assessed only in segments with long R-R intervals and with the AA well defined in the TQ segment, where ventricular activity (VA) is absent. Due to the chaotic nature of AA in AF, segments with disorganized or weak AA and with short R-R intervals are quite more common in persistent AF, posing some difficulties to the BSS methods to extract the AA signal, regarding performance and computational cost. In this paper, the BTD built from Löwner matrices is proposed as a method to separate VA from AA in these challenging scenarios. 
Experimental results obtained in a population of 10 patients show that the Löwner-based BTD outperforms the Hankel-based BTD and two well-known matrix-based methods in terms of atrial signal estimation quality and computational cost.},\n  keywords = {blind source separation;electrocardiography;Hankel matrices;medical disorders;medical signal processing;tensors;atrial fibrillation ECG;cardiac arrhythmia;clinical practice;blind source separation problem;block term decomposition;data tensor;Hankel matrices;tensor factorization technique;R-R intervals;TQ segment;ventricular activity;BSS methods;computational cost;Lowner matrices;Hankel-based BTD;matrix-based methods;atrial signal estimation quality;atrial activity signal;electrocardiogram recordings;noninvasive analysis;Lowner-based tensor decomposition;Lowner-based BTD;Electrocardiography;Tensors;Matrix decomposition;Blind source separation;Estimation;Europe;Block Term Decomposition;Blind Source Separation;Löwner Matrices;Atrial Fibrillation;Electrocardiogram},\n  doi = {10.23919/EUSIPCO.2019.8902493},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The estimation of the atrial activity (AA) signal in electrocardiogram (ECG) recordings is an important step in the noninvasive analysis of atrial fibrillation (AF), the most common sustained cardiac arrhythmia in clinical practice. Recently, this blind source separation (BSS) problem has been formulated as a tensor factorization, based on the block term decomposition (BTD) of a data tensor built from Hankel matrices of the observed ECG. However, this tensor factorization technique was precisely assessed only in segments with long R-R intervals and with the AA well defined in the TQ segment, where ventricular activity (VA) is absent. Due to the chaotic nature of AA in AF, segments with disorganized or weak AA and with short R-R intervals are quite more common in persistent AF, posing some difficulties to the BSS methods to extract the AA signal, regarding performance and computational cost. In this paper, the BTD built from Löwner matrices is proposed as a method to separate VA from AA in these challenging scenarios. Experimental results obtained in a population of 10 patients show that the Löwner-based BTD outperforms the Hankel-based BTD and two well-known matrix-based methods in terms of atrial signal estimation quality and computational cost.\n
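For readers unfamiliar with the construction behind this entry: a Löwner matrix is built from samples of a signal taken on two disjoint point sets, and signals well modeled by low-order rational functions yield low-rank Löwner matrices, which is what the BTD exploits. A minimal sketch (the authors' specific partitioning and tensor stacking are not reproduced):

```python
import numpy as np

def loewner_matrix(f, x, y):
    """Löwner matrix L[i, j] = (f[x_i] - f[y_j]) / (x_i - y_j) built from
    samples f of a signal on two disjoint index sets x and y."""
    f = np.asarray(f, dtype=float)
    x = np.asarray(x)
    y = np.asarray(y)
    return (f[x][:, None] - f[y][None, :]) / (x[:, None] - y[None, :])
```

As a sanity check on the low-rank intuition: an affine signal f[n] = a·n + b produces a constant (hence rank-1) Löwner matrix with every entry equal to a.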
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Jammer detection in M-QAM-OFDM by learning a Dynamic Bayesian Model for the Cognitive Radio.\n \n \n \n \n\n\n \n Krayani, A.; Farrukh, M.; Baydoun, M.; Marcenaro, L.; Gao, Y.; and S.Regazzoni, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"JammerPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902495,\n  author = {A. Krayani and M. Farrukh and M. Baydoun and L. Marcenaro and Y. Gao and C. S.Regazzoni},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Jammer detection in M-QAM-OFDM by learning a Dynamic Bayesian Model for the Cognitive Radio},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Communication and information field has witnessed recent developments in wireless technologies. Among such emerging technologies, the Internet of Things (IoT) is gaining a lot of popularity and attention in almost every field. IoT devices have to be equipped with cognitive capabilities to enhance spectrum utilization by sensing and learning the surrounding environment. IoT network is susceptible to the various jamming attacks which interrupt users communication. In this paper, two systems (Single and Bank-Parallel) have been proposed to implement a Dynamic Bayesian Network (DBN) Model to detect jammer in Orthogonal Frequency Division Multiplexing (OFDM) sub-carriers modulated with different M-QAM. 
The comparison of the two systems has been evaluated by simulation results after analyzing the effect of self-organizing map's (SOM) size on the performance of the proposed systems in relation to M-QAM modulation.},\n  keywords = {belief networks;cognitive radio;jamming;learning (artificial intelligence);OFDM modulation;quadrature amplitude modulation;self-organising feature maps;telecommunication computing;jammer detection;Dynamic Bayesian Model;cognitive radio;information field;wireless technologies;IoT devices;cognitive capabilities;spectrum utilization;IoT network;jamming attacks;interrupt users communication;Bank-Parallel;Dynamic Bayesian Network Model;Orthogonal Frequency Division Multiplexing sub-carriers;OFDM;different M-QAM;M-QAM modulation;Jamming;OFDM;Quadrature amplitude modulation;Bayes methods;Analytical models;Frequency modulation;Cognitive Radio;IoT;OFDM;Dynamic Bayesian Network;Kalman Filter;Particle Filter},\n  doi = {10.23919/EUSIPCO.2019.8902495},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533779.pdf},\n}\n\n
\n
\n\n\n
\n The communication and information field has witnessed recent developments in wireless technologies. Among such emerging technologies, the Internet of Things (IoT) is gaining popularity and attention in almost every field. IoT devices have to be equipped with cognitive capabilities to enhance spectrum utilization by sensing and learning the surrounding environment. IoT networks are susceptible to various jamming attacks which interrupt users' communication. In this paper, two systems (Single and Bank-Parallel) are proposed to implement a Dynamic Bayesian Network (DBN) model to detect jammers in Orthogonal Frequency Division Multiplexing (OFDM) sub-carriers modulated with different M-QAM constellations. The two systems are compared by simulation after analyzing the effect of the self-organizing map (SOM) size on the performance of the proposed systems in relation to M-QAM modulation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Data Augmentation Approach for Sampling Gaussian Models in High Dimension.\n \n \n \n \n\n\n \n Marnissi, Y.; Abboud, D.; Chouzenoux, E.; Pesquet, J. -C.; El-Badaoui, M.; and Benazza-Benyahia, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902496,\n  author = {Y. Marnissi and D. Abboud and E. Chouzenoux and J. -C. Pesquet and M. El-Badaoui and A. Benazza-Benyahia},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Data Augmentation Approach for Sampling Gaussian Models in High Dimension},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Recently, methods based on Data Augmentation (DA) strategies have shown their efficiency for dealing with high dimensional Gaussian sampling within Gibbs samplers compared to iterative-based sampling (e.g., Perturbation-optimization). However, they are limited by the feasibility of the direct sampling of the auxiliary variable. This paper reviews DA sampling algorithms for Gaussian sampling and proposes a DA method which is especially useful when direct sampling of the auxiliary variable is not straightforward from a computational viewpoint. Experiments in two vibration analysis applications show the good performance of the proposed algorithm.},\n  keywords = {Gaussian processes;iterative methods;regression analysis;sampling methods;auxiliary variable;DA method;DA sampling algorithms;direct sampling;Gibbs samplers;high dimensional Gaussian sampling;data augmentation strategies;sampling Gaussian models;data augmentation approach;Covariance matrices;Signal processing algorithms;Vibrations;Optimization;Correlation;Inverse problems;Europe;Data augmentation;Auxiliary variables;MCMC;Gaussian;Correlation;Bayesian.},\n  doi = {10.23919/EUSIPCO.2019.8902496},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533539.pdf},\n}\n\n
\n
\n\n\n
\n Recently, methods based on Data Augmentation (DA) strategies have shown their efficiency for dealing with high dimensional Gaussian sampling within Gibbs samplers compared to iterative-based sampling (e.g., Perturbation-optimization). However, they are limited by the feasibility of the direct sampling of the auxiliary variable. This paper reviews DA sampling algorithms for Gaussian sampling and proposes a DA method which is especially useful when direct sampling of the auxiliary variable is not straightforward from a computational viewpoint. Experiments in two vibration analysis applications show the good performance of the proposed algorithm.\n
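For context on the problem this entry addresses (this is the baseline, not the paper's DA algorithm): drawing from N(mu, Q⁻¹) given only the precision matrix Q is done directly via a Cholesky factorization, which becomes the bottleneck in high dimension and motivates the Gibbs/DA alternatives the paper reviews:

```python
import numpy as np

def sample_gaussian_precision(mu, Q, n_samples, rng):
    """Draw samples from N(mu, Q^{-1}) given the precision matrix Q.
    With Q = L L^T and z ~ N(0, I), x = mu + L^{-T} z has covariance
    L^{-T} L^{-1} = (L L^T)^{-1} = Q^{-1}."""
    L = np.linalg.cholesky(Q)
    z = rng.standard_normal((len(mu), n_samples))
    return (mu[:, None] + np.linalg.solve(L.T, z)).T
```

DA strategies such as those reviewed here avoid this O(n³) factorization by introducing an auxiliary variable that decouples the troublesome terms of Q.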
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Complexity-Reduced Suboptimal Equalization with Monte Carlo Based MIMO Detectors.\n \n \n \n \n\n\n \n Fernandes, G. C. G.; and Bruno, M. G. S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Complexity-ReducedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902499,\n  author = {G. C. G. Fernandes and M. G. S. Bruno},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Complexity-Reduced Suboptimal Equalization with Monte Carlo Based MIMO Detectors},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Optimal detection in multiple-input multiple-output (MIMO) frequency-selective systems is known to have exponential complexity in the number of transmitter antennas and channel length resulting from intersymbol interference. Several studies focus on suboptimal detectors, proposing trade-offs between computational complexity and bit error rate. In this paper, we model the detection problem using factor graphs and apply the sum-product algorithm to derive the optimal detector. Then we propose a novel suboptimal particle filter detector, based on sequential Monte Carlo, followed by a Markov chain Monte Carlo step to further enhance performance. The proposed algorithm exchanges the exponential complexity in channel length for a linear complexity in the number of particles and achieves better bit error rate than the linear minimum mean square error (LMMSE) detector.},\n  keywords = {computational complexity;equalisers;error statistics;graph theory;intersymbol interference;Markov processes;MIMO communication;Monte Carlo methods;particle filtering (numerical methods);signal detection;sequential Monte Carlo;Markov chain Monte Carlo step;exponential complexity;channel length;linear complexity;bit error rate;square error detector;complexity-reduced suboptimal equalization;MIMO detectors;optimal detection;multiple-input multiple-output frequency-selective systems;transmitter antennas;intersymbol interference;computational complexity;detection problem;factor graphs;sum-product algorithm;optimal detector;novel suboptimal particle filter detector;Monte Carlo based MIMO detectors;suboptimal particle filter detector;linear minimum mean square error detector;LMMSE detector;Detectors;Monte 
Carlo methods;Mathematical model;MIMO communication;Transmitting antennas;Complexity theory;Bit error rate;Equalization;MIMO detection;particle filter;Markov chain Monte Carlo;factor graphs},\n  doi = {10.23919/EUSIPCO.2019.8902499},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532906.pdf},\n}\n\n
\n
\n\n\n
\n Optimal detection in multiple-input multiple-output (MIMO) frequency-selective systems is known to have exponential complexity in the number of transmitter antennas and channel length resulting from intersymbol interference. Several studies focus on suboptimal detectors, proposing trade-offs between computational complexity and bit error rate. In this paper, we model the detection problem using factor graphs and apply the sum-product algorithm to derive the optimal detector. Then we propose a novel suboptimal particle filter detector, based on sequential Monte Carlo, followed by a Markov chain Monte Carlo step to further enhance performance. The proposed algorithm exchanges the exponential complexity in channel length for a linear complexity in the number of particles and achieves better bit error rate than the linear minimum mean square error (LMMSE) detector.\n
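The LMMSE baseline that the proposed particle-filter detector is measured against can be sketched compactly. This is a generic flat-fading formulation for illustration only; the paper's frequency-selective setting adds intersymbol interference and a larger effective channel matrix:

```python
import numpy as np

def lmmse_detect(y, H, sigma2, constellation):
    """LMMSE MIMO equalization followed by hard slicing of each stream
    to the nearest constellation point (unit-power symbols assumed)."""
    Nt = H.shape[1]
    G = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(Nt), H.conj().T)
    x_soft = G @ y                      # linear MMSE estimate
    return np.array([constellation[np.argmin(np.abs(constellation - s))]
                     for s in x_soft])  # per-stream hard decision
```

Its cost is a single Nt×Nt solve, which is why it is the standard low-complexity reference point for bit-error-rate comparisons.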
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n TensMIL2: Improved Multiple Instance Classification Through Tensor Decomposition and Instance Selection.\n \n \n \n \n\n\n \n Papastergiou, T.; Zacharaki, E. I.; and Megalooikonomou, V.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"TensMIL2:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902500,\n  author = {T. Papastergiou and E. I. Zacharaki and V. Megalooikonomou},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {TensMIL2: Improved Multiple Instance Classification Through Tensor Decomposition and Instance Selection},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Multiple instance learning (MIL) has shown great potential in addressing weakly supervised problems in which class labels are provided for sets (bags) of instances. The main challenge in MIL comes from the lack of knowledge on the pertinence of each individual instance in class discrimination. In this paper we propose TensMIL2, a generic unsupervised feature extraction procedure based on non-negative PARAFAC (CP) decomposition, combined with instance selection and MIL classification, that is efficient also for partially observed datasets. Evaluation of our algorithm in standard MIL benchmark datasets showed that TensMIL2 is performing better than state-of-the-art algorithms in most of the cases. 
Moreover, the comparison of the proposed feature representation via CP decomposition to the previously used features, showed an increase in performance in most of the cases, in both full and partially observed (90% missing values) datasets.},\n  keywords = {feature extraction;learning (artificial intelligence);matrix decomposition;pattern classification;multiple instance learning;class labels;class discrimination;TensMIL2;nonnegative PARAFAC decomposition;instance selection;MIL classification;partially observed datasets;MIL benchmark datasets;CP decomposition;multiple instance classification;tensor decomposition;unsupervised feature extraction procedure;Tensors;Feature extraction;Classification algorithms;Matrix decomposition;Training;Predictive models;Image classification;multiple instance learning;constrained PARAFAC (CP) tensor decomposition;image classification},\n  doi = {10.23919/EUSIPCO.2019.8902500},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529361.pdf},\n}\n\n
\n
\n\n\n
\n Multiple instance learning (MIL) has shown great potential in addressing weakly supervised problems in which class labels are provided for sets (bags) of instances. The main challenge in MIL comes from the lack of knowledge on the pertinence of each individual instance in class discrimination. In this paper we propose TensMIL2, a generic unsupervised feature extraction procedure based on non-negative PARAFAC (CP) decomposition, combined with instance selection and MIL classification, that is efficient also for partially observed datasets. Evaluation of our algorithm on standard MIL benchmark datasets showed that TensMIL2 performs better than state-of-the-art algorithms in most cases. Moreover, comparing the proposed CP-decomposition feature representation with the previously used features showed a performance increase in most cases, on both full and partially observed (90% missing values) datasets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Defining Graph Signal Distances Using an Optimal Mass Transport Framework.\n \n \n \n \n\n\n \n Juhlin, M.; Elvander, F.; and Jakobsson, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DefiningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902502,\n  author = {M. Juhlin and F. Elvander and A. Jakobsson},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Defining Graph Signal Distances Using an Optimal Mass Transport Framework},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this work, we propose a novel measure of distance for quantifying dissimilarities between signals observed on a graph. Building on a recently introduced optimal mass transport framework, the distance measure is formed using the second-order statistics of the graph signals, allowing for comparison of graph processes without direct access to the signals themselves, while explicitly taking the dynamics of the underlying graph into account. The behavior of the proposed distance notion is illustrated in a graph signal classification scenario, indicating attractive modeling properties, as compared to the standard Euclidean metric.},\n  keywords = {graph theory;higher order statistics;signal classification;graph signal distances;optimal mass transport framework;distance measure;second-order statistics;graph processes;graph signal classification scenario;standard Euclidean metric;Covariance matrices;Spectral analysis;Euclidean distance;Earth;Temperature measurement;Europe;Graph Signal Processing;Optimal mass transport;Graph signal similarity},\n  doi = {10.23919/EUSIPCO.2019.8902502},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533540.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we propose a novel measure of distance for quantifying dissimilarities between signals observed on a graph. Building on a recently introduced optimal mass transport framework, the distance measure is formed using the second-order statistics of the graph signals, allowing for comparison of graph processes without direct access to the signals themselves, while explicitly taking the dynamics of the underlying graph into account. The behavior of the proposed distance notion is illustrated in a graph signal classification scenario, indicating attractive modeling properties, as compared to the standard Euclidean metric.\n
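To give a feel for the transport notion of distance underlying this entry (in its simplest scalar form, not the graph-covariance formulation the paper develops): on a 1-D grid, the Wasserstein-1 distance between two histograms is the area between their cumulative distributions, so it measures how far mass must move rather than pointwise differences:

```python
import numpy as np

def wasserstein_1d(p, q):
    """W1 distance between two normalized histograms p, q on a common
    unit-spaced 1-D grid: the sum of |CDF_p - CDF_q| over the grid."""
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())
```

Unlike the Euclidean metric, which treats non-overlapping spikes as equally far apart regardless of location, W1 grows with the displacement of the mass, which is what makes transport distances attractive for comparing signals and spectra.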
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Investigating Time-Varying Brain Connectivity with Functional Magnetic Resonance Imaging using Sequential Monte Carlo.\n \n \n \n \n\n\n \n Ambrosi, P.; Costagli, M.; Kuruoğlu, E. E.; Biagi, L.; Buonincontri, G.; and Tosetti, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"InvestigatingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902503,\n  author = {P. Ambrosi and M. Costagli and E. E. Kuruoğlu and L. Biagi and G. Buonincontri and M. Tosetti},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Investigating Time-Varying Brain Connectivity with Functional Magnetic Resonance Imaging using Sequential Monte Carlo},\n  year = {2019},\n  pages = {1-5},\n  abstract = {There is a rising interest in studying the degree of connection and the causal relationships between brain regions, as a growing body of evidence suggests that features of these interactions could play a role as markers in a host of neurological diseases. The vast majority of brain connectivity studies treats the brain network as stationary. New insights on the temporal behaviour of these connections could significantly improve our understanding of brain networking in both physiology and pathology. In this paper, we propose the application of a computational methodology, named Particle Filter (PF), to functional Magnetic Resonance Imaging (fMRI) data. The PF algorithm aims to estimate time-varying hidden variables of a given observational model through a Sequential Monte Carlo approach. The fMRI data are represented as a first-order linear time-varying Vector Autoregression model (VAR). On simulated time series, the PF approach effectively detected and enabled to follow time-varying hidden parameters and it captured causal relationships among signals. The method was also applied to real fMRI data and provided similar results to those obtained by using a different proxy measure of causal dependency, that is, correlation between delayed time series. 
Interestingly, the PF approach also enabled to detect statistically significant changes in the cause-effect relationships between areas, which correlated with the underlying stimulation pattern delivered to subjects during the fMRI acquisition.},\n  keywords = {autoregressive processes;biomedical MRI;brain;diseases;medical image processing;medical signal processing;Monte Carlo methods;neurophysiology;particle filtering (numerical methods);time series;time-varying brain connectivity;functional magnetic resonance imaging;causal relationships;neurological diseases;brain connectivity studies;brain network;temporal behaviour;brain networking;physiology;pathology;particle filter algorithm;Magnetic Resonance Imaging data;time-varying hidden variables;sequential Monte Carlo approach;simulated time series;causal dependency;delayed time series;fMRI acquisition;first-order linear time-varying vector autoregression model;Functional magnetic resonance imaging;Brain modeling;Mathematical model;Correlation;Monte Carlo methods;Signal processing algorithms;Data models;brain connectivity;fMRI;sequential Monte Carlo;Particle Filtering;VAR model},\n  doi = {10.23919/EUSIPCO.2019.8902503},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533580.pdf},\n}\n\n
\n
\n\n\n
\n There is a rising interest in studying the degree of connection and the causal relationships between brain regions, as a growing body of evidence suggests that features of these interactions could play a role as markers in a host of neurological diseases. The vast majority of brain connectivity studies treat the brain network as stationary. New insights on the temporal behaviour of these connections could significantly improve our understanding of brain networking in both physiology and pathology. In this paper, we propose the application of a computational methodology, named Particle Filter (PF), to functional Magnetic Resonance Imaging (fMRI) data. The PF algorithm aims to estimate time-varying hidden variables of a given observational model through a Sequential Monte Carlo approach. The fMRI data are represented as a first-order linear time-varying Vector Autoregression model (VAR). On simulated time series, the PF approach effectively detected and tracked time-varying hidden parameters and captured causal relationships among signals. The method was also applied to real fMRI data and provided similar results to those obtained by using a different proxy measure of causal dependency, that is, correlation between delayed time series. Interestingly, the PF approach also enabled the detection of statistically significant changes in the cause-effect relationships between areas, which correlated with the underlying stimulation pattern delivered to subjects during the fMRI acquisition.\n
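The estimation machinery behind this entry can be illustrated on a scalar stand-in for the time-varying VAR model: a bootstrap particle filter tracking the hidden coefficient a_t of y_t = a_t·y_{t−1} + v_t under a random-walk state model. All parameter values below are illustrative, not the paper's:

```python
import numpy as np

def pf_track_ar1(y, q=0.005, r=0.1, n_particles=500, seed=0):
    """Bootstrap particle filter for the hidden coefficient a_t in
    y_t = a_t * y_{t-1} + v_t, v_t ~ N(0, r), with a_t modeled as a
    random walk of variance q. Returns the posterior-mean trajectory."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(-1.0, 1.0, n_particles)                    # initial particles
    est = np.zeros(len(y))
    for t in range(1, len(y)):
        a = a + np.sqrt(q) * rng.standard_normal(n_particles)  # propagate state
        w = np.exp(-0.5 * (y[t] - a * y[t - 1]) ** 2 / r) + 1e-300
        w /= w.sum()                                           # normalize weights
        est[t] = w @ a                                         # posterior mean
        a = a[rng.choice(n_particles, n_particles, p=w)]       # resample
    return est
```

The full time-varying VAR of the paper replaces the scalar state with a matrix of coefficients, but the propagate/weight/resample cycle is the same.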
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Speech Reconstruction Algorithm via Iteratively Reweighted ℓ2 Minimization for MFCC Codec.\n \n \n \n \n\n\n \n Min, G.; Zhang, X.; Liu, X.; Zhang, C.; and Chen, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902504,\n  author = {G. Min and X. Zhang and X. Liu and C. Zhang and Y. Chen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Speech Reconstruction Algorithm via Iteratively Reweighted ℓ2 Minimization for MFCC Codec},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper presents an effective method to address the inverse problem of Mel-frequency cepstral analysis, and describes how to reconstruct the speech waveforms from Melfrequency cepstral coefficients (MFCCs) directly. To exploit the sparse characteristics of speech in the frequency domain, an iteratively reweighted ℓ2 minimization method is proposed to cope with the under-determined nature of the reconstruction problem. The lost phase information during Mel-frequency cepstral analysis procedure is recovered by the inverse shorttime Fourier transform magnitude algorithm. Experiments are conducted over the TIMIT database and evaluated by several different kinds of measures. Experimental results demonstrate that the proposed method recovers speech with high articulation and intelligibility. Specifically, it sounds very close to the original speech when using the high-resolution MFCCs, the average STOI, PESQ score reaches 93% and 4.0, respectively. 
This method could be easily used for MFCC codec at low bit rate.},\n  keywords = {cepstral analysis;feature extraction;Fourier transforms;iterative methods;minimisation;speech intelligibility;lost phase information;Mel-frequency cepstral analysis procedure;TIMIT database;speech intelligibility;speech reconstruction algorithm;inverse problem;speech waveforms;Melfrequency cepstral coefficients;sparse characteristics;frequency domain;iteratively reweighted ℓ2 minimization method;high-resolution MFCC codec;inverse shorttime Fourier transform magnitude algorithm;Signal processing algorithms;Speech processing;Mel frequency cepstral coefficient;Speech coding;Minimization;Speech reconstruction;MFCCs;Iteratively reweighted l2 minimization},\n  doi = {10.23919/EUSIPCO.2019.8902504},\n  issn = {2076-1465},\n  url={https://www.eurasip.org/Proceedings/Eusipco/eusipco2019/Proceedings/papers/1570533524.pdf},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n This paper presents an effective method to address the inverse problem of Mel-frequency cepstral analysis, and describes how to reconstruct speech waveforms directly from Mel-frequency cepstral coefficients (MFCCs). To exploit the sparse characteristics of speech in the frequency domain, an iteratively reweighted ℓ2 minimization method is proposed to cope with the under-determined nature of the reconstruction problem. The phase information lost during the Mel-frequency cepstral analysis procedure is recovered by the inverse short-time Fourier transform magnitude algorithm. Experiments are conducted on the TIMIT database and evaluated with several different kinds of measures. Experimental results demonstrate that the proposed method recovers speech with high articulation and intelligibility. Specifically, the reconstruction sounds very close to the original speech when using high-resolution MFCCs: the average STOI and PESQ scores reach 93% and 4.0, respectively. This method could easily be used for MFCC codecs at low bit rates.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Blind Reconstruction Algorithm for Level-Crossing Analog-to-Digital Conversion.\n \n \n \n \n\n\n \n Souloumiac, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902505,\n  author = {A. Souloumiac},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Blind Reconstruction Algorithm for Level-Crossing Analog-to-Digital Conversion},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Many techniques have been developed for spectral analysis and reconstruction of an analog signal based on a nonuniform set of samples. But, to the best of our knowledge, none is specifically adapted to Level-Crossing Analog-to-Digital Converters (LC-ADC). We propose in this article a reconstruction algorithm that takes advantage of the intrinsic quantification of the LC-ADC samples amplitudes to dramatically minimize the stability requirements of the analog levels and improve the converter global accuracy and resolution. We show in particular that spectral analysis is possible even if the levels amplitudes are unknown: they can be blindly estimated, jointly with the signal spectrum, up to harmless indeterminations of scale and offset. Simulations on synthetic signals demonstrate that the proposed algorithm outperforms the existing techniques.},\n  keywords = {analogue-digital conversion;blind source separation;signal reconstruction;signal resolution;signal sampling;spectral analysis;blind reconstruction algorithm;Level-Crossing Analog-to-Digital conversion;spectral analysis;analog signal;Level-Crossing Analog-to-Digital Converters;LC-ADC samples;signal spectrum;Standards;Spectral analysis;Analog-digital conversion;Reconstruction algorithms;Nonuniform sampling;Clocks;Europe;Level-Crossing Analog-to-Digital Conversion;Spectral Analysis;Nonuniform Sampling;Reconstruction;Categorical and Mixed Data;Subspaces Intersection;Principal Angles;Singular Value Decomposition},\n  doi = {10.23919/EUSIPCO.2019.8902505},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533738.pdf},\n}\n\n
\n
\n\n\n
\n Many techniques have been developed for spectral analysis and reconstruction of an analog signal based on a nonuniform set of samples. However, to the best of our knowledge, none is specifically adapted to Level-Crossing Analog-to-Digital Converters (LC-ADCs). We propose in this article a reconstruction algorithm that takes advantage of the intrinsic quantization of the LC-ADC sample amplitudes to dramatically relax the stability requirements on the analog levels and improve the converter's global accuracy and resolution. We show in particular that spectral analysis is possible even if the level amplitudes are unknown: they can be blindly estimated, jointly with the signal spectrum, up to harmless indeterminacies of scale and offset. Simulations on synthetic signals demonstrate that the proposed algorithm outperforms existing techniques.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Alcoholic EEG Analysis Using Riemann Geometry Based Framework.\n \n \n \n \n\n\n \n Gopan K., G.; Sinha, N.; and Jayagopi, D. B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AlcoholicPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902506,\n  author = {G. {Gopan K.} and N. Sinha and D. B. Jayagopi},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Alcoholic EEG Analysis Using Riemann Geometry Based Framework},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Brain functioning is severely affected in long-term alcoholics. This degradation is reflected in Electroencephalographic signals(EEG) which are electrical signals in the brain generated due to the firing of neurons. These signals can be used to understand the changes in the brain of an alcoholic. In this work, Riemann geometry based classification framework is used to study changes in interdependencies across various brain regions in alcoholics. Publicly available data of 50 subjects(25 alcoholics, 25 control) with 10 trials each are used in this work. Spatial covariance matrices for empirically chosen channels are input to two classification scenarios. In the first scenario, covariance matrices are used as features to ”Minimum Distance to Mean classifier with geodesic filtering(fgMDM)” on the manifold. The highest mean accuracy obtained is 82.8% for the channel set of AF2 & P6. In the second scenario, the covariance matrices are mapped to tangent space and the resultant tangent vectors are used as features for Support Vector Machine with Radial Basis Function kernel. In this scenario, the highest mean accuracy obtained is 87.6% for the channel set FP1 & PO1. Both scenarios indicate significant changes across frontal lobe in comparison to the posterior lobes of the brain, in alcoholics. Changes in covariance matrices for the EEG, when the same stimulus is provided, indicate changes in brain functioning, consistent with alcoholism. 
Hence, Riemann geometry is a promising framework to study changes in brain region inter-dependencies, for subjects exposed to different brain-altering situations.},\n  keywords = {bioelectric potentials;covariance matrices;electroencephalography;medical signal processing;neurophysiology;radial basis function networks;signal classification;support vector machines;radial basis function kernel;brain region inter-dependencies;alcoholic EEG analysis;Electroencephalographic signals;electrical signals;Riemann geometry based classification framework;spatial covariance matrices;resultant tangent vectors;support vector machine;minimum distance to mean classifier with geodesic filtering;brain-altering situations;Electroencephalographic Signals;Alcoholic;Riemann Geometry;Tangent Space},\n  doi = {10.23919/EUSIPCO.2019.8902506},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533450.pdf},\n}\n\n
\n
\n\n\n
\n Brain functioning is severely affected in long-term alcoholics. This degradation is reflected in electroencephalographic (EEG) signals, which are electrical signals generated in the brain by the firing of neurons. These signals can be used to understand the changes in the brain of an alcoholic. In this work, a Riemann-geometry-based classification framework is used to study changes in interdependencies across various brain regions in alcoholics. Publicly available data from 50 subjects (25 alcoholics, 25 controls), with 10 trials each, are used in this work. Spatial covariance matrices for empirically chosen channels are input to two classification scenarios. In the first scenario, covariance matrices are used as features for the “Minimum Distance to Mean classifier with geodesic filtering (fgMDM)” on the manifold. The highest mean accuracy obtained is 82.8% for the channel set of AF2 & P6. In the second scenario, the covariance matrices are mapped to tangent space and the resultant tangent vectors are used as features for a Support Vector Machine with a Radial Basis Function kernel. In this scenario, the highest mean accuracy obtained is 87.6% for the channel set FP1 & PO1. Both scenarios indicate significant changes across the frontal lobe, in comparison to the posterior lobes of the brain, in alcoholics. Changes in the EEG covariance matrices, when the same stimulus is provided, indicate changes in brain functioning consistent with alcoholism. Hence, Riemann geometry is a promising framework for studying changes in brain-region interdependencies in subjects exposed to different brain-altering situations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Path Loss at 5 GHz and 31 GHz for Two Distinct Indoor Airport Settings.\n \n \n \n \n\n\n \n Matolak, D. W.; Mohsen, M.; and Chen, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"PathPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902509,\n  author = {D. W. Matolak and M. Mohsen and J. Chen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Path Loss at 5 GHz and 31 GHz for Two Distinct Indoor Airport Settings},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Vehicular and indoor communications continue to grow. This pertains to applications in aviation as well, which is also experiencing rapid growth. Since a pre-requisite to reliable communication link design is accurate knowledge of the wireless channel, research on channel models for these environments is an active area of study, and this is the topic of this paper. In this work, we report on measurement and model results for propagation path loss in two indoor airport environments, in two frequency bands. The first environment is a typical small terminal building, with characteristics similar to indoor offices, and the second is a more unusual aircraft maintenance hangar. The hangar is a crowded environment with multiple aircraft and metallic objects. Our results are for both the 30 GHz band (specifically 31 GHz), which is being investigated for future 5th generation cellular and other applications, and for the 5 GHz band, for a comparison. Our results show that the airport terminal building exhibits path loss characteristics very similar to those of an indoor office environment, at both frequencies, and this is largely as expected. In contrast, the maintenance hangar path loss is less than that of non-line of sight terminal building regions, somewhat unexpectedly. 
We attribute this to the highly reflective hangar environment, which serves to compensate for reduced diffraction and increased blockage losses at 31 GHz.},\n  keywords = {aircraft maintenance;airports;cellular radio;indoor radio;microwave propagation;statistical analysis;wireless channels;reliable communication link design;wireless channel;channel models;propagation path loss;indoor airport environments;frequency bands;typical small terminal building;indoor offices;unusual aircraft maintenance hangar;crowded environment;multiple aircraft;airport terminal building exhibits path loss characteristics;indoor office environment;maintenance hangar path loss;sight terminal building regions;highly reflective hangar environment;blockage losses;distinct indoor airport settings;vehicular communications;indoor communications;pre-requisite;frequency 5.0 GHz;frequency 30.0 GHz;frequency 31.0 GHz;Airports;Buildings;Loss measurement;Maintenance engineering;Transmitters;Antenna measurements;Aircraft;propagation;wireless channel;millimeter wave},\n  doi = {10.23919/EUSIPCO.2019.8902509},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533814.pdf},\n}\n\n
\n
\n\n\n
\n Vehicular and indoor communications continue to grow. This pertains to applications in aviation as well, which is also experiencing rapid growth. Since a prerequisite to reliable communication link design is accurate knowledge of the wireless channel, research on channel models for these environments is an active area of study, and this is the topic of this paper. In this work, we report measurement and model results for propagation path loss in two indoor airport environments, in two frequency bands. The first environment is a typical small terminal building, with characteristics similar to indoor offices, and the second is a more unusual aircraft maintenance hangar. The hangar is a crowded environment with multiple aircraft and metallic objects. Our results are for both the 30 GHz band (specifically 31 GHz), which is being investigated for future 5th-generation cellular and other applications, and, for comparison, the 5 GHz band. Our results show that the airport terminal building exhibits path loss characteristics very similar to those of an indoor office environment at both frequencies, largely as expected. In contrast, and somewhat unexpectedly, the maintenance hangar path loss is less than that of non-line-of-sight terminal building regions. We attribute this to the highly reflective hangar environment, which serves to compensate for reduced diffraction and increased blockage losses at 31 GHz.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Adversarial Super-Resolution Remedy for Radar Design Trade-offs.\n \n \n \n \n\n\n \n Armanious, K.; Abdulatif, S.; Aziz, F.; Schneider, U.; and Yang, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902510,\n  author = {K. Armanious and S. Abdulatif and F. Aziz and U. Schneider and B. Yang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {An Adversarial Super-Resolution Remedy for Radar Design Trade-offs},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Radar is of vital importance in many fields, such as autonomous driving, safety and surveillance applications. However, it suffers from stringent constraints on its design parametrization leading to multiple trade-offs. For example, the bandwidth in FMCW radars is inversely proportional with both the maximum unambiguous range and range resolution. In this work, we introduce a new method for circumventing radar design trade-offs. We propose the use of recent advances in computer vision, more specifically generative adversarial networks (GANs), to enhance low-resolution radar acquisitions into higher resolution counterparts while maintaining the advantages of the low-resolution parametrization. 
The capability of the proposed method was evaluated on the velocity resolution and range-azimuth trade-offs in micro-Doppler signatures and FMCW uniform linear array (ULA) radars, respectively.},\n  keywords = {computer vision;CW radar;direction-of-arrival estimation;Doppler radar;FM radar;radar imaging;radar resolution;safety;surveillance applications;design parametrization;multiple trade-offs;FMCW radars;maximum unambiguous range;range resolution;radar design trade-offs;generative adversarial networks;low-resolution radar acquisitions;low-resolution parametrization;velocity resolution;FMCW uniform linear array radars;adversarial super-resolution remedy;range-azimuth trade-offs;computer vision;Radar imaging;Legged locomotion;Generators;Generative adversarial networks;Sensors;Radar;Super-resolution;Micro-Doppler;MIMO;Range-azimuth;Convolutional neural network;CNN;Generative adversarial networks;GAN;Remote sensing},\n  doi = {10.23919/EUSIPCO.2019.8902510},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533098.pdf},\n}\n\n
\n
\n\n\n
\n Radar is of vital importance in many fields, such as autonomous driving, safety and surveillance applications. However, it suffers from stringent constraints on its design parametrization, leading to multiple trade-offs. For example, the bandwidth in FMCW radars is inversely proportional to both the maximum unambiguous range and the range resolution. In this work, we introduce a new method for circumventing radar design trade-offs. We propose the use of recent advances in computer vision, more specifically generative adversarial networks (GANs), to enhance low-resolution radar acquisitions into higher-resolution counterparts while maintaining the advantages of the low-resolution parametrization. The capability of the proposed method was evaluated on the velocity-resolution and range-azimuth trade-offs in micro-Doppler signatures and FMCW uniform linear array (ULA) radars, respectively.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Measure-Transformed Gaussian Quasi Score Test in the Presence of Nuisance Parameters.\n \n \n \n \n\n\n \n Todros, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Measure-TransformedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902512,\n  author = {K. Todros},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Measure-Transformed Gaussian Quasi Score Test in the Presence of Nuisance Parameters},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we extend the measure-transformed Gaussian quasi score test (MT-GQST) for the case where nuisance parameters are present. The proposed extension is based on a zero-expectation property of a partial Gaussian quasi score function under the transformed null distribution. The nuisance parameters are estimated under the null hypothesis via the measure-transformed Gaussian quasi MLE. In the paper, we analyze the effect of the probability measure-transformation on the asymptotic detection performance of the extended MT-GQST. This leads to a data-driven procedure for selection of the generating function of the considered transform, called MT-function, which, in practice, weights the data points. Furthermore, we provide conditions on the MT-function to ensure stability of the asymptotic false-alarm-rate in the presence of noisy outliers. The extended MT-GQST is applied for testing a vector parameter of interest comprising a noisy multivariate linear data model in the presence of nuisance parameters. Simulation study illustrates its advantages over other robust detectors.},\n  keywords = {probability;measure-transformed Gaussian quasiscore test;nuisance parameters;partial Gaussian quasiscore function;transformed null distribution;measure-transformed Gaussian quasiMLE;probability measure-transformation;extended MT-GQST;MT-function;noisy multivariate linear data model;Pollution measurement;Covariance matrices;Transforms;Detectors;Testing;Data models;Europe},\n  doi = {10.23919/EUSIPCO.2019.8902512},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532493.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we extend the measure-transformed Gaussian quasi score test (MT-GQST) to the case where nuisance parameters are present. The proposed extension is based on a zero-expectation property of a partial Gaussian quasi score function under the transformed null distribution. The nuisance parameters are estimated under the null hypothesis via the measure-transformed Gaussian quasi MLE. In the paper, we analyze the effect of the probability measure transformation on the asymptotic detection performance of the extended MT-GQST. This leads to a data-driven procedure for selecting the generating function of the considered transform, called the MT-function, which, in practice, weights the data points. Furthermore, we provide conditions on the MT-function to ensure stability of the asymptotic false-alarm rate in the presence of noisy outliers. The extended MT-GQST is applied to testing a vector parameter of interest in a noisy multivariate linear data model in the presence of nuisance parameters. A simulation study illustrates its advantages over other robust detectors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Joint User Grouping and Power Allocation for MISO Systems: Learning to Schedule.\n \n \n \n \n\n\n \n Yuan, Y.; Vu, T. X.; Lei, L.; Chatzinotas, S.; and Ottersten, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902514,\n  author = {Y. Yuan and T. X. Vu and L. Lei and S. Chatzinotas and B. Ottersten},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Joint User Grouping and Power Allocation for MISO Systems: Learning to Schedule},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we address ajoint user scheduling and power allocation problem from a machine-learning perspective in order to efficiently minimize data delivery time for multiple-input single-output (MISO) systems. The joint optimization problem is formulated as a mixed-integer and non-linear programming problem, such that the data requests can be delivered by minimum delay, and the power consumption can meet practical requirements. For solving the problem to the global optimum, we provide a solution to decouple the scheduling and power optimization. Due to the problem's inherent hardness, the optimal solution requires exponential complexity and time in computations. To enable an efficient and competitive solution, we propose a learning-based approach to reduce data delivery time and solution's computational delay, where a deep neural network is trained to learn and decide how to optimize user scheduling. In numerical study, the developed optimal solution can be used for performance benchmarking and generating training data for the proposed learning approach. 
The results demonstrate the developed learning based approach is able to significantly improve the computation efficiency while achieves a near optimal performance.},\n  keywords = {integer programming;learning (artificial intelligence);neural nets;nonlinear programming;scheduling;joint user grouping;MISO systems;user scheduling;power allocation problem;machine-learning perspective;data delivery time;multiple-input single-output systems;joint optimization problem;mixed-integer;nonlinear programming problem;data requests;power consumption;power optimization;inherent hardness;exponential complexity;learning-based approach;computational delay;developed optimal solution;performance benchmarking;training data;learning based approach;computation efficiency;Resource management;Processor scheduling;Power control;Optimal scheduling;MISO communication;Interference;Time minimization;machine learning;power allocation;user scheduling.},\n  doi = {10.23919/EUSIPCO.2019.8902514},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533780.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we address a joint user scheduling and power allocation problem from a machine-learning perspective in order to efficiently minimize data delivery time for multiple-input single-output (MISO) systems. The joint optimization problem is formulated as a mixed-integer, non-linear programming problem, such that the data requests can be delivered with minimum delay and the power consumption can meet practical requirements. For solving the problem to the global optimum, we provide a solution that decouples the scheduling and power optimization. Due to the problem's inherent hardness, the optimal solution requires exponential complexity and computation time. To enable an efficient and competitive solution, we propose a learning-based approach to reduce both the data delivery time and the solution's computational delay, in which a deep neural network is trained to learn and decide how to optimize user scheduling. In a numerical study, the developed optimal solution can be used for performance benchmarking and for generating training data for the proposed learning approach. The results demonstrate that the developed learning-based approach significantly improves computational efficiency while achieving near-optimal performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Identification of Vector Autoregressive Models with Granger and Stability Constraints.\n \n \n \n \n\n\n \n Dumitrescu, B.; Giurcăneanu, C. D.; and Ding, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"IdentificationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902516,\n  author = {B. Dumitrescu and C. D. Giurcăneanu and Y. Ding},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Identification of Vector Autoregressive Models with Granger and Stability Constraints},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this work, we introduce an iterative method for the estimation of vector autoregressive (VAR) models with Granger and stability constraints. When the order of the model (p) and the Granger sparsity pattern (GSP) are not known, the newly proposed method is integrated in a two-stage approach. An information theoretic (IT) criterion is used in the first stage for selecting the value of p. In the second stage, a set of possible candidates for GSP are produced by applying the Wald test, and the best one is chosen with an IT criterion. In experiments with synthetic data, we demonstrate that our method yields more accurate forecasts than the state-of-art algorithm that is based on convex optimization and fits models which are guaranteed to be stable.},\n  keywords = {autoregressive processes;convex programming;information theory;iterative methods;vectors;vector autoregressive models;iterative method;stability constraints;Granger sparsity pattern;GSP;two-stage approach;information theoretic criterion;VAR models;Wald test;IT criterion;Stability criteria;Estimation;Reactive power;Computational modeling;Convex functions;Numerical stability;Vector autoregressive models;Granger causality;stability;convex optimization;information theoretic criteria},\n  doi = {10.23919/EUSIPCO.2019.8902516},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530664.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we introduce an iterative method for the estimation of vector autoregressive (VAR) models with Granger and stability constraints. When the order of the model (p) and the Granger sparsity pattern (GSP) are not known, the newly proposed method is integrated into a two-stage approach. An information theoretic (IT) criterion is used in the first stage to select the value of p. In the second stage, a set of possible candidates for the GSP is produced by applying the Wald test, and the best one is chosen with an IT criterion. In experiments with synthetic data, we demonstrate that our method yields more accurate forecasts than the state-of-the-art algorithm that is based on convex optimization and fits models which are guaranteed to be stable.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian Robust Signal Subspace Estimation in Non-Gaussian Environment.\n \n \n \n \n\n\n \n Abdallah, R. B.; Breloy, A.; El Korso, M. N.; and Lautru, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902518,
  author = {R. B. Abdallah and A. Breloy and M. N. {El Korso} and D. Lautru},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Bayesian Robust Signal Subspace Estimation in Non-Gaussian Environment},
  year = {2019},
  pages = {1-5},
  abstract = {In this paper, we focus on the problem of low rank signal subspace estimation. Specifically, we derive new subspace estimator using the Bayesian minimum mean square distance formulation. This approach is useful to overcome the issues of low sample support and/or low signal to noise ratio. In order to be robust to various signal distributions, the proposed Bayesian estimator is derived for a model of sources plus outliers, following both a compound Gaussian distribution. In addition, the commonly assumed complex invariant Bingham distribution is used as prior for the subspace basis. Finally, the interest of the proposed approach is illustrated by numerical simulations and with a real data set for a space time adaptive processing (STAP) application.},
  keywords = {Bayes methods;estimation theory;Gaussian distribution;least mean squares methods;space-time adaptive processing;Bayesian robust signal subspace estimation;nonGaussian environment;low rank signal subspace estimation;subspace estimator;square distance formulation;low sample support;signal distributions;compound Gaussian distribution;subspace basis;complex invariant Bingham distribution;Bayesian minimum mean square distance formulation;low signal to noise ratio;numerical simulations;space time adaptive processing application;STAP application;Estimation;Data models;Bayes methods;Signal processing algorithms;Signal to noise ratio;Europe;Subspace estimation;Bayesian estimation;minimum mean square distance;compound Gaussian;complex invariant Bingham distribution;STAP.},
  doi = {10.23919/EUSIPCO.2019.8902518},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533327.pdf},
}
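The signal-plus-outlier model above assumes compound-Gaussian (scale mixture) sources. As a minimal sketch of that model — the inverse-gamma texture and identity speckle covariance are illustrative assumptions, not the paper's exact priors — heavy-tailed compound-Gaussian samples can be drawn as:

```python
import numpy as np

def compound_gaussian_samples(n, dim, rng=None):
    """Draw n samples of a compound-Gaussian vector sqrt(tau) * g:
    tau is a positive texture (here inverse-gamma, giving a t-like,
    heavy-tailed marginal) and g is zero-mean circular complex
    Gaussian speckle with identity covariance."""
    rng = np.random.default_rng(rng)
    # circular complex Gaussian speckle, unit power per component
    g = (rng.standard_normal((n, dim)) + 1j * rng.standard_normal((n, dim))) / np.sqrt(2)
    # one texture value per sample vector makes the marginal heavy-tailed
    tau = 1.0 / rng.gamma(shape=2.0, scale=1.0, size=(n, 1))
    return np.sqrt(tau) * g
```

Setting a constant texture `tau = 1` recovers the plain Gaussian case, which is the comparison that motivates the robust estimator.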
Low-Complexity Optimization for Direction-of-Arrival Estimation via Approximate Message Passing. Zhang, X.; Huo, K.; Zhang, S.; Liu, Y.; Jiang, W.; and Li, X. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902519,
  author = {X. Zhang and K. Huo and S. Zhang and Y. Liu and W. Jiang and X. Li},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Low-Complexity optimization for Direction-of-Arrival Estimation via Approximate Message Passing},
  year = {2019},
  pages = {1-5},
  abstract = {Sparsity-inducing techniques have been introduced into direction of arrival (DOA) estimation and achieved a great success in performance. However the computational complexity of the conventional sparsity-inducing techniques is prohibitively high and thus prevents such methods from application. In this paper, we propose a low-complexity DOA estimation algorithm based on approximate message passing (AMP). Derived from the loopy belief propagation, AMP is a fast algorithm to obtain the posterior distribution of the signal. The proposed algorithm combines the AMP with expectation maximization (EM) technique to adaptively learn the hyper-parameters in the Gaussian priori of the signal. Closed-form update rule of signal prior variance is derived using fix-point method, an estimator of sources number and an empirical update rule for noise variance are also derived. Compared with the state-of-the-art algorithms, the proposed algorithm reduces the computational complexity by several orders of magnitude, while obtaining comparable performance of DOA estimation. Numerical simulation demonstrates the advantages of the proposed algorithm.},
  keywords = {approximation theory;computational complexity;direction-of-arrival estimation;expectation-maximisation algorithm;message passing;optimisation;AMP;expectation maximization technique;closed-form update rule;signal prior variance;fix-point method;empirical update rule;computational complexity;low-complexity optimization;direction-of-arrival estimation;approximate message passing;conventional sparsity-inducing techniques;low-complexity DOA estimation algorithm;loopy belief propagation;EM technique;signal posterior distribution;hyperparameter learning;noise variance;numerical simulation;Signal processing algorithms;Estimation;Direction-of-arrival estimation;Approximation algorithms;Message passing;Belief propagation;Europe;Direction-of-arrival estimation;approximate massage passing;expectation maximization},
  doi = {10.23919/EUSIPCO.2019.8902519},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570527488.pdf},
}
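The AMP iteration itself is compact. Below is a textbook soft-threshold AMP with the Onsager correction, shown only as a sketch of the machinery the paper builds on: the residual-based threshold rule, the generic Gaussian dictionary, and all sizes are illustrative, and the paper's EM hyper-parameter learning is omitted.

```python
import numpy as np

def soft(x, t):
    # soft-threshold denoiser (works for real or complex entries)
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0.0)

def amp(A, y, n_iter=30, alpha=1.5):
    """Generic AMP for y = A x + w with a sparse x.
    Each iteration: pseudo-data r = x + A^H z, denoise by soft
    thresholding, then update the residual with the Onsager term."""
    m, n = A.shape
    x = np.zeros(n, dtype=A.dtype)
    z = y.copy()
    for _ in range(n_iter):
        theta = alpha * np.linalg.norm(z) / np.sqrt(m)   # threshold tracks residual level
        r = x + A.conj().T @ z                           # pseudo-data
        x_new = soft(r, theta)
        onsager = z * (np.count_nonzero(x_new) / m)      # Onsager correction
        z = y - A @ x_new + onsager
        x = x_new
    return x
```

For an on-grid DOA problem, `A` would be the array steering dictionary over candidate angles and the support of `x` marks the estimated directions.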
Predicting the Success of Blastocyst Implantation from Morphokinetic Parameters Estimated through CNNs and Sum of Absolute Differences. Silva-Rodríguez, J.; Colomer, A.; Meseguer, M.; and Naranjo, V. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902520,
  author = {J. Silva-Rodríguez and A. Colomer and M. Meseguer and V. Naranjo},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Predicting the Success of Blastocyst Implantation from Morphokinetic Parameters Estimated through CNNs and Sum of Absolute Differences},
  year = {2019},
  pages = {1-5},
  abstract = {The process of In Vitro Fertilization deals nowadays with the challenge of selecting viable embryos with the highest probability of success in the implantation. In this topic, we present a computer-vision-based system to analyze the videos related to days of embryo development which automatically extracts morphokinetic features and estimates the success of implantation. A robust algorithm to detect the embryo in the culture image is proposed to avoid artifacts. Then, the ability of Convolutional Neural Networks (CNNs) for predicting the number of cells per frame is novelty combined with the Sum of Absolute Differences (SAD) signal in charge of capturing the amount of intensity changes during the whole video. With this hybrid proposal, we obtain an average accuracy of 93% in the detection of the number of cells per image, resulting in a precise and robust estimation of the morphokinetic parameters. With those features, we train a predictive model based on Random Forest classifier able to estimate the success in the implantation of a blastocyst with more than 60% of precision.},
  keywords = {cellular biophysics;computer vision;convolutional neural nets;feature extraction;image classification;medical image processing;probability;random forests;culture image;convolutional neural networks;CNNs;robust estimation;morphokinetic parameters;predictive model;blastocyst implantation;vitro fertilization;probability;computer-vision-based system;embryo development;sum of absolute differences signal;videos;SAD signal;random forest classifier;Embryo;Videos;Predictive models;Timing;Europe;Feature extraction;Embryo;IVF;morphokinetic parameters;implantation;image processing;machine learning;deep learning},
  doi = {10.23919/EUSIPCO.2019.8902520},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533611.pdf},
}
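The SAD signal that the paper pairs with the CNN cell counts is just the sum of absolute intensity differences between consecutive frames of the time-lapse video. A minimal sketch (the `(T, H, W)` frame layout is an assumption):

```python
import numpy as np

def sad_signal(frames):
    """Sum of Absolute Differences between consecutive frames.
    `frames` has shape (T, H, W); the result is a length T-1 activity
    signal whose peaks flag large intensity changes such as divisions."""
    frames = np.asarray(frames, dtype=np.float64)
    return np.abs(np.diff(frames, axis=0)).sum(axis=(1, 2))
```

For example, a video whose second frame changes and third frame is static yields a signal like `[16., 0.]` for 4x4 binary frames.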
Evaluation of Analog Encoding for Multi-User Wireless Transmission of Still Images. Balsa, J.; Fresnedo, Ó.; Domínguez-Bolaño, T.; García-Naya, J. A.; and Castedo, L. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902521,
  author = {J. Balsa and Ó. Fresnedo and T. Domínguez-Bolaño and J. A. García-Naya and L. Castedo},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Evaluation of Analog Encoding for Multi-User Wireless Transmission of Still Images},
  year = {2019},
  pages = {1-5},
  abstract = {In this paper, we present an analog system to transmit still images over multiuser wireless channels and an empirical comparison with a representative digital scheme. The tests were carried out considering a multiple access channel (MAC) where two single-antenna users transmit their images to a two-antennas centralized receiver. The analog system encodes the source images using analog joint source-channel coding (JSCC) mappings and the resulting symbols are packed into an orthogonal frequency-division multiplexing (OFDM) frame. The quality of the analog received image is evaluated with the structural similarity (SSIM) and a digital image with the same quality is generated, which is encoded into an OFDM frame and transmitted over the same channel estimated as the analog one. The aim of this work is to compare the transmission times of the two systems and evaluate the suitability of the analog encoding techniques for the transmission of still images.},
  keywords = {channel coding;channel estimation;combined source-channel coding;encoding;multi-access systems;multiplexing;OFDM modulation;wireless channels;empirical comparison;representative digital scheme;multiple access channel;single-antenna users;two-antennas centralized receiver;analog system;source images;analog joint source-channel coding mappings;orthogonal frequency-division multiplexing frame;analog received image;digital image;OFDM frame;transmission times;analog encoding;multiuser wireless transmission;multiuser wireless channels;OFDM;Receivers;Signal to noise ratio;Encoding;Transform coding;Transmitters;Standards},
  doi = {10.23919/EUSIPCO.2019.8902521},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532563.pdf},
}
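Received-image quality is matched via SSIM. As a rough stand-in for the usual sliding-window SSIM, a single-window (whole-image) version using the standard constants from Wang et al. can be sketched as:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """SSIM computed over the whole image in one window (no local
    windowing), with the conventional stabilizers C1 = (0.01 L)^2 and
    C2 = (0.03 L)^2 for dynamic range L."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

Identical images score 1.0; strongly anti-correlated images score near or below zero, which is what makes SSIM usable as the matching criterion between the analog and digital reconstructions.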
CNN-based Multichannel End-to-End Speech Recognition for Everyday Home Environments*. Yalta, N.; Watanabe, S.; Hori, T.; Nakadai, K.; and Ogata, T. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902524,
  author = {N. Yalta and S. Watanabe and T. Hori and K. Nakadai and T. Ogata},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {CNN-based Multichannel End-to-End Speech Recognition for Everyday Home Environments*},
  year = {2019},
  pages = {1-5},
  abstract = {Casual conversations involving multiple speakers and noises from surrounding devices are common in everyday environments, which degrades the performances of automatic speech recognition systems. These challenging characteristics of environments are the target of the CHiME-5 challenge. By employing a convolutional neural network (CNN)-based multichannel end-to-end speech recognition system, this study attempts to overcome the presents difficulties in everyday environments. The system comprises of an attention-based encoder-decoder neural network that directly generates a text as an output from a sound input. The multichannel CNN encoder, which uses residual connections and batch renormalization, is trained with augmented data, including white noise injection. The experimental results show that the word error rate is reduced by 8.5% and 0.6% absolute from a single channel end-to-end and the best baseline (LF-MMI TDNN) on the CHiME-5 corpus, respectively.},
  keywords = {convolutional neural nets;decoding;speech coding;speech recognition;text analysis;white noise;convolutional neural network-based multichannel end-to-end speech recognition system;attention-based encoder-decoder neural network;multichannel CNN encoder;CNN-based multichannel end-to-end speech recognition;home environments;multiple speakers;automatic speech recognition systems;CHiME-5 challenge;Training;Speech recognition;Decoding;Task analysis;Microphone arrays;Noise measurement;End-to-end speech recognition;Multichannel;Residual networks},
  doi = {10.23919/EUSIPCO.2019.8902524},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/Proceedings/Eusipco/eusipco2019/Proceedings/papers/1570531927.pdf},
}
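White-noise injection, one of the augmentations mentioned, amounts to adding Gaussian noise scaled to a target level. A minimal sketch — the SNR-in-dB parameterization is an assumption, not necessarily the paper's exact recipe:

```python
import numpy as np

def add_white_noise(signal, snr_db, rng=None):
    """Inject white Gaussian noise at a chosen SNR (dB): scale the
    noise power relative to the measured signal power."""
    rng = np.random.default_rng(rng)
    p_sig = np.mean(signal ** 2)
    p_noise = p_sig / (10 ** (snr_db / 10))
    return signal + rng.standard_normal(signal.shape) * np.sqrt(p_noise)
```

Applied on the fly during training, each utterance sees a fresh noise realization, which is what makes the encoder robust to the noisy home recordings.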
Non-Local Restoration of Sparse 3D Single-Photon Data. Chen, S.; Halimi, A.; Ren, X.; McCarthy, A.; Su, X.; Buller, G. S.; and McLaughlin, S. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902525,
  author = {S. Chen and A. Halimi and X. Ren and A. McCarthy and X. Su and G. S. Buller and S. McLaughlin},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Non-Local Restoration Of Sparse 3d Single-Photon Data},
  year = {2019},
  pages = {1-5},
  abstract = {This paper presents a new algorithm for the non-local restoration of single-photon 3-Dimensional Lidar images acquired in the photon starved regime or with a reduced number of scanned spatial points (pixels). The algorithm alternates between two steps: evaluation of the spatial correlations between pixels using a graph, then restore the depth and reflectivity images by their spatial correlations. To reduce the computational cost associated with the graph, we adopt a non-uniform sampling approach, where bigger patches are assigned to homogeneous regions and smaller ones to heterogeneous regions. The restoration of 3D images is achieved by minimizing a cost function accounting for the data Poisson statistics and the non-local spatial correlations between patches. This minimization problem is efficiently solved using the alternating direction method of multipliers (ADMM) that presents fast convergence properties. Results on real Lidar data show the benefits of the proposed algorithm in improving the quality of the estimated depth images, especially in photon starved cases, which can contain a reduced number of photons.},
  keywords = {graph theory;image reconstruction;image resolution;image restoration;minimisation;optical information processing;optical radar;sampling methods;stochastic processes;nonlocal restoration;scanned spatial points;nonuniform sampling approach;cost function;data Poisson statistics;nonlocal spatial correlations;Lidar data;estimated depth images;photon starved cases;single-photon 3-dimensional Lidar images;sparse 3D single-photon data;reflectivity image restoration;3D image restoration;spatial correlation evaluation;computational cost reduction;minimization problem;alternating direction method of multipliers;ADMM;fast convergence properties;Image restoration;Correlation;Photonics;Laser radar;Three-dimensional displays;Imaging;Computational efficiency;3D Lidar imaging;image restoration;Poisson statistics;graph;non-uniform sampling},
  doi = {10.23919/EUSIPCO.2019.8902525},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533012.pdf},
}
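The solver is ADMM. As a generic illustration of the x-update / z-update / dual-update pattern — applied here to an l1-regularized least-squares surrogate, not to the paper's Poisson plus non-local objective — a scaled-form ADMM looks like:

```python
import numpy as np

def admm_lasso(A, y, lam=0.1, rho=1.0, n_iter=100):
    """Scaled-form ADMM for min 0.5||Ax - y||^2 + lam||z||_1 s.t. x = z.
    x-update: linear solve (factored once); z-update: soft threshold;
    u-update: running sum of the constraint residual."""
    m, n = A.shape
    AtA, Aty = A.T @ A, A.T @ y
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u = u + x - z
    return z
```

In the paper, the quadratic term is replaced by the Poisson negative log-likelihood and the l1 term by the non-local patch regularizer, but the three-step splitting structure is the same.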
A Robust Group-Sparse Proportionate Affine Projection Algorithm with Maximum Correntropy Criterion for Channel Estimation. Jiang, Z.; Li, Y.; and Zakharov, Y. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902526,
  author = {Z. Jiang and Y. Li and Y. Zakharov},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {A Robust Group-Sparse Proportionate Affine Projection Algorithm with Maximum Correntropy Criterion for Channel Estimation},
  year = {2019},
  pages = {1-5},
  abstract = {In many engineering applications, noise often exhibits strongly impulsive characteristics, while the conventional adaptive filtering (AF) algorithms are less robust to the impulsive noise. The AF algorithms based on maximum correntropy criterion (MCC) have been devised to effectively enhance the adaptive estimation performance in impulsive noise environments. In this paper, a robust group-sparse proportionate affine projection (RGS-PAP) algorithm based on MCC is proposed for estimating group-sparse channels which often occur in network echo paths and satellite communications channels. The constructed RGSPAP algorithm is derived via exerting a mixed l2,1 norm constraint of AF weights into the updating equation of the affine projection algorithm with MCC to utilize the groupsparse characteristics. The developed RGS-PAP algorithm is analyzed by setting up various simulation experiments to verify its robustness and effectiveness. Simulation results indicate that the proposed RGS-PAP algorithm provides faster convergence and lower estimation bias compared with other algorithms under various input signals in impulse noise environments.},
  keywords = {adaptive estimation;adaptive filters;channel estimation;echo suppression;impulse noise;satellite communication;wireless channels;robust group-sparse proportionate affine projection algorithm;maximum correntropy criterion;channel estimation;engineering applications;impulsive characteristics;conventional adaptive filtering algorithms;AF algorithms;MCC;adaptive estimation performance;impulsive noise environments;estimating group-sparse channels;satellite communications channels;AF weights;lower estimation bias;impulse noise environments;RGS-PAP algorithm;Signal processing algorithms;Channel estimation;Mathematical model;Cost function;Convergence;Europe;Signal processing;Channel estimation;maximum correntropy criterion;PAP algorithm;mixed l2;1 norm;impulse noise environments},
  doi = {10.23919/EUSIPCO.2019.8902526},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570527495.pdf},
}
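The robustness mechanism in MCC-based adaptive filters is the Gaussian kernel weight exp(-e^2/(2*sigma^2)) applied to the error. A minimal MCC-weighted LMS — a much simpler relative of the proposed RGS-PAP, shown only to illustrate how the correntropy weight suppresses impulsive samples; all parameters are illustrative:

```python
import numpy as np

def mcc_lms(x, d, taps=8, mu=0.05, sigma=1.0):
    """LMS-type adaptive filter with a maximum-correntropy weight.
    The kernel exp(-e^2 / (2 sigma^2)) shrinks the step size toward
    zero for outlier errors, so impulsive noise barely perturbs w."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # regressor, most recent sample first
        e = d[n] - w @ u                  # a-priori error
        w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u
    return w
```

With e large (an impulse), the kernel weight is essentially zero and the update is skipped; with e small, it behaves like ordinary LMS.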
GPR Antenna Localization Based on A-Scans. Skartados, E.; Kargakos, A.; Tsiogas, E.; Kostavelis, I.; Giakoumis, D.; and Tzovaras, D. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902528,
  author = {E. Skartados and A. Kargakos and E. Tsiogas and I. Kostavelis and D. Giakoumis and D. Tzovaras},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {GPR Antenna localization based on A-Scans},
  year = {2019},
  pages = {1-5},
  abstract = {Automated subsurface mapping with data obtained from Ground Penetrating Radar (GPR) is essential for the construction services. So far, significant progress has been achieved in this domain by integrating such sensors with robotic platforms allowing large scale autonomous subsurface mapping. The paper at hand tackles the challenging issue of self-localization of a GPR antenna in a known subsurface map by utilizing solely GPR measurements. This is achieved by isolating spatiotemporal salient regions on consecutive GPR traces. These regions are represented by utilizing the coefficients of the Discrete Wavelet Transform (DWT) decomposition. Matched representations indicate meaningful tracked regions on the GPR traces that correspond to a fixed time window of data recording. Tracked regions are encoded in the form a vector that is treated as an observation within a particle filtering framework and is further processed to estimate the GPR sensor pose, given (i) a known subsurface map (ii) a simulated GPR model and (iii) priors in the GPR motion model. The GPR antenna self-localization approach has been assessed with real data and exhibited promising results, proving the ability of the proposed method to perform subsurface localization, exploiting only GPR sensor measurements.},
  keywords = {discrete wavelet transforms;ground penetrating radar;landmine detection;mobile robots;particle filtering (numerical methods);radar antennas;GPR antenna localization;automated subsurface mapping;Ground Penetrating Radar;construction services;robotic platforms;scale autonomous subsurface mapping;hand tackles;known subsurface map;solely GPR measurements;spatiotemporal salient regions;consecutive GPR traces;Discrete Wavelet Transform decomposition;meaningful tracked regions;data recording;GPR motion model;self-localization approach;subsurface localization;GPR sensor measurements;Discrete wavelet transforms;Antenna measurements;Surface topography;Antennas;Robot sensing systems},
  doi = {10.23919/EUSIPCO.2019.8902528},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533895.pdf},
}
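Salient regions are represented by DWT coefficients. As a sketch of one decomposition level over an A-scan trace — the Haar wavelet is chosen here for simplicity; the paper does not tie itself to a particular mother wavelet in the abstract:

```python
import numpy as np

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients.
    With orthonormal scaling, ||signal||^2 = ||cA||^2 + ||cD||^2."""
    s = np.asarray(signal, dtype=np.float64)
    if len(s) % 2:                      # pad odd-length traces by repetition
        s = np.append(s, s[-1])
    even, odd = s[0::2], s[1::2]
    cA = (even + odd) / np.sqrt(2)      # low-pass (approximation)
    cD = (even - odd) / np.sqrt(2)      # high-pass (detail)
    return cA, cD
```

Concatenating coefficients from matched regions gives the observation vector fed to the particle filter.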
On the Physical Layer Security of IoT Devices over Satellite. Bas, J.; and Pérez-Neira, A. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902531,
  author = {J. Bas and A. Pérez-Neira},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {On the Physical Layer Security of IoT Devices over Satellite},
  year = {2019},
  pages = {1-5},
  abstract = {The security in satellite communications is a key issue due to the large footprint of the beams. This is specially critical in IoT devices that transmit data directly to satellite. Take into account that IoT devices are characterized by transmitting packets of short length. Consequently, it means that it is not feasible to augment the security level of the IoT packets via complex cryptographic algorithms. Otherwise, their packet lengths may be increased in a non-negligible way which could augment their collision probabilities, latencies and energy consumptions. For this reason, this paper proposes to take advantage of the time-packing technique. By doing so, it is possible to use the overlapping degree among the pulse-shapes to boost the secrecy-capacity. In particular, the overlapping degree between the pulse-shapes introduces an artificial interference that degrades the eavesdropper's channel. In this regard, it is necessary to highlight that there is a residual co-channel interference in the satellite beams. So, it means that these two sources of impairments make difficult to estimate the legitimate user's transmission parameters by the eavesdropper.},
  keywords = {cochannel interference;cryptography;Internet of Things;satellite communication;physical layer security;IoT devices;satellite communications;security level;IoT packets;complex cryptographic algorithms;packet lengths;overlapping degree;pulse-shapes;satellite beams;Satellites;Security;Mutual information;Interchannel interference;Entropy;Europe;IoT;Satellite;Physical-Layer security;High-Spectral Efficient Systems},
  doi = {10.23919/EUSIPCO.2019.8902531},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533853.pdf},
}
\n \n\n \n \n \n \n \n \n Dempster-Shafer Theory for Fusing Face Morphing Detectors.\n \n \n \n \n\n\n \n Makrushin, A.; Kraetzer, C.; Dittmann, J.; Seibold, C.; Hilsmann, A.; and Eisert, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Dempster-ShaferPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902533,\n  author = {A. Makrushin and C. Kraetzer and J. Dittmann and C. Seibold and A. Hilsmann and P. Eisert},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Dempster-Shafer Theory for Fusing Face Morphing Detectors},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Revealing that a human face on a biometric image is a mixture of two or more faces is of immense importance for document issuing authorities and document checking services. If not done, several persons can use the same photo-ID document for identity verification without being condemned. The development of automated face morphing detectors is currently in its early phase. The detectors reported so far are not mature for the market which is reflected in high error rates when tested with “unseen” data. Here, we demonstrate that fusion of several by far non-optimal detectors may lead to significant improvement of detection accuracy compared to that of individual detectors. Among the examined fusion approaches, Dempster's Rule of combination has the best accuracy allowing for coherent decision making even with contradicting decisions of individual detectors.},\n  keywords = {biometrics (access control);decision making;face recognition;inference mechanisms;sensor fusion;uncertainty handling;Dempster rule;Dempster-Shafer theory;human face;biometric image;document issuing authorities;document checking services;photo-ID document;identity verification;automated face morphing detectors;high error rates;nonoptimal detectors;face morphing detectors;Erbium;face morphing attack;morphing detection;fusion},\n  doi = {10.23919/EUSIPCO.2019.8902533},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533653.pdf},\n}\n\n
\n
\n\n\n
\n Revealing that a human face on a biometric image is a mixture of two or more faces is of immense importance for document issuing authorities and document checking services. If not done, several persons can use the same photo-ID document for identity verification without being condemned. The development of automated face morphing detectors is currently in its early phase. The detectors reported so far are not mature for the market which is reflected in high error rates when tested with “unseen” data. Here, we demonstrate that fusion of several by far non-optimal detectors may lead to significant improvement of detection accuracy compared to that of individual detectors. Among the examined fusion approaches, Dempster's Rule of combination has the best accuracy allowing for coherent decision making even with contradicting decisions of individual detectors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Secure Dictionary Learning for Sparse Representation.\n \n \n \n \n\n\n \n Nakachi, T.; Bandoh, Y.; and Kiya, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SecurePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902535,\n  author = {T. Nakachi and Y. Bandoh and H. Kiya},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Secure Dictionary Learning for Sparse Representation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose secure dictionary learning for sparse representation based on a random unitary transform. Edge cloud computing is now spreading to many application fields including services that use sparse coding. This situation raises many new privacy concerns. The proposed scheme provides practical MOD and K-SVD schemes that allow computation on encrypted signals. We prove, theoretically, that the proposal has exactly the same dictionary and sparse coefficient estimation performance as sparse dictionary learning for unencrypted signals. It can be directly carried out by using MOD and K-SVD algorithms. Moreover, we apply it to image modeling based on an image patch model. Finally, we demonstrate its excellent performance on synthetic data and natural images.},\n  keywords = {cloud computing;cryptography;data privacy;image representation;learning (artificial intelligence);singular value decomposition;edge cloud computing;sparse coding;K-SVD schemes;sparse coefficient estimation performance;sparse dictionary;secure dictionary learning;sparse representation;random unitary;unencrypted signals;K-SVD algorithm;image modeling;image patch model;synthetic data;natural images;Cryptography;Machine learning;Dictionaries;Signal processing algorithms;Encoding;Computational modeling;Image coding;Sparse Representation;Dictionary Learning;Random Unitary Transform;Secure Computation},\n  doi = {10.23919/EUSIPCO.2019.8902535},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534112.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose secure dictionary learning for sparse representation based on a random unitary transform. Edge cloud computing is now spreading to many application fields including services that use sparse coding. This situation raises many new privacy concerns. The proposed scheme provides practical MOD and K-SVD schemes that allow computation on encrypted signals. We prove, theoretically, that the proposal has exactly the same dictionary and sparse coefficient estimation performance as sparse dictionary learning for unencrypted signals. It can be directly carried out by using MOD and K-SVD algorithms. Moreover, we apply it to image modeling based on an image patch model. Finally, we demonstrate its excellent performance on synthetic data and natural images.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n FDA-MIMO Signal Processing for Mainlobe Jammer Suppression.\n \n \n \n \n\n\n \n Wang, W.-Q.; So, H. C.; and Farina, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FDA-MIMOPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902536,\n  author = {W. -Q. Wang and H. C. So and A. Farina},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {FDA-MIMO Signal Processing for Mainlobe Jammer Suppression},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Inspired by the fact that frequency diverse array (FDA) generates a time-variant transmit beampattern depending on both range and angle parameters, we propose joint utilization of the FDA and multiple-input multiple-output (MIMO) radar for counteracting mainlobe jamming signals. In doing so, the advantages of FDA and MIMO in range-time-dependent beampattern and increased degrees-of-freedom, respectively, are combined. The proposed algorithm development is based on eigenvalue projection and blocking matrix processing. Theoretical analysis and simulation results for evaluation of the FDA-MIMO framework are provided.},\n  keywords = {array signal processing;eigenvalues and eigenfunctions;interference suppression;jamming;matrix algebra;MIMO radar;radar signal processing;eigenvalue projection;blocking matrix processing;FDA-MIMO framework;FDA-MIMO signal processing;mainlobe jammer suppression;frequency diverse array;time-variant transmit beampattern;multiple-input multiple-output radar;jamming signals;range-time-dependent beampattern;Jamming;MIMO radar;Eigenvalues and eigenfunctions;Array signal processing;Receivers;Signal processing algorithms;Frequency diverse array (FDA);FDA-MIMO radar;mainlobe jammer;jammer suppression;blocking matrix;eigenvalue projection},\n  doi = {10.23919/EUSIPCO.2019.8902536},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533285.pdf},\n}\n\n
\n
\n\n\n
\n Inspired by the fact that frequency diverse array (FDA) generates a time-variant transmit beampattern depending on both range and angle parameters, we propose joint utilization of the FDA and multiple-input multiple-output (MIMO) radar for counteracting mainlobe jamming signals. In doing so, the advantages of FDA and MIMO in range-time-dependent beampattern and increased degrees-of-freedom, respectively, are combined. The proposed algorithm development is based on eigenvalue projection and blocking matrix processing. Theoretical analysis and simulation results for evaluation of the FDA-MIMO framework are provided.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n 2D Fourier Transform Based Analysis Comparing the DFA with the DMA.\n \n \n \n \n\n\n \n Berthelot, B.; Grivel, É.; Legrand, P.; Donias, M.; Andre, J.-M.; Mazoyer, P.; and Ferreira, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"2DPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902538,\n  author = {B. Berthelot and É. Grivel and P. Legrand and M. Donias and J. -M. Andre and P. Mazoyer and T. Ferreira},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {2D Fourier Transform Based Analysis Comparing the DFA with the DMA},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Even if they can be outperformed by other methods, the detrended fluctuation analysis (DFA) and the detrended moving average (DMA) are widely used to estimate the Hurst exponent because they are based on basic notions of signal processing. For the last years, a great deal of interest has been paid to compare them and to better understand their behaviors from a mathematical point of view. In this paper, our contribution is the following: we first propose to express the square of the so-called fluctuation function as a 2D Fourier transform (2D-FT) of the product of two matrices. The first one is defined from the instantaneous correlations of the signal while the second, called the weighting matrix, is representative of each method. Therefore, the 2D-FT of the weighting matrix is analyzed in each case. In this study, differences between the DFA and the DMA are pointed out when the approaches are applied on non-stationary processes.},\n  keywords = {econophysics;fluctuations;Fourier transforms;fractals;matrix algebra;moving average processes;signal processing;time series;nonstationary processes;weighting matrix;instantaneous correlations;2D-FT;fluctuation function;signal processing;Hurst exponent;detrended moving average;detrended fluctuation analysis;DMA;DFA;2D Fourier transform;Market research;Signal processing;Symmetric matrices;Europe;Fourier transforms;Delays;Two dimensional displays;frequency analysis;Hurst;DFA;DMA},\n  doi = {10.23919/EUSIPCO.2019.8902538},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532025.pdf},\n}\n\n
\n
\n\n\n
\n Even if they can be outperformed by other methods, the detrended fluctuation analysis (DFA) and the detrended moving average (DMA) are widely used to estimate the Hurst exponent because they are based on basic notions of signal processing. For the last years, a great deal of interest has been paid to compare them and to better understand their behaviors from a mathematical point of view. In this paper, our contribution is the following: we first propose to express the square of the so-called fluctuation function as a 2D Fourier transform (2D-FT) of the product of two matrices. The first one is defined from the instantaneous correlations of the signal while the second, called the weighting matrix, is representative of each method. Therefore, the 2D-FT of the weighting matrix is analyzed in each case. In this study, differences between the DFA and the DMA are pointed out when the approaches are applied on non-stationary processes.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Machine Learning Approach to the Identification of Dynamical Nonlinear Systems.\n \n \n \n \n\n\n \n Biagetti, G.; Crippa, P.; Falaschetti, L.; and Turchetti, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902539,\n  author = {G. Biagetti and P. Crippa and L. Falaschetti and C. Turchetti},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Machine Learning Approach to the Identification of Dynamical Nonlinear Systems},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The aim of this paper is to present a general machine learning approach to the identification of nonlinear systems, using the observed input-output finite datasets. The approach is derived representing the input and output signals in the feature space by the principal component analysis (PCA), thus transforming the nonlinear time dependent identification problem to the regression of a nonlinear input-output function. To face this problem an effective machine learning technique based on particle-Bernstein polynomials has been used to model the input-output relationship that describes the system. The approach has been validated by identifying two real world nonlinear systems, in the fields of speech signals and nonlinear audio amplifiers.},\n  keywords = {learning systems;nonlinear systems;polynomials;principal component analysis;dynamical nonlinear systems;machine learning;feature space;principal component analysis;nonlinear time dependent identification problem;nonlinear input-output function;particle-Bernstein polynomials;input-output relationship;nonlinear audio amplifiers;input-output finite datasets;Nonlinear systems;Training;Machine learning;Principal component analysis;Data models;Predictive models;Time-domain analysis;Machine learning;nonlinear systems;PCA;identification},\n  doi = {10.23919/EUSIPCO.2019.8902539},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532738.pdf},\n}\n\n
\n
\n\n\n
\n The aim of this paper is to present a general machine learning approach to the identification of nonlinear systems, using the observed input-output finite datasets. The approach is derived representing the input and output signals in the feature space by the principal component analysis (PCA), thus transforming the nonlinear time dependent identification problem to the regression of a nonlinear input-output function. To face this problem an effective machine learning technique based on particle-Bernstein polynomials has been used to model the input-output relationship that describes the system. The approach has been validated by identifying two real world nonlinear systems, in the fields of speech signals and nonlinear audio amplifiers.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Beam-Steering in Switched 4D Arrays Based on the Discrete Walsh Transform.\n \n \n \n \n\n\n \n Maneiro-Catoira, R.; Avele, M. B. A.; Brégains, J.; García-Naya, J. A.; and Castedo, L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Beam-SteeringPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902540,\n  author = {R. Maneiro-Catoira and M. B. A. Avele and J. Brégains and J. A. García-Naya and L. Castedo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Beam-Steering in Switched 4D Arrays Based on the Discrete Walsh Transform},\n  year = {2019},\n  pages = {1-5},\n  abstract = {4D arrays provide cost-effective beam steering capabilities considering radio-frequency switches (controlled by periodic sequences) instead of variable phase shifters. Their synthesis is based on the Fourier coefficients (which depend on configurable time parameters) of the corresponding periodic modulating sequences. Whereas Fourier series use trigonometric functions to synthesize any periodic continuous waveform, Walsh series relies on bipolar orthogonal sequences and, if such a series is truncated, the expanded function is approximated by a stairstep signal. In this paper we present a novel method to synthesize 4D arrays by means of a finite set of Walsh functions. The technique allows for implementing the analog time-modulated feeding network of an array employing single-pole double-throw switches and achieves excellent rejection levels of the undesired harmonics.},\n  keywords = {antenna arrays;antenna feeds;beam steering;discrete transforms;Fourier series;microwave switches;phase shifters;signal processing;Walsh functions;beam-steering;switched 4D arrays;discrete Walsh transform;cost-effective beam steering capabilities;radio-frequency switches;periodic sequences;variable phase shifters;Fourier coefficients;configurable time parameters;corresponding periodic modulating sequences;Fourier series;trigonometric functions;periodic continuous waveform;Walsh series;bipolar orthogonal sequences;expanded function;stairstep signal;Walsh functions;analog time-modulated feeding network;single-pole double-throw switches;achieves excellent rejection levels;Harmonic analysis;Europe;Array signal processing;Discrete wavelet transforms;Phased arrays;4D arrays;time-modulated arrays;beam steering;Walsh functions},\n  doi = {10.23919/EUSIPCO.2019.8902540},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531257.pdf},\n}\n\n
\n
\n\n\n
\n 4D arrays provide cost-effective beam steering capabilities considering radio-frequency switches (controlled by periodic sequences) instead of variable phase shifters. Their synthesis is based on the Fourier coefficients (which depend on configurable time parameters) of the corresponding periodic modulating sequences. Whereas Fourier series use trigonometric functions to synthesize any periodic continuous waveform, Walsh series relies on bipolar orthogonal sequences and, if such a series is truncated, the expanded function is approximated by a stairstep signal. In this paper we present a novel method to synthesize 4D arrays by means of a finite set of Walsh functions. The technique allows for implementing the analog time-modulated feeding network of an array employing single-pole double-throw switches and achieves excellent rejection levels of the undesired harmonics.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fingerspelled Alphabet Sign Recognition in Upper-Body Videos.\n \n \n \n \n\n\n \n Papadimitriou, K.; and Potamianos, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FingerspelledPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902541,\n  author = {K. Papadimitriou and G. Potamianos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Fingerspelled Alphabet Sign Recognition in Upper-Body Videos},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Fingerspelling is a crucial part of sign-based communication, however its recognition remains a challenging and mostly overlooked computer vision problem. To address it, this paper presents a system that recognizes the 24 static fingerspelled alphabet signs of the American Sign Language. The system consists of two algorithmic stages, comprising an efficient preprocessing phase that generates candidate hand-region proposals, followed by their deep-learning based classification. Specifically, the first stage exploits own earlier work on hand detection and segmentation in videos that also contain the signer's face, allowing face detection to drive skin-tone based hand segmentation, with motion further utilized to localize hands, extending it with a peak detection module that yields proposal regions likely to contain the signs of interest. These regions are then classified by a variant of a convolutional neural network that extends traditional convolutions to quadratic operations on the inputs, being, to our knowledge, the first application of such architecture to this task. Both system stages are evaluated on three well-known fingerspelling corpora, significantly outperforming a number of alternative approaches under both multi-signer and signer-independent experimental frameworks.},\n  keywords = {computer vision;convolutional neural nets;feature extraction;image classification;image colour analysis;image segmentation;learning (artificial intelligence);sign language recognition;fingerspelled alphabet Sign recognition;upper-body videos;fingerspelling;sign-based communication;24 static fingerspelled alphabet signs;American sign language;algorithmic stages;efficient preprocessing phase;candidate hand-region proposals;deep-learning based classification;hand detection;face detection;skin-tone based hand segmentation;peak detection module;proposal regions;convolutional neural network;system stages;overlooked computer vision problem;Assistive technology;Videos;Skin;Gesture recognition;Proposals;Motion segmentation;Image color analysis;Fingerspelling;ASL;CNN;detection;classification},\n  doi = {10.23919/EUSIPCO.2019.8902541},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533841.pdf},\n}\n\n
\n
\n\n\n
\n Fingerspelling is a crucial part of sign-based communication, however its recognition remains a challenging and mostly overlooked computer vision problem. To address it, this paper presents a system that recognizes the 24 static fingerspelled alphabet signs of the American Sign Language. The system consists of two algorithmic stages, comprising an efficient preprocessing phase that generates candidate hand-region proposals, followed by their deep-learning based classification. Specifically, the first stage exploits own earlier work on hand detection and segmentation in videos that also contain the signer's face, allowing face detection to drive skin-tone based hand segmentation, with motion further utilized to localize hands, extending it with a peak detection module that yields proposal regions likely to contain the signs of interest. These regions are then classified by a variant of a convolutional neural network that extends traditional convolutions to quadratic operations on the inputs, being, to our knowledge, the first application of such architecture to this task. Both system stages are evaluated on three well-known fingerspelling corpora, significantly outperforming a number of alternative approaches under both multi-signer and signer-independent experimental frameworks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On Room Impulse Response Measurement Using Orthogonal Periodic Sequences.\n \n \n \n \n\n\n \n Carini, A.; Orcioni, S.; and Cecchi, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902542,\n  author = {A. Carini and S. Orcioni and S. Cecchi},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {On Room Impulse Response Measurement Using Orthogonal Periodic Sequences},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The paper discusses a measurement method for the room impulse response (RIR) which is robust towards the nonlinearities affecting the power amplifier or the loudspeaker used in the measurement. In the proposed approach, the measurement system is modeled with a Volterra filter. The first order kernel of the Volterra filter, i.e., the linear part, is efficiently determined using orthogonal periodic sequences (OPSs) and the cross-correlation method. The approach shares many similarities with RIR measurements based on perfect periodic sequences (PPSs). In contrast to PPSs, the proposed approach is able to directly measure the impulse response for small signals of the measurement system. Moreover, the input signal can be any periodic persistently exciting sequence and can also be a quantized sequence. Measurements performed on an emulated scenario compare the proposed approach with other competing RIR measurement methods.},\n  keywords = {acoustic signal processing;architectural acoustics;correlation methods;loudspeakers;nonlinear filters;sequences;transient response;room impulse response measurement;orthogonal periodic sequences;measurement method;power amplifier;measurement system;Volterra filter;order kernel;cross-correlation method;approach shares many similarities;RIR measurements;perfect periodic sequences;PPS;periodic persistently exciting sequence;quantized sequence;competing RIR measurement methods;Loudspeakers;Acoustic measurements;Kernel;Power measurement;Linear systems;Acoustics;Estimation},\n  doi = {10.23919/EUSIPCO.2019.8902542},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532807.pdf},\n}\n\n
\n
\n\n\n
\n The paper discusses a measurement method for the room impulse response (RIR) which is robust towards the nonlinearities affecting the power amplifier or the loudspeaker used in the measurement. In the proposed approach, the measurement system is modeled with a Volterra filter. The first order kernel of the Volterra filter, i.e., the linear part, is efficiently determined using orthogonal periodic sequences (OPSs) and the cross-correlation method. The approach shares many similarities with RIR measurements based on perfect periodic sequences (PPSs). In contrast to PPSs, the proposed approach is able to directly measure the impulse response for small signals of the measurement system. Moreover, the input signal can be any periodic persistently exciting sequence and can also be a quantized sequence. Measurements performed on an emulated scenario compare the proposed approach with other competing RIR measurement methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Coverage Analysis for Backscatter Communication Empowered Cellular Internet-of-Things : Invited Paper.\n \n \n \n \n\n\n \n Zaidi, S. A. R.; Hafeez, M.; McLernon, D.; and Win, M. Z.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CoveragePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902543,\n  author = {S. A. R. Zaidi and M. Hafeez and D. McLernon and M. Z. Win},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Coverage Analysis for Backscatter Communication Empowered Cellular Internet-of-Things : Invited Paper},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this article, we develop a comprehensive framework to characterize the coverage probability of backscatter communication empowered cellular Internet-of-Things (IoT) sensor networks (SNs). The developed framework considers hierarchical cellular type deployment topology which is practically useful for various IoT applications. In contrast to existing studies, the framework is geared towards system level performance analysis. Our analysis explicitly considers the dyadic fading experienced by the links and spatial randomness of the network nodes. To ensure tractability of analysis, we develop novel closed-form bounds for quantifying the coverage probability of SNs. The developed framework is corroborated using Monte Carlo simulations. Lastly, we also demonstrate the impact of various underlying parameters and highlight the utility of the derived expressions for network dimensioning.},\n  keywords = {backscatter;cellular radio;Internet of Things;Monte Carlo methods;probability;telecommunication network topology;wireless sensor networks;coverage probability;backscatter communication;Internet-of-Things sensor networks;SNs;hierarchical cellular type deployment topology;IoT applications;system level performance analysis;network nodes;network dimensioning;cellular Internet-of-Things;Backscatter;Fading channels;Correlation;Hafnium;Internet of Things;Radio frequency;Topology;backscatter communication;dyadic fading;stochastic geometry;Poisson process;coverage probability},\n  doi = {10.23919/EUSIPCO.2019.8902543},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/Proceedings/Eusipco/eusipco2019/Proceedings/papers/1570533362.pdf},\n}\n\n
\n
\n\n\n
\n In this article, we develop a comprehensive framework to characterize the coverage probability of backscatter communication empowered cellular Internet-of-Things (IoT) sensor networks (SNs). The developed framework considers hierarchical cellular type deployment topology which is practically useful for various IoT applications. In contrast to existing studies, the framework is geared towards system level performance analysis. Our analysis explicitly considers the dyadic fading experienced by the links and spatial randomness of the network nodes. To ensure tractability of analysis, we develop novel closed-form bounds for quantifying the coverage probability of SNs. The developed framework is corroborated using Monte Carlo simulations. Lastly, we also demonstrate the impact of various underlying parameters and highlight the utility of the derived expressions for network dimensioning.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Heterogeneous Network Localization with a Distributed Phased Array Composed of Cooperative Vehicles.\n \n \n \n \n\n\n \n Zhang, S.; Pöhlmann, R.; and Dammann, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"HeterogeneousPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902545,\n  author = {S. Zhang and R. Pöhlmann and A. Dammann},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Heterogeneous Network Localization with a Distributed Phased Array Composed of Cooperative Vehicles},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The emerging 5G technology provides direct data transmission, precise synchronization and localization for future autonomous vehicles. These vehicles are coupled in certain groups and drive simultaneously to enhance the road capacity and energy efficiency. The estimation of formation, i.e. the position of each individual vehicle w.r.t. the group, from the 5G signal is precise enough, so that the antennas deployed on multiple vehicles can be collectively considered as a distributed virtual antenna array. Non-cooperative entities, e.g. 5G incompatible vehicles, pedestrians or other road users, with radio transmissions according to a previous standard, can be precisely localized relative to the formation by distributed near-field array processing. By jointly locating the targets, the formation estimate of the 5G enabled vehicles is further improved.},\n  keywords = {5G mobile communication;antenna phased arrays;array signal processing;energy conservation;vehicular ad hoc networks;heterogeneous network localization;distributed phased array composed;emerging 5G technology;direct data transmission;precise synchronization;future autonomous vehicles;road capacity;energy efficiency;individual vehicle w.r.t;multiple vehicles;distributed virtual antenna array;5G incompatible vehicles;road users;radio transmissions;distributed near-field array processing;formation estimate;Clocks;5G mobile communication;Estimation;Delays;Synchronization;Bandwidth;Gold},\n  doi = {10.23919/EUSIPCO.2019.8902545},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533826.pdf},\n}\n\n
The emerging 5G technology provides direct data transmission, precise synchronization and localization for future autonomous vehicles. These vehicles are coupled in certain groups and drive simultaneously to enhance the road capacity and energy efficiency. The estimation of formation, i.e. the position of each individual vehicle w.r.t. the group, from the 5G signal is precise enough, so that the antennas deployed on multiple vehicles can be collectively considered as a distributed virtual antenna array. Non-cooperative entities, e.g. 5G incompatible vehicles, pedestrians or other road users, with radio transmissions according to a previous standard, can be precisely localized relative to the formation by distributed near-field array processing. By jointly locating the targets, the formation estimate of the 5G enabled vehicles is further improved.

SOS Boosting for Image Deblurring Algorithms.
Peled, S. R.; Romano, Y.; and Elad, M.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902547,
  author = {S. R. Peled and Y. Romano and M. Elad},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {SOS Boosting for Image Deblurring Algorithms},
  year = {2019},
  pages = {1-5},
  abstract = {The work reported by Romano and Elad presents the SOS boosting - a generic recursive method for improving image denoising algorithms. Appealing properties of the SOS scheme are its flexibility to work with any denoising method, its simplicity, and its robustness. In this article we aim to generalize this to the image deblurring task. The proposed SOS procedure for deblurring is given by the following iterative steps: [S]trengthen the signal by blurring the output of the previous iteration and adding it to the blurred input image, [O]perate the deblurring algorithm on the strengthened image, and [S]ubtract the previous output from the current one. We demonstrate this procedure for several state-of-the-art methods (BM3D, EPLL and NCSR), showing the potential gain in output quality for each. As in the original SOS, we manipulate the iterative algorithm by two parameters, better controlling its steady state and rate of convergence.},
  keywords = {image denoising;image restoration;iterative methods;recursive estimation;SOS boosting;image deblurring algorithms;image denoising algorithms;SOS scheme;denoising method;image deblurring task;SOS procedure;iterative steps;blurred input image;deblurring algorithm;iterative algorithm;Boosting;Signal processing algorithms;Image restoration;Noise reduction;Image denoising;Convergence;Task analysis},
  doi = {10.23919/EUSIPCO.2019.8902547},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570522718.pdf},
}
The work reported by Romano and Elad presents the SOS boosting - a generic recursive method for improving image denoising algorithms. Appealing properties of the SOS scheme are its flexibility to work with any denoising method, its simplicity, and its robustness. In this article we aim to generalize this to the image deblurring task. The proposed SOS procedure for deblurring is given by the following iterative steps: [S]trengthen the signal by blurring the output of the previous iteration and adding it to the blurred input image, [O]perate the deblurring algorithm on the strengthened image, and [S]ubtract the previous output from the current one. We demonstrate this procedure for several state-of-the-art methods (BM3D, EPLL and NCSR), showing the potential gain in output quality for each. As in the original SOS, we manipulate the iterative algorithm by two parameters, better controlling its steady state and rate of convergence.

A Parallel Optimization Approach on the Infinity Norm Minimization Problem.
Liu, T.; Hoang, M. T.; Yang, Y.; and Pesavento, M.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902548,
  author = {T. Liu and M. T. Hoang and Y. Yang and M. Pesavento},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {A Parallel Optimization Approach on the Infinity Norm Minimization Problem},
  year = {2019},
  pages = {1-5},
  abstract = {We consider the ℓ∞-norm minimization problem, which has been investigated in various practical applications. Based on an equivalent problem reformulation we propose an efficient algorithm that is suitable for implementation on parallel hardware architectures. Simulation results show that when applied to the peak-to-average ratio reduction problem, the algorithm achieves the solution obtained by the primal-dual hybrid gradient approach with proximal operator while significantly reducing the required running time for convergence.},
  keywords = {gradient methods;minimisation;parallel hardware architectures;peak-to-average ratio reduction problem;primal-dual hybrid gradient approach;parallel optimization approach;infinity norm minimization problem;equivalent problem reformulation;ℓ∞-norm minimization problem;proximal operator;Signal processing algorithms;Approximation algorithms;Minimization;Peak to average power ratio;Linear programming;Europe;Signal processing;l∞-norm minimization;peak-to-average power ratio (PAPR) reduction;successive convex approximation;parallel optimization},
  doi = {10.23919/EUSIPCO.2019.8902548},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529692.pdf},
}
We consider the ℓ∞-norm minimization problem, which has been investigated in various practical applications. Based on an equivalent problem reformulation we propose an efficient algorithm that is suitable for implementation on parallel hardware architectures. Simulation results show that when applied to the peak-to-average ratio reduction problem, the algorithm achieves the solution obtained by the primal-dual hybrid gradient approach with proximal operator while significantly reducing the required running time for convergence.

Joint Singing Voice Separation and F0 Estimation with Deep U-Net Architectures.
Jansson, A.; Bittner, R. M.; Ewert, S.; and Weyde, T.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902550,
  author = {A. Jansson and R. M. Bittner and S. Ewert and T. Weyde},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Joint Singing Voice Separation and F0 Estimation with Deep U-Net Architectures},
  year = {2019},
  pages = {1-5},
  abstract = {Vocal source separation and fundamental frequency estimation in music are tightly related tasks. The outputs of vocal source separation systems have previously been used as inputs to vocal fundamental frequency estimation systems; conversely, vocal fundamental frequency has been used as side information to improve vocal source separation. In this paper, we propose several different approaches for jointly separating vocals and estimating fundamental frequency. We show that joint learning is advantageous for these tasks, and that a stacked architecture which first performs vocal separation outperforms the other configurations considered. Furthermore, the best joint model achieves state-of-the-art results for vocal-f0 estimation on the iKala dataset. Finally, we highlight the importance of performing polyphonic, rather than monophonic, vocal-f0 estimation for many real-world cases.},
  keywords = {audio signal processing;frequency estimation;learning (artificial intelligence);music;source separation;speech processing;stacked architecture which first performs vocal separation;joint learning;estimating fundamental frequency;vocal fundamental frequency estimation systems;vocal source separation systems;vocal source separation;deep u-net architectures;F0 Estimation;voice separation;Source separation;Task analysis;Estimation;Computational modeling;Frequency estimation;Training;music;voice;singing;fundamental frequency estimation;pitch;melody;source separation;multitask learning},
  doi = {10.23919/EUSIPCO.2019.8902550},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532918.pdf},
}
Vocal source separation and fundamental frequency estimation in music are tightly related tasks. The outputs of vocal source separation systems have previously been used as inputs to vocal fundamental frequency estimation systems; conversely, vocal fundamental frequency has been used as side information to improve vocal source separation. In this paper, we propose several different approaches for jointly separating vocals and estimating fundamental frequency. We show that joint learning is advantageous for these tasks, and that a stacked architecture which first performs vocal separation outperforms the other configurations considered. Furthermore, the best joint model achieves state-of-the-art results for vocal-f0 estimation on the iKala dataset. Finally, we highlight the importance of performing polyphonic, rather than monophonic, vocal-f0 estimation for many real-world cases.

A Least Squares Narrowband DOA Estimator with Robustness Against Phase Wrapping.
Kabzinski, T.; and Habets, E. A. P.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902551,
  author = {T. Kabzinski and E. A. P. Habets},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {A Least Squares Narrowband DOA Estimator with Robustness Against Phase Wrapping},
  year = {2019},
  pages = {1-5},
  abstract = {Narrowband direction-of-arrival (DOA) estimates are commonly used for source localization, parametric spatial audio coding, and directional filtering. As previously shown, a linear least squares direction estimate can be obtained by minimizing the difference of expected and observed inter-microphone phase differences. In this work, it is shown that phase wrapping induces severe estimation errors especially at frequencies just below spatial aliasing frequencies and in low signal-to-noise ratios. A cost function to mitigate the influence of phase wrapping errors on the DOA estimation is proposed. Even though the proposed cost function is nonlinear, it is shown that one iteration of a gradient descent method with proper initialization provides a large improvement when compared to the linear least squares solution.},
  keywords = {audio coding;direction-of-arrival estimation;filtering theory;least squares approximations;source separation;linear least squares solution;least squares narrowband DOA estimator;narrowband direction-of-arrival;source localization;spatial audio coding;directional filtering;linear least squares direction estimate;spatial aliasing;signal-to-noise ratios;phase wrapping errors;DOA estimation;intermicrophone phase differences;Microphones;Direction-of-arrival estimation;Wrapping;Cost function;Estimation;Signal to noise ratio;Azimuth;Direction-of-arrival estimation;narrowband;microphone arrays;phase wrapping;source localization},
  doi = {10.23919/EUSIPCO.2019.8902551},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533287.pdf},
}
Narrowband direction-of-arrival (DOA) estimates are commonly used for source localization, parametric spatial audio coding, and directional filtering. As previously shown, a linear least squares direction estimate can be obtained by minimizing the difference of expected and observed inter-microphone phase differences. In this work, it is shown that phase wrapping induces severe estimation errors especially at frequencies just below spatial aliasing frequencies and in low signal-to-noise ratios. A cost function to mitigate the influence of phase wrapping errors on the DOA estimation is proposed. Even though the proposed cost function is nonlinear, it is shown that one iteration of a gradient descent method with proper initialization provides a large improvement when compared to the linear least squares solution.

Distributed Consensus-Based Extended Kalman Filtering: A Bayesian Perspective.
Wang, S.; and Dekorsy, A.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902553,
  author = {S. Wang and A. Dekorsy},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Distributed Consensus-Based Extended Kalman Filtering: A Bayesian Perspective},
  year = {2019},
  pages = {1-5},
  abstract = {In this paper, we study the distributed state estimation problem where a set of nodes cooperatively estimate the hidden state of a nonlinear dynamic system based on sequential observations. As a common approach to solve this problem, the extended Kalman filter (EKF) is considered from a Bayesian perspective. After linearizing the state-space model using the first-order Taylor series, we construct an equivalent maximum-a-posteriori (MAP) estimation problem under linear Gaussian assumptions coupled with a consensus constraint. The consensus-based MAP problem is solved distributedly by the alternating direction method of multipliers (ADMM). The resulting distributed algorithm ensures robust consensus-based state estimates among nodes and is able to converge to the central solution.},
  keywords = {Bayes methods;distributed algorithms;Kalman filters;nonlinear dynamical systems;nonlinear filters;state estimation;state-space methods;distributed consensus;Kalman filtering;Bayesian perspective;distributed state estimation problem;hidden state;nonlinear dynamic system;sequential observations;extended Kalman filter;state-space model;first-order Taylor series;equivalent maximum-a-posteriori estimation problem;consensus constraint;MAP problem;distributed algorithm;robust consensus-based state;Covariance matrices;Kalman filters;Estimation;Bayes methods;Taylor series;Mathematical model;Optimization;Nonlinear state estimation;distributed extended Kalman filter;maximum-a-posteriori estimation;consensus optimization},
  doi = {10.23919/EUSIPCO.2019.8902553},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530172.pdf},
}
In this paper, we study the distributed state estimation problem where a set of nodes cooperatively estimate the hidden state of a nonlinear dynamic system based on sequential observations. As a common approach to solve this problem, the extended Kalman filter (EKF) is considered from a Bayesian perspective. After linearizing the state-space model using the first-order Taylor series, we construct an equivalent maximum-a-posteriori (MAP) estimation problem under linear Gaussian assumptions coupled with a consensus constraint. The consensus-based MAP problem is solved distributedly by the alternating direction method of multipliers (ADMM). The resulting distributed algorithm ensures robust consensus-based state estimates among nodes and is able to converge to the central solution.

Two-Band Signal Reconstruction from Periodic Non-uniform Samples.
Guo, L. P.; Kok, C. W.; So, H. C.; and Tam, W. S.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902554,
  author = {L. P. Guo and C. W. Kok and H. C. So and W. S. Tam},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Two-Band Signal Reconstruction from Periodic Non-uniform Samples},
  year = {2019},
  pages = {1-5},
  abstract = {The Shannon sampling theorem states the lowest sampling rate for lowpass bandlimited signals. But for multiband bandlimited signals, it is inefficient to apply the Shannon sampling theorem. This is because the existence of gaps between successive bands makes it possible to realize sampling at a rate which is lower than the Nyquist rate and lower-bounded by the Nyquist-Landau rate. The Nyquist-Landau rate for multiband signals can be attained via periodic nonuniform sampling. However, it is still very challenging to find the sampling rate for multiband bandlimited signals such that the average sampling rate approaches the Nyquist-Landau rate. In this paper, we aim to find the feasible range of sub-Nyquist sampling rates (such that uniform sampling at this rate causes no aliasing) for two-band signals. An efficient method to find the constraints on the sampling frequency of two-band signals is devised. The normal placement and inverse placement of the spectrum are considered. Guard bands are considered to increase the robustness of the proposed sampling scheme. An analytical study is provided to obtain the allowable region of sampling frequencies. The derived low sampling rate ensures a relaxed requirement in terms of sampling, processing and memory.},
  keywords = {bandlimited signals;signal reconstruction;signal sampling;band signal reconstruction;periodic nonuniform samples;Shannon sampling theorem;lowest sampling rate;lowpass bandlimited signals;multiband bandlimited signals;successive bands;Nyquist rate;Nyquist-Landau rate;multiband signals;periodic nonuniform sampling;average sampling rate;sub-Nyquist sampling rate;uniform sampling;two-band signals;sampling frequency;sampling scheme;low sampling rate;Nonuniform sampling;Bandwidth;Dual band;Time-frequency analysis;Robustness;Signal reconstruction;Urban areas;two-band;reconstruction;periodic nonuniform sampling;sampling frequency range},
  doi = {10.23919/EUSIPCO.2019.8902554},
  issn = {2076-1465},
  url = {https://www.eurasip.org/Proceedings/Eusipco/eusipco2019/Proceedings/papers/1570531860.pdf},
  month = {Sep.},
}
The Shannon sampling theorem states the lowest sampling rate for lowpass bandlimited signals. But for multiband bandlimited signals, it is inefficient to apply the Shannon sampling theorem. This is because the existence of gaps between successive bands makes it possible to realize sampling at a rate which is lower than the Nyquist rate and lower-bounded by the Nyquist-Landau rate. The Nyquist-Landau rate for multiband signals can be attained via periodic nonuniform sampling. However, it is still very challenging to find the sampling rate for multiband bandlimited signals such that the average sampling rate approaches the Nyquist-Landau rate. In this paper, we aim to find the feasible range of sub-Nyquist sampling rates (such that uniform sampling at this rate causes no aliasing) for two-band signals. An efficient method to find the constraints on the sampling frequency of two-band signals is devised. The normal placement and inverse placement of the spectrum are considered. Guard bands are considered to increase the robustness of the proposed sampling scheme. An analytical study is provided to obtain the allowable region of sampling frequencies. The derived low sampling rate ensures a relaxed requirement in terms of sampling, processing and memory.

Optimized Reference Picture Selection for Light Field Image Coding.
Monteiro, R. J. S.; Rodrigues, N. M. M.; Faria, S. M. M.; and Nunes, P. J. L.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902555,
  author = {R. J. S. Monteiro and N. M. M. Rodrigues and S. M. M. Faria and P. J. L. Nunes},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Optimized Reference Picture Selection for Light Field Image Coding},
  year = {2019},
  pages = {1-5},
  abstract = {This paper proposes a new reference picture selection method for light field image coding using the pseudo-video sequence (PVS) format. State-of-the-art solutions to encode light field images using the PVS format rely on video coding standards to exploit the inter-view redundancy between each sub-aperture image (SAI) that composes the light field. However, the PVS scanning order is not usually considered by the video codec. The proposed solution signals the PVS scanning order to the decoder, enabling implicit optimized reference picture selection for each specific scanning order. With the proposed method each reference picture is selected by minimizing the Euclidean distance to the current SAI being encoded. Experimental results show that, for the same PVS scanning order, the proposed optimized reference picture selection codec outperforms the HEVC video coding standard for light field image coding, up to 50% in terms of bitrate savings.},
  keywords = {image sequences;video codecs;video coding;light field image coding;pseudovideo sequence format;PVS format;video coding standards;sub-aperture image;PVS scanning order;optimized reference picture selection codec;HEVC video coding standard;Decoding;Encoding;Image coding;Codecs;Redundancy;Spirals;Euclidean distance;Light Field Image Coding;Pseudo-video sequence;optimized reference picture selection;HEVC},
  doi = {10.23919/EUSIPCO.2019.8902555},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533813.pdf},
}
This paper proposes a new reference picture selection method for light field image coding using the pseudo-video sequence (PVS) format. State-of-the-art solutions to encode light field images using the PVS format rely on video coding standards to exploit the inter-view redundancy between each sub-aperture image (SAI) that composes the light field. However, the PVS scanning order is not usually considered by the video codec. The proposed solution signals the PVS scanning order to the decoder, enabling implicit optimized reference picture selection for each specific scanning order. With the proposed method each reference picture is selected by minimizing the Euclidean distance to the current SAI being encoded. Experimental results show that, for the same PVS scanning order, the proposed optimized reference picture selection codec outperforms the HEVC video coding standard for light field image coding, up to 50% in terms of bitrate savings.

Face-aware Saliency Estimation Model for 360° Images.
Mazumdar, P.; Arru, G.; Carli, M.; and Battisti, F.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902556,
  author = {P. Mazumdar and G. Arru and M. Carli and F. Battisti},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Face-aware Saliency Estimation Model for 360° Images},
  year = {2019},
  pages = {1-5},
  abstract = {In this paper, a saliency estimation technique for omni-directional images is presented. Traditional approaches for estimating 360° image saliency rely on the exploitation of low and high-level image features, along with auxiliary data, such as head movement or eye-gazes. However, the image content plays an important role in saliency estimation. Based on this evidence, in the proposed method low-level features are combined with the detection of human faces. In this way it is possible to refine the saliency estimation based on the low-level features by assigning a larger weight to the regions containing faces. Experimental results on a 360° image dataset show the effectiveness of the proposed approach.},
  keywords = {face recognition;feature extraction;object detection;saliency estimation technique;omni-directional images;360° image saliency;high-level image features;image content;method low-level features;human faces;360° image dataset;face-aware saliency estimation model;Faces;Estimation;Two dimensional displays;Feature extraction;Visualization;Entropy;Omni-directional images;saliency;face detection;low-level features},
  doi = {10.23919/EUSIPCO.2019.8902556},
  issn = {2076-1465},
  url = {https://www.eurasip.org/Proceedings/Eusipco/eusipco2019/Proceedings/papers/1570534143.pdf},
  month = {Sep.},
}
In this paper, a saliency estimation technique for omni-directional images is presented. Traditional approaches for estimating 360° image saliency rely on the exploitation of low and high-level image features, along with auxiliary data, such as head movement or eye-gazes. However, the image content plays an important role in saliency estimation. Based on this evidence, in the proposed method low-level features are combined with the detection of human faces. In this way it is possible to refine the saliency estimation based on the low-level features by assigning a larger weight to the regions containing faces. Experimental results on a 360° image dataset show the effectiveness of the proposed approach.

Fast Multichannel Source Separation Based on Jointly Diagonalizable Spatial Covariance Matrices.
Sekiguchi, K.; Nugraha, A. A.; Bando, Y.; and Yoshii, K.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902557,
  author = {K. Sekiguchi and A. A. Nugraha and Y. Bando and K. Yoshii},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Fast Multichannel Source Separation Based on Jointly Diagonalizable Spatial Covariance Matrices},
  year = {2019},
  pages = {1-5},
  abstract = {This paper describes a versatile method that accelerates multichannel source separation methods based on full-rank spatial modeling. A popular approach to multichannel source separation is to integrate a spatial model with a source model for estimating the spatial covariance matrices (SCMs) and power spectral densities (PSDs) of each sound source in the time-frequency domain. One of the most successful examples of this approach is multichannel nonnegative matrix factorization (MNMF) based on a full-rank spatial model and a low-rank source model. MNMF, however, is computationally expensive and often works poorly due to the difficulty of estimating the unconstrained full-rank SCMs. Instead of restricting the SCMs to rank-1 matrices with the severe loss of the spatial modeling ability as in independent low-rank matrix analysis (ILRMA), we restrict the SCMs of each frequency bin to jointly-diagonalizable but still full-rank matrices. For such a fast version of MNMF, we propose a computationally-efficient and convergence-guaranteed algorithm that is similar in form to that of ILRMA. Similarly, we propose a fast version of a state-of-the-art speech enhancement method based on a deep speech model and a low-rank noise model. Experimental results showed that the fast versions of MNMF and the deep speech enhancement method were several times faster and performed even better than the original versions of those methods, respectively.},
  keywords = {audio signal processing;blind source separation;covariance matrices;matrix decomposition;source separation;speech enhancement;MNMF;deep speech enhancement method;fast multichannel source separation;jointly diagonalizable spatial covariance matrices;multichannel source separation methods;full-rank spatial modeling;sound source;multichannel nonnegative matrix factorization;low-rank source model;full-rank SCMs;spatial modeling ability;low-rank matrix analysis;full-rank matrices;deep speech model;low-rank noise model;Speech enhancement;Computational modeling;Analytical models;Covariance matrices;Source separation;Signal processing algorithms;Data models;Multichannel source separation;speech enhancement;spatial modeling;joint diagonalization},
  doi = {10.23919/EUSIPCO.2019.8902557},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533283.pdf},
}
This paper describes a versatile method that accelerates multichannel source separation methods based on full-rank spatial modeling. A popular approach to multichannel source separation is to integrate a spatial model with a source model for estimating the spatial covariance matrices (SCMs) and power spectral densities (PSDs) of each sound source in the time-frequency domain. One of the most successful examples of this approach is multichannel nonnegative matrix factorization (MNMF) based on a full-rank spatial model and a low-rank source model. MNMF, however, is computationally expensive and often works poorly due to the difficulty of estimating the unconstrained full-rank SCMs. Instead of restricting the SCMs to rank-1 matrices with the severe loss of the spatial modeling ability as in independent low-rank matrix analysis (ILRMA), we restrict the SCMs of each frequency bin to jointly-diagonalizable but still full-rank matrices. For such a fast version of MNMF, we propose a computationally-efficient and convergence-guaranteed algorithm that is similar in form to that of ILRMA. Similarly, we propose a fast version of a state-of-the-art speech enhancement method based on a deep speech model and a low-rank noise model. Experimental results showed that the fast versions of MNMF and the deep speech enhancement method were several times faster and performed even better than the original versions of those methods, respectively.

Graph Spectral Domain Features for Static Hand Gesture Recognition.
Alwaely, B.; and Abhayaratne, C.
In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902558,\n  author = {B. Alwaely and C. Abhayaratne},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Graph Spectral Domain Features for Static Hand Gesture Recognition},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The graph spectral processing is gaining increasing interest in the computer vision society because of its ability to characterize the shape. However, the graph spectral methods are usually high computational cost and one solution to simplify the problem is to automatically divide the graph into several sub-graphs. Therefore, we utilize a graph spectral domain feature representation based on the shape silhouette and we introduce a fully automatic divisive hierarchical clustering method based on the shape skeleton for static hand gesture recognition. In particular, we establish the ability of the Fiedler vector for partitioning 3D shapes. Several rules are applied to achieve a stable graph segmentation. The generated sub-graphs are used for matching purposes. 
Supporting results based on several datasets demonstrate the performance of the proposed method compared to the state-of-the-art methods by increment 0.3% and 3.8% for two datasets.},\n  keywords = {computer vision;feature extraction;gesture recognition;graph theory;image representation;image segmentation;pattern clustering;computer vision society;computational cost;graph spectral domain feature representation;shape silhouette;fully automatic divisive hierarchical clustering method;shape skeleton;static hand gesture recognition;partitioning 3D shapes;stable graph segmentation;graph spectral processing;Shape;Skeleton;Spectral analysis;Gesture recognition;Three-dimensional displays;Feature extraction;Europe;Hand gesture recognition;Graph spectral features;Graph partitioning;Fiedler vector;Shape matching.},\n  doi = {10.23919/EUSIPCO.2019.8902558},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533631.pdf},\n}\n\n
\n
\n\n\n
\n Graph spectral processing is gaining increasing interest in the computer vision community because of its ability to characterize shape. However, graph spectral methods usually have a high computational cost, and one solution to simplify the problem is to automatically divide the graph into several sub-graphs. Therefore, we utilize a graph spectral domain feature representation based on the shape silhouette, and we introduce a fully automatic divisive hierarchical clustering method based on the shape skeleton for static hand gesture recognition. In particular, we establish the ability of the Fiedler vector to partition 3D shapes. Several rules are applied to achieve a stable graph segmentation. The generated sub-graphs are used for matching purposes. Supporting results on several datasets demonstrate that the proposed method outperforms state-of-the-art methods by 0.3% and 3.8% on two datasets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Algorithms for Piecewise Constant Signal Approximations.\n \n \n \n \n\n\n \n Bergerhoff, L.; Weickert, J.; and Dar, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AlgorithmsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902559,\n  author = {L. Bergerhoff and J. Weickert and Y. Dar},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Algorithms for Piecewise Constant Signal Approximations},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We consider the problem of finding optimal piecewise constant approximations of one-dimensional signals. These approximations should consist of a specified number of segments (samples) and minimise the mean squared error to the original signal. We formalise this goal as a discrete nonconvex optimisation problem, for which we study two algorithms. First we reformulate a recent adaptive sampling method by Dar and Bruckstein in a compact and transparent way. This allows us to analyse its limitations when it comes to violations of its three key assumptions: signal smoothness, local linearity, and error balancing. As a remedy, we propose a direct optimisation approach which does not rely on any of these assumptions and employs a particle swarm optimisation algorithm. Our experiments show that for nonsmooth signals or low sample numbers, the direct optimisation approach offers substantial qualitative advantages over the Dar-Bruckstein method. 
As a more general contribution, we disprove the optimality of the principle of error balancing for optimising data in the ℓ2 norm.},\n  keywords = {concave programming;mean square error methods;particle swarm optimisation;piecewise constant techniques;signal sampling;piecewise constant signal approximations;one-dimensional signals;mean squared error;discrete nonconvex optimisation problem;signal smoothness;error balancing;direct optimisation approach;particle swarm optimisation algorithm;nonsmooth signals;Dar-Bruckstein method;adaptive sampling method;local linearity;Optimization;Linearity;Signal processing algorithms;Europe;Signal processing;Approximation algorithms;Particle swarm optimization;Adaptive Signal Processing;Nonuniform Sampling;Nonconvex optimisation;Particle Swarm optimisation;Segmentation},\n  doi = {10.23919/EUSIPCO.2019.8902559},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529555.pdf},\n}\n\n
\n
\n\n\n
\n We consider the problem of finding optimal piecewise constant approximations of one-dimensional signals. These approximations should consist of a specified number of segments (samples) and minimise the mean squared error to the original signal. We formalise this goal as a discrete nonconvex optimisation problem, for which we study two algorithms. First we reformulate a recent adaptive sampling method by Dar and Bruckstein in a compact and transparent way. This allows us to analyse its limitations when it comes to violations of its three key assumptions: signal smoothness, local linearity, and error balancing. As a remedy, we propose a direct optimisation approach which does not rely on any of these assumptions and employs a particle swarm optimisation algorithm. Our experiments show that for nonsmooth signals or low sample numbers, the direct optimisation approach offers substantial qualitative advantages over the Dar-Bruckstein method. As a more general contribution, we disprove the optimality of the principle of error balancing for optimising data in the ℓ2 norm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Introducing SPAIN (SParse Audio INpainter).\n \n \n \n \n\n\n \n Mokrý, O.; Záviška, P.; Rajmic, P.; and Veselý, V.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"IntroducingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902560,\n  author = {O. Mokrý and P. Záviška and P. Rajmic and V. Veselý},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Introducing SPAIN (SParse Audio INpainter)},\n  year = {2019},\n  pages = {1-5},\n  abstract = {A novel sparsity-based algorithm for audio inpainting is proposed. It is an adaptation of the SPADE algorithm by Kitić et al., originally developed for audio declipping, to the task of audio inpainting. The new SPAIN (SParse Audio INpainter) comes in synthesis and analysis variants. Experiments show that both A-SPAIN and S-SPAIN outperform other sparsity-based inpainting algorithms. Moreover, A-SPAIN performs on a par with the state-of-the-art method based on linear prediction in terms of the SNR, and, for larger gaps, SPAIN is even slightly better in terms of the PEMO-Q psychoacoustic criterion.},\n  keywords = {audio signal processing;PEMO-Q psychoacoustic criterion;SNR;sparsity-based inpainting algorithms;sparse audio inpainter;S-SPAIN;A-SPAIN;audio declipping;SPADE algorithm;Task analysis;Signal processing algorithms;Approximation algorithms;Signal to noise ratio;Reliability;Time-domain analysis;Inpainting;Sparse;Cosparse;Synthesis;Analysis},\n  doi = {10.23919/EUSIPCO.2019.8902560},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533596.pdf},\n}\n\n
\n
\n\n\n
\n A novel sparsity-based algorithm for audio inpainting is proposed. It is an adaptation of the SPADE algorithm by Kitić et al., originally developed for audio declipping, to the task of audio inpainting. The new SPAIN (SParse Audio INpainter) comes in synthesis and analysis variants. Experiments show that both A-SPAIN and S-SPAIN outperform other sparsity-based inpainting algorithms. Moreover, A-SPAIN performs on a par with the state-of-the-art method based on linear prediction in terms of the SNR, and, for larger gaps, SPAIN is even slightly better in terms of the PEMO-Q psychoacoustic criterion.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An ensemble learning approach for the classification of remote sensing scenes based on covariance pooling of CNN features.\n \n \n \n \n\n\n \n Akodad, S.; Vilfroy, S.; Bombrun, L.; Cavalcante, C. C.; Germain, C.; and Berthoumieu, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902561,\n  author = {S. Akodad and S. Vilfroy and L. Bombrun and C. C. Cavalcante and C. Germain and Y. Berthoumieu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {An ensemble learning approach for the classification of remote sensing scenes based on covariance pooling of CNN features},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper aims at presenting a novel ensemble learning approach based on the concept of covariance pooling of CNN features issued from a pretrained model. Starting from a supervised classification algorithm, named multilayer stacked covariance pooling (MSCP), which exploits simultaneously second order statistics and deep learning features, we propose an alternative strategy which employs an ensemble learning approach among the stacked convolutional feature maps. The aggregation of multiple learning algorithm decisions, produced by different stacked subsets, permits to obtain a better predictive classification performance. An application for the classification of large scale remote sensing images is next proposed. The experimental results, conducted on two challenging datasets, namely UC Merced and AID datasets, improve the classification accuracy while maintaining a low computation time. 
This confirms, besides the interest of exploiting second order statistics, the benefit of adopting an ensemble learning approach.},\n  keywords = {convolutional neural nets;feature extraction;geophysical image processing;higher order statistics;image classification;image representation;learning (artificial intelligence);remote sensing;multilayer stacked covariance pooling;second order statistics;deep learning features;ensemble learning approach;stacked convolutional feature maps;multiple learning algorithm decisions;predictive classification performance;classification accuracy;remote sensing scenes;CNN features;supervised classification algorithm;large scale remote sensing images;low computation time;AID datasets;UC Merced datasets;Covariance matrices;Convolution;Encoding;Nonhomogeneous media;Remote sensing;Signal processing algorithms;Computational modeling;Covariance pooling;pretrained CNN models;multilayer feature maps;ensemble learning approach;remote sensing scene classification},\n  doi = {10.23919/EUSIPCO.2019.8902561},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532724.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a novel ensemble learning approach based on covariance pooling of CNN features extracted from a pretrained model. Starting from a supervised classification algorithm, named multilayer stacked covariance pooling (MSCP), which simultaneously exploits second-order statistics and deep learning features, we propose an alternative strategy which applies an ensemble learning approach to the stacked convolutional feature maps. Aggregating the decisions of multiple learning algorithms, produced by different stacked subsets, yields better predictive classification performance. An application to the classification of large-scale remote sensing images is then proposed. Experiments on two challenging datasets, namely the UC Merced and AID datasets, show improved classification accuracy while maintaining a low computation time. This confirms, besides the interest of exploiting second-order statistics, the benefit of adopting an ensemble learning approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Detecting Early Parkinson’s Disease from Keystroke Dynamics using the Tensor-Train Decomposition.\n \n \n \n \n\n\n \n Hooman, O. M. J.; Oldfield, J.; and Nicolaou, M. A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DetectingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902562,\n  author = {O. M. J. Hooman and J. Oldfield and M. A. Nicolaou},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Detecting Early Parkinson’s Disease from Keystroke Dynamics using the Tensor-Train Decomposition},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We present a method for detecting early signs of Parkinson's disease from keystroke hold times that is based on the Tensor-Train (TT) decomposition. While simple uni-variate methods such as logistic regression have shown good performance on the given problem by using appropriate features, the TT format facilitates modelling high-order interactions by representing the exponentially large parameter tensor in a compact multi-linear form. By performing time-series feature extraction based on scalable hypothesis testing, we show that the proposed approach can significantly improve upon state-of-the-art for the given problem, reaching a performance of AUC=0.88, outperforming compared methods such as deep neural networks on the problem of detecting early Parkinson's disease from keystroke dynamics.},\n  keywords = {diseases;feature extraction;medical computing;medical disorders;neural nets;tensors;time series;time-series feature extraction;Parkinson's disease;keystroke dynamics;tensor-train decomposition;TT;uni-variate methods;logistic regression;high-order interactions;parameter tensor;hypothesis testing;deep neural networks;Tensors;Feature extraction;Parkinson's disease;Logistics;Data models;Recurrent neural networks;Signal processing;Tensor Decomposition;Tensor Train;Feature Extraction;Parkinson’s Disease},\n  doi = {10.23919/EUSIPCO.2019.8902562},\n  issn = {2076-1465},\n  url={https://www.eurasip.org/Proceedings/Eusipco/eusipco2019/Proceedings/papers/1570533569.pdf},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n We present a method for detecting early signs of Parkinson's disease from keystroke hold times that is based on the Tensor-Train (TT) decomposition. While simple univariate methods such as logistic regression have shown good performance on the given problem by using appropriate features, the TT format facilitates modelling high-order interactions by representing the exponentially large parameter tensor in a compact multi-linear form. By performing time-series feature extraction based on scalable hypothesis testing, we show that the proposed approach can significantly improve upon the state of the art for the given problem, reaching a performance of AUC=0.88 and outperforming compared methods such as deep neural networks on the problem of detecting early Parkinson's disease from keystroke dynamics.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Highly Reliable Wrist-Worn Acceleration-Based Fall Detector.\n \n \n \n \n\n\n \n SALEH, M.; GEORGI, N.; ABBAS, M.; and JEANNÈS, R. L. B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902563,\n  author = {M. SALEH and N. GEORGI and M. ABBAS and R. L. B. JEANNÈS},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Highly Reliable Wrist-Worn Acceleration-Based Fall Detector},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Automatic fall detection for the elderly is one of the most important health-care applications since it enables a rapid medical intervention preventing serious consequences of falls. Wrist-worn fall detectors represent one of the most convenient solutions. However, power consumption has a notable impact on the acceptability of such devices since it affects the size and weight of the required battery and the rate of replacing/recharging it. In this paper, an acceleration-based fall detection system is proposed for wrist-worn devices. It consists of two stages. The first one is a highly-sensitive low computational complexity algorithm to be embedded in the wearable device. When a potential fall is detected, raw data are transmitted to a remote server for accurate analysis in order to reduce the number of false alarms. The second stage algorithm is based on machine learning and applied to highly discriminant features. The latter are selected using powerful feature selection algorithms where the input is 12000 features extracted from each entry of a large activity dataset. The proposed system achieved an accuracy of 100% when evaluated on a 2400-file dataset. 
Moreover, the feasibility of the proposed system has been validated in real world conditions where it has been realized and tested using a smart watch and a server.},\n  keywords = {accelerometers;biomedical equipment;body sensor networks;feature extraction;geriatrics;learning (artificial intelligence);medical computing;patient monitoring;wearable computers;wrist-worn fall detectors;power consumption;acceleration-based fall detection system;wrist-worn devices;wearable device;feature selection algorithms;automatic fall detection;health-care applications;wrist-worn acceleration;smart watch;Feature extraction;Acceleration;Detectors;Machine learning algorithms;Servers;Accelerometers;fall detection;machine learning;elderly health-care;wearable sensors;feature selection},\n  doi = {10.23919/EUSIPCO.2019.8902563},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533878.pdf},\n}\n\n
\n
\n\n\n
\n Automatic fall detection for the elderly is one of the most important health-care applications, since it enables rapid medical intervention that prevents serious consequences of falls. Wrist-worn fall detectors represent one of the most convenient solutions. However, power consumption has a notable impact on the acceptability of such devices, since it affects the size and weight of the required battery and the rate of replacing/recharging it. In this paper, an acceleration-based fall detection system is proposed for wrist-worn devices. It consists of two stages. The first is a highly sensitive, low-computational-complexity algorithm to be embedded in the wearable device. When a potential fall is detected, raw data are transmitted to a remote server for accurate analysis in order to reduce the number of false alarms. The second-stage algorithm is based on machine learning and applied to highly discriminant features. The latter are selected using powerful feature selection algorithms whose input is 12000 features extracted from each entry of a large activity dataset. The proposed system achieved an accuracy of 100% when evaluated on a 2400-file dataset. Moreover, the feasibility of the proposed system has been validated in real-world conditions, where it was implemented and tested using a smart watch and a server.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A sparse and prior based method for 3D image denoising.\n \n \n \n \n\n\n \n Abascal, J. F. P. J.; Si-Mohamed, S.; Douek, P.; Chappard, C.; and Peyrin, F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902564,\n  author = {J. F. P. J. Abascal and S. Si-Mohamed and P. Douek and C. Chappard and F. Peyrin},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A sparse and prior based method for 3D image denoising},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Denoising algorithms via sparse representation are among the state-of-the art for 2D image restoration. In this work, we propose a novel sparse and prior-based algorithm for 3D image denoising (SPADE). SPADE is a modification of total variation (TV) problem with an additional functional that promotes sparsity with respect to a prior image. The prior is obtained from the noisy image by combining information from neighbor slices. The functional is minimized using the split Bregman method, which leads to an efficient method for large scale 3D denoising, with computational cost given by three FFT per iteration. SPADE is compared to TV and dictionary learning on the Shepp-Logan phantom and on human knee data acquired on a spectral computerized tomography scanner. SPADE converges in approximately ten iterations and provides comparable or better results than the other methods. 
In addition, the exploitation of the prior image avoids the patchy, cartoon-like images provided by TV and provides a more natural texture.},\n  keywords = {computerised tomography;image denoising;image reconstruction;image restoration;image texture;iterative methods;medical image processing;phantoms;scale 3D denoising;SPADE;patchy cartoon-like images;sparse based method;3D image denoising;denoising algorithms;sparse representation;2D image restoration;prior-based algorithm;total variation problem;noisy image;split Bregman method;Shepp-Logan phantom;human knee data acquisition;spectral computerized tomography scanner;TV;Three-dimensional displays;Noise reduction;Noise measurement;Phantoms;Convergence;Image edge detection;image denoising;total variation;split Bregman;spectral CT},\n  doi = {10.23919/EUSIPCO.2019.8902564},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533599.pdf},\n}\n\n
\n
\n\n\n
\n Denoising algorithms via sparse representation are among the state of the art for 2D image restoration. In this work, we propose a novel sparse and prior-based algorithm for 3D image denoising (SPADE). SPADE is a modification of the total variation (TV) problem with an additional functional that promotes sparsity with respect to a prior image. The prior is obtained from the noisy image by combining information from neighboring slices. The functional is minimized using the split Bregman method, which leads to an efficient method for large-scale 3D denoising, with a computational cost of three FFTs per iteration. SPADE is compared to TV and dictionary learning on the Shepp-Logan phantom and on human knee data acquired on a spectral computerized tomography scanner. SPADE converges in approximately ten iterations and provides comparable or better results than the other methods. In addition, the exploitation of the prior image avoids the patchy, cartoon-like images produced by TV and yields a more natural texture.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Complexity Scalable HEVC-to-AV1 Transcoding Based on Coding Tree Depth Inheritance.\n \n \n \n \n\n\n \n Borges, A.; Zatt, B.; Porto, M.; Agostini, L.; and Correa, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ComplexityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902565,\n  author = {A. Borges and B. Zatt and M. Porto and L. Agostini and G. Correa},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Complexity Scalable HEVC-to-AV1 Transcoding Based on Coding Tree Depth Inheritance},\n  year = {2019},\n  pages = {1-5},\n  abstract = {With the advent of the recently launched AOMedia Video 1 (AV1) bitstream specification, there is currently a need for converting legacy content encoded with the state-of-the-art High Efficiency Video Coding (HEVC) standard to the new format. However, transcoding is a complex task composed of a decoding and an encoding process in sequence, which requires long processing time and high energy consumption. This paper proposes a complexity scalable HEVC-to-AV1 transcoding solution with three operation modes, which allow adjusting the tradeoff between encoding efficiency and computational cost. The solution is based on the high correlation between block size decisions in HEVC and AV1, allowing the AV1 encoder to inherit Coding Tree depth information from the HEVC bitstream to constrain the AV1 re-encoding process. Experimental results for the three operation modes show a transcoding time reduction between 35.4% and 69.5% at the cost of a compression efficiency loss that varies between 4.9% and 16.8%.},\n  keywords = {data compression;transcoding;video coding;high efficiency video coding standard;complexity scalable HEVC-to-AV1 transcoding;AV1 encoder;HEVC bitstream;AV1 re-encoding process;AOMedia Video 1 bitstream specification;coding tree depth inheritance;Transcoding;Complexity theory;Correlation;Streaming media;High efficiency video coding;Copper;transcoding;HEVC;AV1;video coding;complexity reduction},\n  doi = {10.23919/EUSIPCO.2019.8902565},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532549.pdf},\n}\n\n
\n
\n\n\n
\n With the advent of the recently launched AOMedia Video 1 (AV1) bitstream specification, there is currently a need for converting legacy content encoded with the state-of-the-art High Efficiency Video Coding (HEVC) standard to the new format. However, transcoding is a complex task composed of a decoding and an encoding process in sequence, which requires long processing time and high energy consumption. This paper proposes a complexity scalable HEVC-to-AV1 transcoding solution with three operation modes, which allow adjusting the tradeoff between encoding efficiency and computational cost. The solution is based on the high correlation between block size decisions in HEVC and AV1, allowing the AV1 encoder to inherit Coding Tree depth information from the HEVC bitstream to constrain the AV1 re-encoding process. Experimental results for the three operation modes show a transcoding time reduction between 35.4% and 69.5% at the cost of a compression efficiency loss that varies between 4.9% and 16.8%.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Active Acoustic Source Tracking Exploiting Particle Filtering and Monte Carlo Tree Search.\n \n \n \n \n\n\n \n Haubner, T.; Schmidt, A.; and Kellermann, W.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ActivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902566,\n  author = {T. Haubner and A. Schmidt and W. Kellermann},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Active Acoustic Source Tracking Exploiting Particle Filtering and Monte Carlo Tree Search},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we address the task of active acoustic source tracking as part of robotic path planning. It denotes the planning of sequences of robotic movements to enhance tracking results of acoustic sources, e.g., talking humans, by fusing observations from multiple positions. Essentially, two strategies are possible: short-term planning, which results in greedy behavior, and long-term planning, which considers a sequence of possible future movements of the robot and the source. Here, we focus on the second method as it might improve tracking performance compared to greedy behavior and propose a path planning algorithm which exploits Monte Carlo Tree Search (MCTS) and particle filtering, based on a reward motivated by information-theoretic considerations. By representing the state posterior by weighted particles, we are capable of modelling arbitrary probability density functions (PDF)s and dealing with highly nonlinear state-space models.},\n  keywords = {mobile robots;Monte Carlo methods;particle filtering (numerical methods);path planning;tree searching;robotic path planning;robotic movements;greedy behavior;tracking performance;weighted particles;particle filtering;active acoustic source tracking;Monte Carlo tree search;MCTS;Robot sensing systems;Planning;Path planning;Monte Carlo methods;Microphones;Acoustics;Active source tracking;particle filter;Monte Carlo tree search;path planning},\n  doi = {10.23919/EUSIPCO.2019.8902566},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531175.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we address the task of active acoustic source tracking as part of robotic path planning. It denotes the planning of sequences of robotic movements to enhance the tracking of acoustic sources, e.g., talking humans, by fusing observations from multiple positions. Essentially, two strategies are possible: short-term planning, which results in greedy behavior, and long-term planning, which considers a sequence of possible future movements of the robot and the source. Here, we focus on the second method, as it might improve tracking performance compared to greedy behavior, and propose a path planning algorithm which exploits Monte Carlo Tree Search (MCTS) and particle filtering, based on a reward motivated by information-theoretic considerations. By representing the state posterior by weighted particles, we are capable of modelling arbitrary probability density functions (PDFs) and dealing with highly nonlinear state-space models.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Self-supervised Attention Model for Weakly Labeled Audio Event Classification.\n \n \n \n \n\n\n \n Kim, B.; and Ghaffarzadegan, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Self-supervisedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902567,\n  author = {B. Kim and S. Ghaffarzadegan},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Self-supervised Attention Model for Weakly Labeled Audio Event Classification},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We describe a novel weakly labeled Audio Event Classification approach based on a self-supervised attention model. The weakly labeled framework is used to eliminate the need for expensive data labeling procedure and self-supervised attention is deployed to help a model distinguish between relevant and irrelevant parts of a weakly labeled audio clip in a more effective manner compared to prior attention models. We also propose a highly effective strongly supervised attention model when strong labels are available. This model also serves as an upper bound for the self-supervised model. The performances of the model with self-supervised attention training are comparable to the strongly supervised one which is trained using strong labels. We show that our self-supervised attention method is especially beneficial for short audio events. 
We achieve 8.8% and 17.6% relative mean average precision improvements over the current state-of-the-art systems for SL-DCASE-17and balanced AudioSet.},\n  keywords = {acoustic signal processing;audio signal processing;signal classification;self-supervised attention model;weakly labeled framework;expensive data labeling procedure;weakly labeled audio clip;prior attention models;highly effective strongly supervised attention model;strong labels;self-supervised model;self-supervised attention training;self-supervised attention method;short audio events;weakly labeled audio event classification;audio event classification approach;Training;Data models;Computational modeling;Task analysis;YouTube;Computer architecture;Europe;Weakly labeled audio classification;attention model;deep learning},\n  doi = {10.23919/EUSIPCO.2019.8902567},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532943.pdf},\n}\n\n
\n
\n\n\n
\n We describe a novel weakly labeled Audio Event Classification approach based on a self-supervised attention model. The weakly labeled framework is used to eliminate the need for an expensive data labeling procedure, and self-supervised attention is deployed to help the model distinguish between relevant and irrelevant parts of a weakly labeled audio clip more effectively than prior attention models. We also propose a highly effective strongly supervised attention model for when strong labels are available; this model also serves as an upper bound for the self-supervised model. The performance of the model with self-supervised attention training is comparable to that of the strongly supervised one trained using strong labels. We show that our self-supervised attention method is especially beneficial for short audio events. We achieve 8.8% and 17.6% relative mean average precision improvements over the current state-of-the-art systems for SL-DCASE-17 and balanced AudioSet.\n
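The underlying idea — letting an attention mechanism weight informative frames of a weakly labeled clip — can be sketched with generic softmax attention pooling (an illustrative stand-in, not the authors' self-supervised variant; all array and weight names are hypothetical):

```python
import numpy as np

def attention_pool(frame_emb, w_att, w_cls):
    """Pool per-frame class scores with learned attention weights so that
    informative frames dominate the clip-level prediction."""
    scores = frame_emb @ w_cls            # (T, n_classes) per-frame logits
    att = frame_emb @ w_att               # (T, 1) attention logits
    alpha = np.exp(att - att.max())
    alpha /= alpha.sum()                  # softmax over time
    return (alpha * scores).sum(axis=0)   # clip-level logits
```

Only the clip-level output needs a (weak) label; the attention weights `alpha` are learned without frame-level annotation.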
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Detection, Enumeration and Localization of Underwater Acoustic Sources.\n \n \n \n \n\n\n \n Nagesha, P. V.; Anand, G. V.; Kalyanasundaram, N.; and Gurugopinath, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Detection,Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902568,\n  author = {P. V. Nagesha and G. V. Anand and N. Kalyanasundaram and S. Gurugopinath},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Detection, Enumeration and Localization of Underwater Acoustic Sources},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The problem of passive detection, enumeration and localization of underwater objects is of great interest in several applications of sonar. Source localization requires prior knowledge of the number of sources. Therefore, detection is followed by estimation of the number of sources, and finally localization. Normally, each of the aforementioned objectives is treated as a different problem, and separate processing techniques are employed to solve them. In this paper, we propose two techniques, viz. the embedded subspace detector (ESD) and the Bartlett processor detector (BPD), for joint detection and enumeration of underwater acoustic sources using a vertical linear array of sensors. The BPD is also capable of simultaneously estimating the range and depth of each source. 
Simulation results indicate that the proposed detectors can achieve good detection and enumeration at low signal-to-noise ratio, and that their enumeration performance compares very favorably with those of established source enumeration techniques such as Akaike information criterion and minimum description length.},\n  keywords = {acoustic signal processing;array signal processing;underwater acoustic communication;underwater sound;underwater acoustic sources;underwater objects;source localization;separate processing techniques;embedded subspace detector;Bartlett processor detector;BPD;established source enumeration techniques;Detectors;Sensor arrays;Electrostatic discharges;Array signal processing;Oceans;Covariance matrices},\n  doi = {10.23919/EUSIPCO.2019.8902568},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533755.pdf},\n}\n\n
\n
\n\n\n
\n The problem of passive detection, enumeration and localization of underwater objects is of great interest in several applications of sonar. Source localization requires prior knowledge of the number of sources. Therefore, detection is followed by estimation of the number of sources, and finally localization. Normally, each of the aforementioned objectives is treated as a different problem, and separate processing techniques are employed to solve them. In this paper, we propose two techniques, viz. the embedded subspace detector (ESD) and the Bartlett processor detector (BPD), for joint detection and enumeration of underwater acoustic sources using a vertical linear array of sensors. The BPD is also capable of simultaneously estimating the range and depth of each source. Simulation results indicate that the proposed detectors can achieve good detection and enumeration at low signal-to-noise ratio, and that their enumeration performance compares very favorably with those of established source enumeration techniques such as Akaike information criterion and minimum description length.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A parallel sparse regularization method for structured multilinear low-rank tensor decomposition.\n \n \n \n \n\n\n \n Kushe, G.; Yang, Y.; Steffens, C.; and Pesavento, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902569,\n  author = {G. Kushe and Y. Yang and C. Steffens and M. Pesavento},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A parallel sparse regularization method for structured multilinear low-rank tensor decomposition},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper we consider the structured multilinear low-rank tensor decomposition problem where group sparsity is enforced using nuclear norm regularization. We adopt the recently proposed sequential convex approximation approach to develop an optimization algorithm suitable for implementation on modern parallel hardware architectures. An existing optimization algorithm for this non-convex and non-differentable optimization problem relies on a lifting approach. For large problem dimensions the lifting procedure is, however, inefficient as it drastically increases the number of optimization variables. Our proposed algorithm does not require lifting and directly operates on the original parameters space. 
We demonstrate the performance gains in terms of convergence speed of the proposed sparse tensor decomposition method for the example of two dimensional harmonic retrieval.},\n  keywords = {approximation theory;convergence of numerical methods;convex programming;mathematics computing;matrix algebra;parallel processing;tensors;sequential convex approximation approach;modern parallel hardware architectures;nonconvex optimization;nondifferentable optimization problem;lifting procedure;optimization variables;sparse tensor decomposition method;parallel sparse regularization method;low-rank tensor decomposition problem;group sparsity;nuclear norm regularization;Harmonic analysis;Tensors;Optimization;Signal processing algorithms;Approximation algorithms;Europe},\n  doi = {10.23919/EUSIPCO.2019.8902569},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533835.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we consider the structured multilinear low-rank tensor decomposition problem in which group sparsity is enforced using nuclear norm regularization. We adopt the recently proposed sequential convex approximation approach to develop an optimization algorithm suitable for implementation on modern parallel hardware architectures. An existing optimization algorithm for this non-convex and non-differentiable optimization problem relies on a lifting approach. For large problem dimensions, however, the lifting procedure is inefficient, as it drastically increases the number of optimization variables. Our proposed algorithm does not require lifting and operates directly on the original parameter space. We demonstrate the performance gains in terms of convergence speed of the proposed sparse tensor decomposition method for the example of two-dimensional harmonic retrieval.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sensor Fusion for Learning-based Tracking of Controller Movement in Virtual Reality.\n \n \n \n \n\n\n \n Song, C.; and Zarar, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SensorPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902570,\n  author = {C. Song and S. Zarar},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Sensor Fusion for Learning-based Tracking of Controller Movement in Virtual Reality},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Inside-out pose tracking of hand-held controllers is an important problem in virtual reality devices. Current state-of-the-art combines a constellation of light-emitting diodes on controllers with a stereo pair of cameras on the head-mounted display (HMD) to track pose. These vision-based systems are unable to track controllers when they move out of the camera's field-of-view (out-of-FOV). To overcome this limitation, we employ sensor fusion and a learning-based model. Specifically, we employ ultrasound sensors on the HMD and controllers to obtain ranging information. We combine this information with predictions from an auto-regressive forecasting model that is built with a recurrent neural network. The combination is achieved via a Kalman filter across different positional states (including out-of-FOV). 
With the proposed approach, we demonstrate near-isotropic accuracy levels (~1.23 cm error) in estimating controller position, which was not possible to achieve before with camera-alone tracking.},\n  keywords = {cameras;computer vision;helmet mounted displays;image fusion;Kalman filters;learning (artificial intelligence);recurrent neural nets;stereo image processing;virtual reality;head-mounted display;HMD;vision-based systems;field-of-view;out-of-FOV;sensor fusion;learning-based model;ranging information;recurrent neural network;controller position;learning-based tracking;controller movement;pose tracking;hand-held controllers;virtual reality devices;light-emitting diodes;positional states;autoregressive forecasting model;Ultrasonic imaging;Tracking;Resists;Distance measurement;Predictive models;Synchronization;Transmitters;Ultrasound Ranging;Virtual Reality;Pose Estimation;Autoregression;Recurrent Neural Networks},\n  doi = {10.23919/EUSIPCO.2019.8902570},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529472.pdf},\n}\n\n
\n
\n\n\n
\n Inside-out pose tracking of hand-held controllers is an important problem in virtual reality devices. The current state of the art combines a constellation of light-emitting diodes on the controllers with a stereo pair of cameras on the head-mounted display (HMD) to track pose. These vision-based systems are unable to track controllers when they move out of the camera's field-of-view (out-of-FOV). To overcome this limitation, we employ sensor fusion and a learning-based model. Specifically, we employ ultrasound sensors on the HMD and controllers to obtain ranging information. We combine this information with predictions from an auto-regressive forecasting model built with a recurrent neural network. The combination is achieved via a Kalman filter across different positional states (including out-of-FOV). With the proposed approach, we demonstrate near-isotropic accuracy levels (~1.23 cm error) in estimating controller position, which was not possible to achieve before with camera-alone tracking.\n
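The fusion step — blending a forecast from the learned model with an ultrasound range measurement through a Kalman filter — reduces, per coordinate, to the textbook scalar update (a minimal sketch under strong simplifying assumptions, not the paper's filter):

```python
def kalman_update(x_pred, p_pred, z, r):
    """Fuse a model forecast (mean x_pred, variance p_pred) with a
    measurement z of variance r via the scalar Kalman gain."""
    k = p_pred / (p_pred + r)       # gain: trust the measurement more when r is small
    x = x_pred + k * (z - x_pred)   # corrected estimate
    p = (1.0 - k) * p_pred          # reduced uncertainty
    return x, p
```

With equal variances, e.g. `kalman_update(0.0, 1.0, 2.0, 1.0)`, forecast and measurement are weighted equally; when the controller is out-of-FOV, the forecast variance grows and the ranging measurement dominates.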
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Lip-Reading with Limited-Data Network.\n \n \n \n \n\n\n \n Fernandez-Lopez, A.; and Sukno, F. M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Lip-ReadingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902572,\n  author = {A. Fernandez-Lopez and F. M. Sukno},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Lip-Reading with Limited-Data Network},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The development of Automatic Lip-Reading (ALR) systems is currently dominated by Deep Learning (DL) approaches. However, DL systems generally face two main issues related to the amount of data and the complexity of the model. To find a balance between the amount of available training data and the number of parameters of the model, in this work we introduce an end-to-end ALR system that combines CNNs and LSTMs and can be trained without large-scale databases. To this end, we propose to split the training by modules, by automatically generating weak labels per frames, termed visual units. These weak visual units are representative enough to guide the CNN to extract meaningful features that when combined with the context provided by the temporal module, are sufficiently informative to train an ALR system in a very short time and with no need for manual labeling. The system is evaluated in the well-known OuluVS2 database to perform sentence-level classification. 
We obtain an accuracy of 91.38% which is comparable to state-of the-art results but, differently from most previous approaches, we do not require the use of external training data.},\n  keywords = {feature extraction;image classification;image motion analysis;learning (artificial intelligence);DL systems;end-to-end ALR system;large-scale databases;OuluVS2 database;limited-data network;automatic lip-reading systems;deep learning;sentence-level classification;Visualization;Databases;Training;Feature extraction;Training data;Data models;Labeling;Lip-reading;Visual Speech;Deep Learning},\n  doi = {10.23919/EUSIPCO.2019.8902572},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532622.pdf},\n}\n\n
\n
\n\n\n
\n The development of Automatic Lip-Reading (ALR) systems is currently dominated by Deep Learning (DL) approaches. However, DL systems generally face two main issues related to the amount of data and the complexity of the model. To find a balance between the amount of available training data and the number of parameters of the model, in this work we introduce an end-to-end ALR system that combines CNNs and LSTMs and can be trained without large-scale databases. To this end, we propose to split the training into modules by automatically generating weak labels per frame, termed visual units. These weak visual units are representative enough to guide the CNN to extract meaningful features that, when combined with the context provided by the temporal module, are sufficiently informative to train an ALR system in a very short time and with no need for manual labeling. The system is evaluated on the well-known OuluVS2 database for sentence-level classification. We obtain an accuracy of 91.38%, which is comparable to state-of-the-art results but, unlike most previous approaches, does not require the use of external training data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Virtual Adversarial Training for Semi-supervised Verification Tasks.\n \n \n \n \n\n\n \n Noroozi, V.; Bahaadini, S.; Zheng, L.; Xie, S.; and Yu, P. S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"VirtualPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902573,\n  author = {V. Noroozi and S. Bahaadini and L. Zheng and S. Xie and P. S. Yu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Virtual Adversarial Training for Semi-supervised Verification Tasks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The goal in verification tasks is to determine the similarity of two samples or verifies if they belong to the same category or not. In this paper, we propose a semi-supervised embedding technique for verification tasks using deep neural networks. The proposed model exploits the unlabeled data by making the model robust to the perturbation of the input with virtual adversarial training. It increases the generalization of the embedding function and prevents overfitting which are crucial in verification tasks. The proposed algorithm, named VerVAT, is evaluated on several verification tasks and compared with state-of-the-art algorithms. Experiments show the effectiveness of VerVAT especially in cases where limited labeled data is available.},\n  keywords = {learning (artificial intelligence);neural nets;program verification;virtual adversarial training;semisupervised verification tasks;semisupervised embedding technique;deep neural networks;unlabeled data;VerVAT;embedding function generalization;Task analysis;Training;Neural networks;Face recognition;Computer science;Data models;Perturbation methods;Verification Task;Semi-supervised Learning;Deep Representation Learning;Virtual Adversarial Training},\n  doi = {10.23919/EUSIPCO.2019.8902573},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533753.pdf},\n}\n\n
\n
\n\n\n
\n The goal in verification tasks is to determine the similarity of two samples, or to verify whether they belong to the same category. In this paper, we propose a semi-supervised embedding technique for verification tasks using deep neural networks. The proposed model exploits unlabeled data by making the model robust to perturbations of the input through virtual adversarial training. This increases the generalization of the embedding function and prevents overfitting, both of which are crucial in verification tasks. The proposed algorithm, named VerVAT, is evaluated on several verification tasks and compared with state-of-the-art algorithms. Experiments show the effectiveness of VerVAT, especially in cases where only limited labeled data is available.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Measurement-based Wideband Radio Channel Characterization in an Underground Parking Lot.\n \n \n \n \n\n\n \n Miao, Y.; Wang, W.; Rodríguez-Pinñeiro, J.; Domínguez-Bolaño, T.; and Gong, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Measurement-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902576,\n  author = {Y. Miao and W. Wang and J. Rodríguez-Pinñeiro and T. Domínguez-Bolaño and Y. Gong},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Measurement-based Wideband Radio Channel Characterization in an Underground Parking Lot},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper reports the wideband radio channel measurement and characterization in an underground parking lot at frequency band from 4.9 GHz to 5.9 GHz. Single-input single-output and virtual single-input multiple-output channels are measured point-to-point by the Vector Network Analyzer based system. The measurements include both obstructed line-of-sight and non-line-of-sight scenarios, include influences of not only vehicles but also pillars which are commonly seen in underground parking. The radio channel modeling parameters, including path loss, delay spread, coherence bandwidth, power delay profile, reverberation time and power angular delay profile are summarized and analyzed. 
It is interesting to observe that due to the existence of objects like vehicles, pillars, and ceiling, the radio channels measured in underground parking presents rich spatial diversity even within a small area less than 50 m2.},\n  keywords = {indoor radio;microwave measurement;microwave propagation;MIMO communication;network analysers;ultra wideband communication;underground communication;wireless channels;measurement-based wideband radio channel characterization;underground parking lot;wideband radio channel measurement;frequency band;single-input single-output;virtual single-input multiple-output channels;Vector Network Analyzer based system;line-of-sight;nonline-of-sight scenarios;radio channel modeling parameters;power delay profile;power angular delay profile;radio channels;underground parking presents rich spatial diversity;frequency 4.9 GHz to 5.9 GHz;Delays;Automobiles;Wideband;Reverberation;Market research;Indexes;Presence network agents;Radio propagation channel;wideband;underground parking},\n  doi = {10.23919/EUSIPCO.2019.8902576},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533054.pdf},\n}\n\n
\n
\n\n\n
\n This paper reports a wideband radio channel measurement and characterization campaign in an underground parking lot in the frequency band from 4.9 GHz to 5.9 GHz. Single-input single-output and virtual single-input multiple-output channels are measured point-to-point by a Vector Network Analyzer based system. The measurements cover both obstructed line-of-sight and non-line-of-sight scenarios, and include the influence not only of vehicles but also of pillars, which are commonly found in underground parking lots. The radio channel modeling parameters, including path loss, delay spread, coherence bandwidth, power delay profile, reverberation time and power angular delay profile, are summarized and analyzed. It is interesting to observe that, due to the presence of objects such as vehicles, pillars and the ceiling, the radio channels measured in the underground parking lot present rich spatial diversity even within a small area of less than 50 m².\n
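Of the listed parameters, the RMS delay spread is simply the power-weighted standard deviation of the path delays in the power delay profile; a direct transcription of that definition (illustrative, not the authors' processing chain):

```python
def rms_delay_spread(delays, powers):
    """RMS delay spread: power-weighted standard deviation of path delays
    taken from a sampled (delay, power) power delay profile."""
    total = sum(powers)
    mean = sum(d * p for d, p in zip(delays, powers)) / total
    var = sum(p * (d - mean) ** 2 for d, p in zip(delays, powers)) / total
    return var ** 0.5
```

For example, two equal-power paths 1 µs apart give a 0.5 µs spread; the coherence bandwidth is then inversely proportional to this value.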
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Privacy-Preserving Distributed Average Consensus based on Additive Secret Sharing.\n \n \n \n \n\n\n \n Li, Q.; Cascudo, I.; and Christensen, M. G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Privacy-PreservingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902577,\n  author = {Q. Li and I. Cascudo and M. G. Christensen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Privacy-Preserving Distributed Average Consensus based on Additive Secret Sharing},\n  year = {2019},\n  pages = {1-5},\n  abstract = {One major concern of distributed computation in networks is the privacy of the individual nodes. To address this privacy issue in the context of the distributed average consensus problem, we propose a general, yet simple solution that achieves privacy using additive secret sharing, a tool from secure multiparty computation. This method enables each node to reach the consensus accurately and obtains perfect security at the same time. Unlike differential privacy based approaches, there is no trade-off between privacy and accuracy. Moreover, the proposed method is computationally simple compared to other techniques in secure multiparty computation, and it is able to achieve perfect security of any honest node as long as it has one honest neighbour under the honest-but-curious model, without any trusted third party.},\n  keywords = {data privacy;additive secret sharing;distributed computation;secure multiparty computation;privacy-preserving distributed average consensus problem;differential privacy based approaches;honest-but-curious model;trusted third party;Cryptography;Additives;Privacy;Signal processing algorithms;Differential privacy;Europe;Signal processing;Distributed average consensus;additive secret sharing;privacy preserving;secure multiparty computation},\n  doi = {10.23919/EUSIPCO.2019.8902577},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529404.pdf},\n}\n\n
\n
\n\n\n
\n One major concern in distributed computation over networks is the privacy of the individual nodes. To address this privacy issue in the context of the distributed average consensus problem, we propose a general yet simple solution that achieves privacy using additive secret sharing, a tool from secure multiparty computation. This method enables each node to reach consensus accurately while achieving perfect security. Unlike differential-privacy-based approaches, there is no trade-off between privacy and accuracy. Moreover, the proposed method is computationally simple compared to other techniques in secure multiparty computation, and it achieves perfect security for any honest node that has at least one honest neighbour under the honest-but-curious model, without any trusted third party.\n
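The additive-secret-sharing idea can be demonstrated in a few lines: each node splits its private value into random shares that sum to it, so the network total — and hence the average — survives while no single share reveals anything. A toy sketch over integers (illustrative only, not the authors' protocol):

```python
import random

MOD = 2**31 - 1  # all arithmetic is modulo a public constant

def share(value, n):
    """Split value into n additive shares summing to value (mod MOD)."""
    parts = [random.randrange(MOD) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % MOD)
    return parts

values = [10, 20, 33]                 # private inputs, one per node
n = len(values)
shares = [share(v, n) for v in values]

# Node j sums the shares it received; individual inputs stay hidden.
masked = [sum(shares[i][j] for i in range(n)) % MOD for j in range(n)]

# Consensus on the masked values recovers the exact total, hence the average.
total = sum(masked) % MOD
print(total / n)  # 21.0, the exact average — no privacy/accuracy trade-off
```

Because the shares of each input are uniformly random subject only to their sum, an honest-but-curious observer of any proper subset of shares learns nothing about the underlying value.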
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Temporal convolutional networks for musical audio beat tracking.\n \n \n \n \n\n\n \n MatthewDavies, E. P.; and Böck, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"TemporalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902578,\n  author = {E. P. MatthewDavies and S. Böck},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Temporal convolutional networks for musical audio beat tracking},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We propose the use of Temporal Convolutional Networks for audio-based beat tracking. By contrasting our convolutional approach with the current state-of-the-art recurrent approach using Bidirectional Long Short-Term Memory, we demonstrate three highly promising attributes of TCNs for music analysis, namely: i) they achieve state-of-the-art performance on a wide range of existing beat tracking datasets, ii) they are well suited to parallelisation and thus can be trained efficiently even on very large training data; and iii) they require a small number of weights.},\n  keywords = {audio signal processing;convolutional neural nets;music;recurrent neural nets;musical audio beat tracking;temporal convolutional networks;audio-based beat tracking;convolutional approach;bidirectional long short-term memory;music analysis;beat tracking datasets;Convolution;Spectrogram;Music;Task analysis;Training;Training data;Beat Tracking;Music Signal Processing;Convolutional Neural Networks},\n  doi = {10.23919/EUSIPCO.2019.8902578},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533824.pdf},\n}\n\n
\n
\n\n\n
\n We propose the use of Temporal Convolutional Networks (TCNs) for audio-based beat tracking. By contrasting our convolutional approach with the current state-of-the-art recurrent approach using Bidirectional Long Short-Term Memory, we demonstrate three highly promising attributes of TCNs for music analysis, namely: i) they achieve state-of-the-art performance on a wide range of existing beat tracking datasets; ii) they are well suited to parallelisation and can thus be trained efficiently even on very large training data; and iii) they require a small number of weights.\n
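The building block of a TCN is the dilated causal convolution: the output at time t sees only inputs at t, t−d, t−2d, …, so stacking layers with growing dilation gives a large receptive field at low cost. A minimal one-layer sketch (illustrative, not the authors' architecture):

```python
import numpy as np

def dilated_causal_conv(x, kernel, dilation):
    """Apply one dilated causal convolution to a 1-D signal: y[t] depends
    only on x[t], x[t-d], ..., x[t-(k-1)d] (zero-padded at the start)."""
    k = len(kernel)
    pad = np.concatenate([np.zeros((k - 1) * dilation), x])
    return np.array([sum(kernel[j] * pad[t + j * dilation] for j in range(k))
                     for t in range(len(x))])
```

As a sanity check, the kernel [0, 0, 1] reproduces the input, while [1, 0, 0] delays it by 2·dilation samples; unlike a recurrent layer, every output sample can be computed in parallel.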
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Coverage Analysis of Relay Assisted V2I Communication in Microcellular Urban Networks.\n \n \n \n \n\n\n \n Elbal, B. R.; Müller, M. K.; Schwarz, S.; and Rupp, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CoveragePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902579,\n  author = {B. R. Elbal and M. K. Müller and S. Schwarz and M. Rupp},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Coverage Analysis of Relay Assisted V2I Communication in Microcellular Urban Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Vehicular communications have a wide range of applications and are quickly growing in last years. In our work, we investigate vehicular communications to improve the Vehicle-to-infrastructure (V2I) link and enhance the performance of the entire network. We consider a microcellular urban network and compare the coverage probability of the V2I link to the relay-assisted link. We deploy a Manhattan grid where the streets are deployed according to a Poisson line process (PLP), and microcell base stations (BSs) and vehicular users are placed according to a Poisson point process (PPP). We analyze the relay position that maximizes the coverage improvement depending on the density of streets, BSs and vehicular users. 
We derive analytical expressions for the coverage probability of both direct and relay-assisted links by exploiting tools from stochastic geometry and perform Monte Carlo system level simulations of our model.},\n  keywords = {microcellular radio;Monte Carlo methods;probability;radio links;relay networks (telecommunication);stochastic processes;vehicular ad hoc networks;vehicular communications;vehicle-to-infrastructure;microcellular urban network;coverage probability;relay-assisted link;Poisson line process;microcell base stations;BSs;vehicular users;Poisson point process;relay position;coverage improvement;relay assisted V2I communication;Interference;Relays;Transmitters;Signal to noise ratio;Receivers;Europe;vehicle-to-vehicle communications;stochastic geometry;system level simulation;coverage improvement},\n  doi = {10.23919/EUSIPCO.2019.8902579},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533680.pdf},\n}\n\n
\n
\n\n\n
\n Vehicular communications have a wide range of applications and have grown rapidly in recent years. In our work, we investigate vehicular communications to improve the vehicle-to-infrastructure (V2I) link and enhance the performance of the entire network. We consider a microcellular urban network and compare the coverage probability of the direct V2I link to that of a relay-assisted link. We deploy a Manhattan grid whose streets are generated according to a Poisson line process (PLP), with microcell base stations (BSs) and vehicular users placed according to a Poisson point process (PPP). We analyze the relay position that maximizes the coverage improvement depending on the density of streets, BSs and vehicular users. We derive analytical expressions for the coverage probability of both direct and relay-assisted links by exploiting tools from stochastic geometry, and perform Monte Carlo system-level simulations of our model.\n
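The Monte Carlo side of such an analysis can be sketched compactly: draw base stations from a homogeneous PPP, find the nearest one to a probe user, and count how often its SNR clears the threshold (an illustrative noise-limited sketch with made-up parameter values — no relays, streets, or interference — not the paper's system-level simulator):

```python
import math
import random

def sample_poisson(lam):
    """Sample a Poisson random variable (Knuth's multiplicative method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def coverage_probability(bs_density, thr_db, noise=1e-9, side=500.0,
                         alpha=3.5, trials=500):
    """Monte Carlo estimate of the probability that a user at the centre of
    a square is covered by its nearest PPP-distributed base station."""
    thr = 10 ** (thr_db / 10)
    covered = 0
    for _ in range(trials):
        n = sample_poisson(bs_density * side * side)  # BS count in the square
        if n == 0:
            continue  # no base station at all: not covered
        # nearest-BS distance; unit transmit power, path-loss exponent alpha
        d = min(math.hypot(random.uniform(-side / 2, side / 2),
                           random.uniform(-side / 2, side / 2))
                for _ in range(n))
        if d ** (-alpha) / noise >= thr:
            covered += 1
    return covered / trials
```

Sweeping `bs_density` or `thr_db` then traces the kind of coverage curves that the analytical stochastic-geometry expressions predict in closed form.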
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improved Lossless Image Compression Using Adaptive Image Rotation.\n \n \n \n \n\n\n \n Möller, P.; and Strutz, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-4, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902580,\n  author = {P. Möller and T. Strutz},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Improved Lossless Image Compression Using Adaptive Image Rotation},\n  year = {2019},\n  pages = {1-4},\n  abstract = {State-of-the-art compression schemes generally have the ability to adapt themselves to the properties of the data to be compressed. This is a kind of learning process and requires the modification of internal variables influencing the treatment of subsequent data segments. In other words: the order of processing has an impact on the compression performance. In image compression, the order can be changed, for example, by rotating the input image by 90°, 180°, or 270°. In application to lossless screen content compression, investigations with different compression schemes (LOCO-I, HEVC, FP8v3, and SCF) have shown that the rotation has a considerable impact on the compression performance. The difficulty, however, is to predict the best rotation. For the SCF (soft context formation) compression scheme, we have developed a method based on a tiny neural network that suggests a suitable rotation by evaluating basic colour properties of the image to be compressed. The compression can be improved by 0.7098% to 1.3817% depending on the image set tested.},\n  keywords = {data compression;image coding;image colour analysis;image segmentation;neural nets;lossless image compression;adaptive image rotation;learning process;SCF compression scheme;soft context formation;neural network;colour properties;Image coding;Training;Image color analysis;Neurons;Histograms;Biological neural networks;Signal processing;lossless image compression;predictive modelling;SCF;processing order;predictive analytics},\n  doi = {10.23919/EUSIPCO.2019.8902580},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528081.pdf},\n}\n\n
\n
\n\n\n
\n State-of-the-art compression schemes generally have the ability to adapt themselves to the properties of the data to be compressed. This is a kind of learning process and requires the modification of internal variables influencing the treatment of subsequent data segments. In other words, the order of processing has an impact on the compression performance. In image compression, the order can be changed, for example, by rotating the input image by 90°, 180°, or 270°. Applied to lossless screen content compression, investigations with different compression schemes (LOCO-I, HEVC, FP8v3, and SCF) have shown that rotation has a considerable impact on the compression performance. The difficulty, however, lies in predicting the best rotation. For the SCF (soft context formation) compression scheme, we have developed a method based on a tiny neural network that suggests a suitable rotation by evaluating basic colour properties of the image to be compressed. The compression can be improved by 0.7098% to 1.3817% depending on the image set tested.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectrum Insensitive Sparse Recovery with Iterative Affine Projections.\n \n \n \n \n\n\n \n Cleju, N.; and Ciocoiu, I. B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SpectrumPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902581,\n  author = {N. Cleju and I. B. Ciocoiu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Spectrum Insensitive Sparse Recovery with Iterative Affine Projections},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We propose a novel greedy algorithm for ℓ0-based sparse signal recovery, inspired by Iterative Hard Thresholding, which alternates a gradient descent step towards minimizing the sparsity error with a projection step on the affine solution space y = Ax. We provide a theoretical guarantee based on Restricted Isometry Property for successful recovery of exact sparse signals, in the noiseless case, which does not depend on the singular values spectrum of the dictionary. This improves signal recovery by providing robustness in case of ill-conditioned dictionaries, as learned and coherent dictionaries tend to be. Simulation results on noiseless exact-sparse recovery indicate improvements compared to similar algorithms, especially in the case of ill-conditioned dictionaries.},\n  keywords = {gradient methods;greedy algorithms;iterative methods;signal processing;greedy algorithm;sparse signal recovery;iterative hard thresholding;gradient descent step;restricted isometry property;singular values spectrum;ill-conditioned dictionaries;learned dictionaries;spectrum insensitive sparse recovery;iterative affine projections;Dictionaries;Signal processing algorithms;Null space;Iterative algorithms;Sparse matrices;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2019.8902581},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533108.pdf},\n}\n\n
\n
\n\n\n
\n We propose a novel greedy algorithm for ℓ0-based sparse signal recovery, inspired by Iterative Hard Thresholding, which alternates a gradient descent step towards minimizing the sparsity error with a projection step onto the affine solution space y = Ax. We provide a theoretical guarantee based on the Restricted Isometry Property for successful recovery of exactly sparse signals, in the noiseless case, which does not depend on the singular value spectrum of the dictionary. This improves signal recovery by providing robustness in the case of ill-conditioned dictionaries, as learned and coherent dictionaries tend to be. Simulation results on noiseless exact-sparse recovery indicate improvements compared to similar algorithms, especially in the case of ill-conditioned dictionaries.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Kernel Based Online Change Point Detection.\n \n \n \n \n\n\n \n Bouchikhi, I.; Ferrari, A.; Richard, C.; Bourrier, A.; and Bernot, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"KernelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 5 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902582,\n  author = {I. Bouchikhi and A. Ferrari and C. Richard and A. Bourrier and M. Bernot},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Kernel Based Online Change Point Detection},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Detecting change points in time series data is a challenging problem, in particular when no prior information on the data distribution and the nature of the change is available. In a former work, we introduced an online non-parametric change-point detection framework built upon direct density ratio estimation over two consecutive time segments, rather than modeling densities separately. This algorithm based on the theory of reproducing kernels showed positive and reliable detection results for a variety of problems. To further improve the detection performance of this approach, we propose in this paper to modify the original cost function in order to achieve unbiasedness of the density ratio estimation under the null hypothesis. Theoretical analysis and numerical simulations confirm the improved behavior of this method, as well as its efficiency compared to a state of the art one. 
Application to sentiment change detection in Twitter data streams is also presented.},\n  keywords = {estimation theory;sentiment analysis;social networking (online);statistical analysis;time series;detection performance;sentiment change detection;Twitter data streams;time series data;data distribution;nonparametric change-point detection framework;direct density ratio estimation;consecutive time segments;kernel based online change point detection;null hypothesis;Kernel;Signal processing algorithms;Dictionaries;Estimation;Change detection algorithms;Europe;Signal processing;Non-parametric change-point detection;reproducing kernel Hilbert space;kernel least-mean-square algorithm;online learning;convergence analysis},\n  doi = {10.23919/EUSIPCO.2019.8902582},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531342.pdf},\n}\n\n
\n
\n\n\n
\n Detecting change points in time series data is a challenging problem, in particular when no prior information on the data distribution and the nature of the change is available. In earlier work, we introduced an online non-parametric change-point detection framework built upon direct density ratio estimation over two consecutive time segments, rather than modeling the densities separately. This algorithm, based on the theory of reproducing kernels, showed positive and reliable detection results for a variety of problems. To further improve the detection performance of this approach, we propose in this paper to modify the original cost function in order to achieve unbiasedness of the density ratio estimation under the null hypothesis. Theoretical analysis and numerical simulations confirm the improved behavior of this method, as well as its efficiency compared to a state-of-the-art one. An application to sentiment change detection in Twitter data streams is also presented.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A biologically constrained encoding solution for long-term storage of images onto synthetic DNA.\n \n \n \n \n\n\n \n Dimopoulou, M.; Antonini, M.; Barbry, P.; and Appuswamy, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902583,\n  author = {M. Dimopoulou and M. Antonini and P. Barbry and R. Appuswamy},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A biologically constrained encoding solution for long-term storage of images onto synthetic DNA},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Living in the age of the digital media explosion, the amount of data that is being stored increases dramatically. However, even if existing storage systems suggest efficiency in capacity, they are lacking in durability. Hard disks, flash, tape or even optical storage have limited lifespan in the range of 5 to 20 years. Interestingly, recent studies have proven that it was possible to use synthetic DNA for the storage of digital data, introducing a strong candidate to achieve data longevity. The DNA's biological properties allows the storage of a great amount of information into an extraordinary small volume while also promising efficient storage for centuries or even longer with no loss of information. However, encoding digital data onto DNA is not obvious, because when decoding, we have to face the problem of sequencing noise robustness. Furthermore, synthesizing DNA is an expensive process and thus, controlling the compression ratio by optimizing the rate-distortion trade-off is an important challenge we have to deal with. This work proposes a coding solution for the storage of digital images onto synthetic DNA. We developed a new encoding algorithm which generates a DNA code robust to biological errors coming from the synthesis and the sequencing processes. Furthermore, thanks to an optimized allocation process the solution is able to control the compression ratio and thus the length of the synthesized DNA strand. 
Results show an improvement in terms of coding potential compared to previous state-of-the-art works.},\n  keywords = {biocomputing;data compression;digital storage;DNA;image coding;image sequences;optimisation;rate distortion theory;biologically constrained encoding solution;long-term storage;synthetic DNA;digital media explosion;optical storage;data longevity;DNA's biological properties;compression ratio;coding solution;digital images;DNA code;DNA;Image coding;Biological information theory;Sequential analysis;Encoding;Memory;Quantization (signal);DNA data storage;image compression;robust encoding},\n  doi = {10.23919/EUSIPCO.2019.8902583},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533782.pdf},\n}\n\n
\n
\n\n\n
\n Living in the age of the digital media explosion, the amount of data being stored is increasing dramatically. However, although existing storage systems offer efficiency in capacity, they lack durability. Hard disks, flash, tape, and even optical storage have limited lifespans, in the range of 5 to 20 years. Interestingly, recent studies have shown that it is possible to use synthetic DNA for the storage of digital data, introducing a strong candidate for achieving data longevity. DNA's biological properties allow the storage of a great amount of information in an extraordinarily small volume while also promising reliable storage for centuries or even longer, with no loss of information. However, encoding digital data onto DNA is not straightforward: when decoding, we have to face the problem of robustness to sequencing noise. Furthermore, synthesizing DNA is an expensive process, and thus controlling the compression ratio by optimizing the rate-distortion trade-off is an important challenge to address. This work proposes a coding solution for the storage of digital images onto synthetic DNA. We developed a new encoding algorithm which generates a DNA code robust to biological errors arising from the synthesis and sequencing processes. Furthermore, thanks to an optimized allocation process, the solution is able to control the compression ratio and thus the length of the synthesized DNA strand. Results show an improvement in terms of coding potential compared to previous state-of-the-art works.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rank estimation and tensor decomposition using physics-driven constraints for brain source localization.\n \n \n \n \n\n\n \n Taheri, N.; Kachenoura, A.; Karfoul, A.; Han, X.; Ansari-Asl, K.; Senhadji, I. M. L.; Senhadji, L.; and Albera, L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-4, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RankPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902585,\n  author = {N. Taheri and A. Kachenoura and A. Karfoul and X. Han and K. Ansari-Asl and I. M. L. Senhadji and L. Senhadji and L. Albera},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Rank estimation and tensor decomposition using physics-driven constraints for brain source localization},\n  year = {2019},\n  pages = {1-4},\n  abstract = {This paper deals with the tensor-based Brain Source Imaging (BSI) problem, say finding the precise location of distributed sources of interest by means of tensor decomposition. This requires to estimate accurately the rank of the considered tensor to be decomposed. Therefore, a two-step approach, named R-CPD-SISSY, is proposed including a rank estimation process and a source localization procedure. The first step consists in using a modified version of a recent method, which estimates both the rank and the loading matrices of a tensor following the canonical polyadic decomposition model. The second step uses a recent physics-driven tensor-based BSI method, named STS-SISSY, in order to localize the brain regions of interest. This second step uses the estimated rank during the first step. 
The performance of the R-CPD-SISSY algorithm is studied using realistic synthetic interictal epileptic recordings.},\n  keywords = {brain;electroencephalography;matrix algebra;medical signal processing;neurophysiology;tensors;tensor decomposition;physics-driven constraints;brain source localization;tensor-based Brain Source Imaging problem;precise location;considered tensor;two-step approach;named R-CPD-SISSY;rank estimation process;source localization procedure;canonical polyadic decomposition model;recent physics-driven tensor-based BSI method;named STS-SISSY;estimated rank;R-CPD-SISSY algorithm;Tensors;Signal to noise ratio;Electroencephalography;Estimation;Loading;Brain modeling;Europe},\n  doi = {10.23919/EUSIPCO.2019.8902585},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533920.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the tensor-based Brain Source Imaging (BSI) problem, i.e., finding the precise location of distributed sources of interest by means of tensor decomposition. This requires accurately estimating the rank of the tensor to be decomposed. Therefore, a two-step approach, named R-CPD-SISSY, is proposed, comprising a rank estimation process and a source localization procedure. The first step consists of using a modified version of a recent method, which estimates both the rank and the loading matrices of a tensor following the canonical polyadic decomposition model. The second step uses a recent physics-driven tensor-based BSI method, named STS-SISSY, in order to localize the brain regions of interest; it relies on the rank estimated in the first step. The performance of the R-CPD-SISSY algorithm is studied using realistic synthetic interictal epileptic recordings.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Validation framework for building a spectrum sharing testbed for integrated satellite-terrestrial systems : Invited Paper.\n \n \n \n\n\n \n Höyhtyä, M.; Hoppari, M.; and Majanen, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902587,\n  author = {M. Höyhtyä and M. Hoppari and M. Majanen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Validation framework for building a spectrum sharing testbed for integrated satellite-terrestrial systems : Invited Paper},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Development of a testbed for spectrum sharing is a multi-step process requiring systems engineering understanding. The ASCENT project is building a testbed to study licensed spectrum sharing between satellite systems and between satellite and terrestrial systems. The work has been carried out as part of the European Space Agency's (ESA) Advanced Research in Telecommunications Systems (ARTES) programme. The 5G frequency bands currently under study are the 3.6 GHz band and the 26 GHz band. Validation is needed to check that all the requirements are fulfilled and consequently to understand in detail how the spectrum could be shared in the studied bands and what the benefits of different techniques such as licensed shared access (LSA) and power control could be. This paper describes the validation framework that provides guidance for the actual validation and enables creation of an efficient validation plan for different use cases and frequency bands. 
We also define the architecture for the LSA testbed.},\n  keywords = {5G mobile communication;next generation networks;radio spectrum management;satellite communication;systems engineering;validation framework;integrated satellite-terrestrial systems;multistep process;systems engineering understanding;ASCENT project;licensed spectrum sharing;satellite systems;European Space Agency's Advanced Research;Telecommunications Systems programme;frequency bands;licensed shared access;LSA testbed;frequency 3.6 GHz;frequency 26.0 GHz;Satellite broadcasting;Base stations;Interference;5G mobile communication;Computer architecture;Measurement;Satellites;Spectrum databases;5G;cognitive networks},\n  doi = {10.23919/EUSIPCO.2019.8902587},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Development of a testbed for spectrum sharing is a multi-step process requiring systems engineering understanding. The ASCENT project is building a testbed to study licensed spectrum sharing between satellite systems and between satellite and terrestrial systems. The work has been carried out as part of the European Space Agency's (ESA) Advanced Research in Telecommunications Systems (ARTES) programme. The 5G frequency bands currently under study are the 3.6 GHz band and the 26 GHz band. Validation is needed to check that all the requirements are fulfilled and consequently to understand in detail how the spectrum could be shared in the studied bands and what the benefits of different techniques such as licensed shared access (LSA) and power control could be. This paper describes the validation framework that provides guidance for the actual validation and enables creation of an efficient validation plan for different use cases and frequency bands. We also define the architecture for the LSA testbed.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n FPGA Implementation of a TVWS Up- and Downconverter Using Non-Power-of-Two FFT Modulated Filter Banks.\n \n \n \n \n\n\n \n Anis, V.; Guo, J.; Weiss, S.; and Crockett, L. H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FPGAPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902588,\n  author = {V. Anis and J. Guo and S. Weiss and L. H. Crockett},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {FPGA Implementation of a TVWS Up- and Downconverter Using Non-Power-of-Two FFT Modulated Filter Banks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper addresses an oversampled filter bank (OSFB) approach to up- and down-convert any or all of the 40 channels in the United Kingdom's TV white space (TVWS). We particularly consider the use of non-power-of-two fast Fourier transforms (FFTs), which provides a greater choice of design parameters over existing OSFB implementations. Using a field-programmable gate array (FPGA) software defined radio (SDR) platform, we compare two different 40-point FFT-based implementations of the system - one fully parallelised, one serialised - with an existing design using a radix-two 64-point FFT in terms of implementation cost and power consumption.},\n  keywords = {channel bank filters;fast Fourier transforms;field programmable gate arrays;software radio;FPGA implementation;TVWS;downconverter;nonpower-of-two FFT modulated filter banks;oversampled filter bank approach;United Kingdom;nonpower-of-two fast Fourier transforms;field-programmable gate array software defined radio platform;40-point FFT-based implementations;implementation cost;power consumption;TV white space;OSFB implementations;FPGA software defined radio platform;SDR;Discrete Fourier transforms;Field programmable gate arrays;Transceivers;Radio frequency;Baseband;Prototypes;Europe},\n  doi = {10.23919/EUSIPCO.2019.8902588},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534138.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses an oversampled filter bank (OSFB) approach to up- and down-convert any or all of the 40 channels in the United Kingdom's TV white space (TVWS). We particularly consider the use of non-power-of-two fast Fourier transforms (FFTs), which provides a greater choice of design parameters over existing OSFB implementations. Using a field-programmable gate array (FPGA) software defined radio (SDR) platform, we compare two different 40-point FFT-based implementations of the system - one fully parallelised, one serialised - with an existing design using a radix-two 64-point FFT in terms of implementation cost and power consumption.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Variational Bayes Color Deconvolution with a Total Variation Prior.\n \n \n \n \n\n\n \n Vega, M.; Mateos, J.; Molina, R.; and Katsaggelos, A. K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"VariationalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902589,\n  author = {M. Vega and J. Mateos and R. Molina and A. K. Katsaggelos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Variational Bayes Color Deconvolution with a Total Variation Prior},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In digital brightfield microscopy, tissues are usually stained with two or more dyes. Color deconvolution aims at separating multi-stained images into single stained images. We formulate the blind color deconvolution problem within the Bayesian framework. Our model takes into account the similarity to a given reference color-vector matrix and spatial relations among the concentration pixels by a total variation prior. It utilizes variational inference and an evidence lower bound to estimate all the latent variables. The proposed algorithm is tested on real images and compared with classical and state-of-the-art color deconvolution algorithms.},\n  keywords = {Bayes methods;image colour analysis;inference mechanisms;medical image processing;Bayesian framework;reference color-vector matrix;variational inference;digital brightfield microscopy;single stained images;blind color deconvolution problem;variational Bayes color deconvolution;multistained images;Manganese;Image color analysis;Deconvolution;Bayes methods;TV;Europe;Signal processing;Blind color deconvolution;histopathological images;variational Bayes;total variation},\n  doi = {10.23919/EUSIPCO.2019.8902589},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533489.pdf},\n}\n\n
\n
\n\n\n
\n In digital brightfield microscopy, tissues are usually stained with two or more dyes. Color deconvolution aims at separating multi-stained images into single stained images. We formulate the blind color deconvolution problem within the Bayesian framework. Our model takes into account the similarity to a given reference color-vector matrix and spatial relations among the concentration pixels by a total variation prior. It utilizes variational inference and an evidence lower bound to estimate all the latent variables. The proposed algorithm is tested on real images and compared with classical and state-of-the-art color deconvolution algorithms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On Realizing Distributed Deep Neural Networks: An Astrophysics Case Study.\n \n \n \n \n\n\n \n Aspri, M.; Tsagkatakis, G.; Panousopoulou, A.; and Tsakalides, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902592,\n  author = {M. Aspri and G. Tsagkatakis and A. Panousopoulou and P. Tsakalides},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {On Realizing Distributed Deep Neural Networks: An Astrophysics Case Study},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Deep Learning architectures are extensively adopted as the core machine learning framework in both industry and academia. With large amounts of data at their disposal, these architectures can autonomously extract highly descriptive features for any type of input signals. However, the extensive volume of data combined with the demand for high computational resources, are introducing new challenges in terms of computing platforms. The work herein presented explores the performance of Deep Learning in the field of astrophysics, when conducted on a distributed environment. To set up such an environment, we capitalize on TensorFlowOnSpark, which combines both TensorFlow's dataflow graphs and Spark's cluster management. We report on the performance of a CPU cluster, considering both the number of training nodes and data distribution, while quantifying their effects via the metrics of training accuracy and training loss. Our results indicate that distribution has a positive impact on Deep Learning, since it accelerates our network's convergence for a given number of epochs. 
However, network traffic adds a significant amount of overhead, rendering it suitable for mostly very deep models or in big Data Analytics.},\n  keywords = {astronomy computing;Big Data;data analysis;data flow graphs;learning (artificial intelligence);neural net architecture;distributed deep neural networks;astrophysics case study;core machine;highly descriptive features;input signals;high computational resources;distributed environment;TensorFlow's dataflow graphs;Spark's cluster management;CPU cluster;training nodes;data distribution;training accuracy;training loss;network traffic;deep models;big Data Analytics;deep learning architectures;TensorFlowOnSpark;Training;Sparks;Distributed databases;Europe;Signal processing;Data models;Computer architecture;Distributed Deep Learning;Convolutional Neural Networks;Spectroscopic Redshift Estimation},\n  doi = {10.23919/EUSIPCO.2019.8902592},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533634.pdf},\n}\n\n
\n
\n\n\n
\n Deep Learning architectures are extensively adopted as the core machine learning framework in both industry and academia. With large amounts of data at their disposal, these architectures can autonomously extract highly descriptive features for any type of input signal. However, the extensive volume of data, combined with the demand for high computational resources, introduces new challenges in terms of computing platforms. The work presented herein explores the performance of Deep Learning in the field of astrophysics when conducted in a distributed environment. To set up such an environment, we capitalize on TensorFlowOnSpark, which combines TensorFlow's dataflow graphs with Spark's cluster management. We report on the performance of a CPU cluster, considering both the number of training nodes and the data distribution, while quantifying their effects via the metrics of training accuracy and training loss. Our results indicate that distribution has a positive impact on Deep Learning, since it accelerates our network's convergence for a given number of epochs. However, network traffic adds a significant amount of overhead, rendering the approach suitable mostly for very deep models or for big data analytics.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Learning Tensor-structured Dictionaries with Application to Hyperspectral Image Denoising.\n \n \n \n \n\n\n \n Dantas, C. F.; Cohen, J. E.; and Gribonval, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"LearningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902593,\n  author = {C. F. Dantas and J. E. Cohen and R. Gribonval},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Learning Tensor-structured Dictionaries with Application to Hyperspectral Image Denoising},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Dictionary learning, paired with sparse coding, aims at providing sparse data representations, that can be used for multiple tasks such as denoising or inpainting, as well as dimensionality reduction. However, when working with large data sets, the dictionary obtained by applying unstructured dictionary learning methods may be of considerable size, which poses both memory and computational complexity issues. In this article, we show how a previously proposed structured dictionary learning model, HO-SuKro, can be used to obtain more compact and readily-applicable dictionaries when the targeted data is a collection of multiway arrays. We introduce an efficient alternating optimization learning algorithm, describe important implementation details that have a considerable impact on both algorithmic complexity and actual speed, and showcase the proposed algorithm on a hyperspectral image denoising task.},\n  keywords = {computational complexity;image coding;image denoising;image representation;learning (artificial intelligence);optimisation;tensors;tensor-structured dictionaries;sparse coding;sparse data representations;dimensionality reduction;data sets;unstructured dictionary learning methods;computational complexity issues;dictionary learning model;compact dictionaries;readily-applicable dictionaries;algorithmic complexity;hyperspectral image denoising task;optimization learning algorithm;HO-SuKro;Dictionaries;Signal processing algorithms;Tensors;Encoding;Machine learning;Complexity theory;Signal processing;Dictionary learning;Tensor;Kronecker product;Hyperspectral imaging;Denoising},\n  doi = {10.23919/EUSIPCO.2019.8902593},\n  issn = {2076-1465},\n  month 
= {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533640.pdf},\n}\n\n
\n
\n\n\n
\n Dictionary learning, paired with sparse coding, aims at providing sparse data representations, that can be used for multiple tasks such as denoising or inpainting, as well as dimensionality reduction. However, when working with large data sets, the dictionary obtained by applying unstructured dictionary learning methods may be of considerable size, which poses both memory and computational complexity issues. In this article, we show how a previously proposed structured dictionary learning model, HO-SuKro, can be used to obtain more compact and readily-applicable dictionaries when the targeted data is a collection of multiway arrays. We introduce an efficient alternating optimization learning algorithm, describe important implementation details that have a considerable impact on both algorithmic complexity and actual speed, and showcase the proposed algorithm on a hyperspectral image denoising task.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Hardware Acceleration of Approximate Transform Module for the Versatile Video Coding Standard.\n \n \n \n\n\n \n Kammoun, A.; Hamidouche, W.; Philipp, P.; Belghith, F.; Massmoudi, N.; and Nezan, J. -F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902594,\n  author = {A. Kammoun and W. Hamidouche and P. Philipp and F. Belghith and N. Massmoudi and J. -F. Nezan},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Hardware Acceleration of Approximate Transform Module for the Versatile Video Coding Standard},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Versatile Video Coding (VVC) is the next generation video coding standard expected by the end of 2020. VVC introduces several new coding tools that enable better coding performance compared to the High Efficiency Video Coding (HEVC) standard. The Multiple Transform Selection (MTS) concept, as introduced in VVC, relies on three trigonometrical transforms, and at the encoder side, selects the couple of horizontal and vertical transforms that maximises the Rate-Distortion cost. However, the new Discrete Sine Transform (DST)-VII and Discrete Cosine Transform (DCT)-VIII do not have fast computing algorithms and rely on matrix multiplication, which requires high hardware resources especially for large block sizes. This paper tackles the hardware implementation of an approximation of the MTS module. This approximation consists in applying adjustment stages, based on sparse block-band matrices, to variants of the DCT-II family, mainly DCT-II and its inverse. Therefore, an efficient 2D hardware implementation of the forward and inverse approximate transform module is proposed. The architecture design includes a pipelined and reconfigurable forward-inverse DCT-II core transform. A unified 2D implementation of 16 and 32-point forward-inverse DCT-II, approximate DST-VII and DCT-VIII is also presented. 
The synthesis results show that the design is able to sustain 2K and 4K videos at 377 and 94 frames per second, respectively, while using only 18% of ALMs, 40% of registers and 34% of Digital Signal Processing (DSP) blocks of the Arria 10 SoC platform.},\n  keywords = {digital signal processing chips;discrete cosine transforms;matrix multiplication;rate distortion theory;system-on-chip;video coding;hardware acceleration;approximate transform module;versatile video coding standard;VVC;coding tools;coding performance;high efficiency video coding standard;trigonometrical transforms;horizontal transforms;vertical transforms;matrix multiplication;hardware resources;hardware implementation;sparse block-band matrices;forward-inverse DCT-II core;unified 2D implementation;32-point forward-inverse DCT-II;DST-VII;DCT-VIII;multiple transform selection concept;Discrete cosine transforms;Two dimensional displays;Computer architecture;Hardware;Encoding;Sparse matrices;Versatile Video Coding;Hardware implementation;Approximation;DCT-II;DST-VII and DCT-VIII},\n  doi = {10.23919/EUSIPCO.2019.8902594},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Versatile Video Coding (VVC) is the next generation video coding standard expected by the end of 2020. VVC introduces several new coding tools that enable better coding performance compared to the High Efficiency Video Coding (HEVC) standard. The Multiple Transform Selection (MTS) concept, as introduced in VVC, relies on three trigonometrical transforms, and at the encoder side, selects the couple of horizontal and vertical transforms that maximises the Rate-Distortion cost. However, the new Discrete Sine Transform (DST)-VII and Discrete Cosine Transform (DCT)-VIII do not have fast computing algorithms and rely on matrix multiplication, which requires high hardware resources especially for large block sizes. This paper tackles the hardware implementation of an approximation of the MTS module. This approximation consists in applying adjustment stages, based on sparse block-band matrices, to variants of the DCT-II family, mainly DCT-II and its inverse. Therefore, an efficient 2D hardware implementation of the forward and inverse approximate transform module is proposed. The architecture design includes a pipelined and reconfigurable forward-inverse DCT-II core transform. A unified 2D implementation of 16 and 32-point forward-inverse DCT-II, approximate DST-VII and DCT-VIII is also presented. The synthesis results show that the design is able to sustain 2K and 4K videos at 377 and 94 frames per second, respectively, while using only 18% of ALMs, 40% of registers and 34% of Digital Signal Processing (DSP) blocks of the Arria 10 SoC platform.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Identify of Spatial Similarity of Electroencephalography (EEG) during Working-Memory Maintenance.\n \n \n \n \n\n\n \n Song, Y.; Zhang, Z.; Hu, T.; Gong, X.; and Nandi, A. K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"IdentifyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902595,\n  author = {Y. Song and Z. Zhang and T. Hu and X. Gong and A. K. Nandi},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Identify of Spatial Similarity of Electroencephalography (EEG) during Working-Memory Maintenance},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Working memory maintenance is one of the important procedures during working memory storage into long-term memory. This paper utilizes consensus clustering to analyze the spatial similarity amongst whole brain regions during working-memory maintenance processing with 128 channels of scalp Electroencephalography (EEG) records. This paper develops a data-driven methodology to extract the similarity of spatial information processing across the larger brain system during working-memory maintenance. Based on group analysis of 20 subjects, the EEG channels with similarities are extracted to illustrate the functional brain connectivity of material-specific memory maintenance. The power of the alpha frequency band (8-12 Hz) appears to provide the discriminative information of the material-specific memory maintenance.},\n  keywords = {electroencephalography;medical signal processing;neurophysiology;working-memory maintenance;memory storage;long-term memory;material-specific memory maintenance;electroencephalography;EEG channels;functional brain connectivity;alpha frequency band;frequency 8.0 Hz to 12.0 Hz;Consensus clustering;data-driven;spatial similarity;working memory maintenance;EEG},\n  doi = {10.23919/EUSIPCO.2019.8902595},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529075.pdf},\n}\n\n
\n
\n\n\n
\n Working memory maintenance is one of the important procedures during working memory storage into long-term memory. This paper utilizes consensus clustering to analyze the spatial similarity amongst whole brain regions during working-memory maintenance processing with 128 channels of scalp Electroencephalography (EEG) records. This paper develops a data-driven methodology to extract the similarity of spatial information processing across the larger brain system during working-memory maintenance. Based on group analysis of 20 subjects, the EEG channels with similarities are extracted to illustrate the functional brain connectivity of material-specific memory maintenance. The power of the alpha frequency band (8-12 Hz) appears to provide the discriminative information of the material-specific memory maintenance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Compressed sensing for the extraction of atrial fibrillation patterns from surface electrocardiograms.\n \n \n \n \n\n\n \n Ghrissi, A.; and Zarzoso, V.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CompressedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902596,\n  author = {A. Ghrissi and V. Zarzoso},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Compressed sensing for the extraction of atrial fibrillation patterns from surface electrocardiograms},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The non-invasive analysis of atrial fibrillation (AF) arrhythmia represents a challenge nowadays. The fibrillatory pattern of AF, known as f-wave, is partially masked by the ventricular activity of the heartbeat in the surface electrocardiogram (ECG). Classical techniques aiming to extract the f-wave are based on average beat subtraction (ABS) or blind source separation (BSS). They present limitations in performance and require long ECG records as well as multi-channel records in the case of BSS. The originality of the present work consists in exploiting the sparsity of the atrial activity (AA) signal in the frequency domain to extract the full f-wave using a recent data acquisition technique called compressed sensing (CS). The present contribution takes a step forward in the extraction of the f-wave by exploiting the time rather than the space dimension. We intend to recover the AA signal with a variant of CS where classical random sampling is replaced by a block sampling scheme. 
Our breakthrough finding consists in the ability of our method to accurately extract the AA from a short ECG record of just one heartbeat, with a normalized mean squared error of 15%, which is unfeasible with ABS, BSS and other variants that require longer observation windows.},\n  keywords = {blind source separation;compressed sensing;electrocardiography;mean square error methods;medical signal processing;surface electrocardiogram;noninvasive analysis;atrial fibrillation arrhythmia;ventricular activity;heartbeat;ABS;blind source separation;BSS;long ECG records;multichannel records;atrial activity signal;frequency domain;full f-wave;data acquisition technique;compressed sensing;AA signal;classical random sampling;block sampling scheme;short ECG record;atrial fibrillation patterns;Electrocardiography;FAA;Heart beat;Signal processing;Compressed sensing;Surface waves;Europe},\n  doi = {10.23919/EUSIPCO.2019.8902596},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533714.pdf},\n}\n\n
\n
\n\n\n
\n The non-invasive analysis of atrial fibrillation (AF) arrhythmia represents a challenge nowadays. The fibrillatory pattern of AF, known as f-wave, is partially masked by the ventricular activity of the heartbeat in the surface electrocardiogram (ECG). Classical techniques aiming to extract the f-wave are based on average beat subtraction (ABS) or blind source separation (BSS). They present limitations in performance and require long ECG records as well as multi-channel records in the case of BSS. The originality of the present work consists in exploiting the sparsity of the atrial activity (AA) signal in the frequency domain to extract the full f-wave using a recent data acquisition technique called compressed sensing (CS). The present contribution takes a step forward in the extraction of the f-wave by exploiting the time rather than the space dimension. We intend to recover the AA signal with a variant of CS where classical random sampling is replaced by a block sampling scheme. Our breakthrough finding consists in the ability of our method to accurately extract the AA from a short ECG record of just one heartbeat, with a normalized mean squared error of 15%, which is unfeasible with ABS, BSS and other variants that require longer observation windows.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimization of the Radio Access to Provide Vehicular Communications Based on Drive Tests.\n \n \n \n \n\n\n \n Polegre, A. Á.; Leal, R. P.; García, J. A. G.; and Armada, A. G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OptimizationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902598,\n  author = {A. Á. Polegre and R. P. Leal and J. A. G. García and A. G. Armada},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Optimization of the Radio Access to Provide Vehicular Communications Based on Drive Tests},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Cellular radio access networks provide a great flexibility in terms of the number of possible transmission modes and signal formats. This flexibility is increased for 5G, whose field test trials and deployment are forthcoming, with several numerologies, new frequencies and higher bandwidth. With the aim of providing some understanding about the best options to provide a certain coverage and quality of service, we have carried out real environment measurements through drive tests performed in a 4G network. After validating and adjusting the channel model, some of the new features of the 5G New Radio have been included for comparison purposes. In this paper we present the measurements, validation process and results that offer some insights on how this new technology will perform.},\n  keywords = {4G mobile communication;5G mobile communication;cellular radio;quality of service;radio access networks;5G New Radio;vehicular communications;drive tests;cellular radio access networks;transmission modes;signal formats;field test;quality of service;environment measurements;radio access optimization;4G network;5G mobile communication;Long Term Evolution;Three-dimensional displays;Channel models;Testing;Interference;Frequency measurement;Drive tests;LTE-A;5G NR;path loss;RSSI;throughput},\n  doi = {10.23919/EUSIPCO.2019.8902598},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533858.pdf},\n}\n\n
\n
\n\n\n
\n Cellular radio access networks provide a great flexibility in terms of the number of possible transmission modes and signal formats. This flexibility is increased for 5G, whose field test trials and deployment are forthcoming, with several numerologies, new frequencies and higher bandwidth. With the aim of providing some understanding about the best options to provide a certain coverage and quality of service, we have carried out real environment measurements through drive tests performed in a 4G network. After validating and adjusting the channel model, some of the new features of the 5G New Radio have been included for comparison purposes. In this paper we present the measurements, validation process and results that offer some insights on how this new technology will perform.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Parameter Estimation of Heavy-Tailed AR(p) Model from Incomplete Data.\n \n \n \n \n\n\n \n Liu, J.; Kumar, S.; and Palomar, D. P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ParameterPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902599,\n  author = {J. Liu and S. Kumar and D. P. Palomar},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Parameter Estimation of Heavy-Tailed AR(p) Model from Incomplete Data},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The autoregressive (AR) model is a widely used model to represent time series data from numerous applications, for example, financial time series, DNA microarray data, etc. In all such applications, issues with missing values frequently occur in the data observation or recording process. Traditionally, the parameter estimation for AR models of order p (AR(p)) from data with missing values has been considered under the Gaussian innovation assumption, and there does not exist any work addressing the issue of missing data for the heavy-tailed AR(p) model. This paper proposes an efficient framework for the parameter estimation from incomplete heavy-tailed AR(p) time series based on the stochastic approximation expectation maximization (SAEM) coupled with a Markov Chain Monte Carlo (MCMC) procedure. The proposed algorithm is computationally cheap and easy to implement. 
Simulation results demonstrate the efficacy of the proposed framework.},\n  keywords = {autoregressive processes;Bayes methods;expectation-maximisation algorithm;Gaussian processes;Markov processes;Monte Carlo methods;time series;parameter estimation;heavy-tailed model;incomplete data;autoregressive model;time series data;financial time series;DNA microarray data;data observation;AR models;Gaussian innovation assumption;stochastic approximation expectation maximization;SAEM;MCMC procedure;Markov Chain Monte Carlo procedure;Time series analysis;Data models;Parameter estimation;Signal processing algorithms;Technological innovation;Approximation algorithms;Computational modeling;AR model;heavy-tail;missing values;stochastic EM;MCMC},\n  doi = {10.23919/EUSIPCO.2019.8902599},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533469.pdf},\n}\n\n
\n
\n\n\n
\n The autoregressive (AR) model is a widely used model to represent time series data from numerous applications, for example, financial time series, DNA microarray data, etc. In all such applications, issues with missing values frequently occur in the data observation or recording process. Traditionally, the parameter estimation for AR models of order p (AR(p)) from data with missing values has been considered under the Gaussian innovation assumption, and there does not exist any work addressing the issue of missing data for the heavy-tailed AR(p) model. This paper proposes an efficient framework for the parameter estimation from incomplete heavy-tailed AR(p) time series based on the stochastic approximation expectation maximization (SAEM) coupled with a Markov Chain Monte Carlo (MCMC) procedure. The proposed algorithm is computationally cheap and easy to implement. Simulation results demonstrate the efficacy of the proposed framework.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Wireless Multi-group Multicast Precoding with Selective RF Energy Harvesting.\n \n \n \n \n\n\n \n Gautam, S.; Lagunas, E.; Chatzinotas, S.; and Ottersten, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"WirelessPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902601,\n  author = {S. Gautam and E. Lagunas and S. Chatzinotas and B. Ottersten},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Wireless Multi-group Multicast Precoding with Selective RF Energy Harvesting},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We present a novel framework for multi-group multicast precoding in the presence of three types of wireless users which are distributed among various multicast groups. A multi-antenna transmitter conveys information and/or energy to the groups of corresponding receivers using more than one multicast stream. The information-specific users have conventional receiver architectures to process data, energy harvesting users collect energy using the non-linear energy harvesting module and each of the joint information decoding and energy harvesting capable users is assumed to employ the separated architecture with disparate non-linear energy harvesting and conventional information decoding units. In this context, we formulate and analyze the problem of total transmit power minimization for optimal precoder design subjected to minimum signal-to-interference-and-noise ratio and harvested energy demands at the respective users under three different scenarios. This problem is solved via semi-definite relaxation and the advantages of employing separate information and energy precoders are shown over joint and per-user information and energy precoder designs. 
Simulation results illustrate the benefits of the proposed framework under several operating conditions and parameter values.},\n  keywords = {antenna arrays;energy harvesting;MIMO communication;multicast communication;optimisation;precoding;multiantenna transmitter;multicast streams;information specific users;conventional receiver architectures;energy harvesting users;nonlinear energy harvesting module;joint information decoding;conventional information decoding units;total transmit power minimization;optimal precoder design;energy demands;respective users;per-user information;wireless multigroup multicast precoding;selective RF energy harvesting;wireless users;minimum signal-to-interference-and-noise ratio;semi-definite relaxation;Precoding;Receivers;Energy harvesting;Decoding;Minimization;MISO communication;Transmitting antennas},\n  doi = {10.23919/EUSIPCO.2019.8902601},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534141.pdf},\n}\n\n
\n
\n\n\n
\n We present a novel framework for multi-group multicast precoding in the presence of three types of wireless users which are distributed among various multicast groups. A multi-antenna transmitter conveys information and/or energy to the groups of corresponding receivers using more than one multicast stream. The information-specific users have conventional receiver architectures to process data, energy harvesting users collect energy using the non-linear energy harvesting module and each of the joint information decoding and energy harvesting capable users is assumed to employ the separated architecture with disparate non-linear energy harvesting and conventional information decoding units. In this context, we formulate and analyze the problem of total transmit power minimization for optimal precoder design subjected to minimum signal-to-interference-and-noise ratio and harvested energy demands at the respective users under three different scenarios. This problem is solved via semi-definite relaxation and the advantages of employing separate information and energy precoders are shown over joint and per-user information and energy precoder designs. Simulation results illustrate the benefits of the proposed framework under several operating conditions and parameter values.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Dual-threshold Based Local Patch Construction Method for Manifold Approximation And Its Application to Facial Expression Analysis.\n \n \n \n \n\n\n \n Happy, S. L.; Dantcheva, A.; and Routray, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Dual-thresholdPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902603,\n  author = {S. L. Happy and A. Dantcheva and A. Routray},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Dual-threshold Based Local Patch Construction Method for Manifold Approximation And Its Application to Facial Expression Analysis},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose a manifold based facial expression recognition framework which utilizes the intrinsic structure of the data distribution to accurately classify the expression categories. Specifically, we model the expressive faces as the points on linear subspaces embedded in a Grassmannian manifold, also called the expression manifold. We propose the dual-threshold based local patch (DTLP) extraction method for constructing the local subspaces, which in turn approximates the expression manifold. Further, we use the affinity of the face points from the subspaces for classifying them into different expression classes. Our method is evaluated on four publicly available databases with two well known feature extraction techniques. It is evident from the results that the proposed method efficiently models the expression manifold and improves the recognition accuracy in spite of the simplicity of the facial representatives.},\n  keywords = {approximation theory;emotion recognition;face recognition;feature extraction;manifold approximation;facial expression recognition framework;expression categories;expressive faces;Grassmannian manifold;expression manifold;dual-threshold based local patch extraction method;local subspaces;expression classes;Manifolds;Faces;Linearity;Feature extraction;Face recognition;Facial features;Iron;Facial expression analysis;manifold approximation;point to subspace distance},\n  doi = {10.23919/EUSIPCO.2019.8902603},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531930.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a manifold based facial expression recognition framework which utilizes the intrinsic structure of the data distribution to accurately classify the expression categories. Specifically, we model the expressive faces as the points on linear subspaces embedded in a Grassmannian manifold, also called the expression manifold. We propose the dual-threshold based local patch (DTLP) extraction method for constructing the local subspaces, which in turn approximates the expression manifold. Further, we use the affinity of the face points from the subspaces for classifying them into different expression classes. Our method is evaluated on four publicly available databases with two well known feature extraction techniques. It is evident from the results that the proposed method efficiently models the expression manifold and improves the recognition accuracy in spite of the simplicity of the facial representatives.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Environment for Gestural Interaction with 3D Virtual Musical Instruments as an Educational Tool.\n \n \n \n \n\n\n \n Garoufis, C.; Zlatintsi, A.; Kritsis, K.; Filntisis, P. P.; Katsouros, V.; and Maragos, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902604,\n  author = {C. Garoufis and A. Zlatintsi and K. Kritsis and P. P. Filntisis and V. Katsouros and P. Maragos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {An Environment for Gestural Interaction with 3D Virtual Musical Instruments as an Educational Tool},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper presents a finalized version of an environment intended for performance and gestural interaction with three-dimensional virtual musical instruments, developed as a part of a larger educational platform, the iMuSciCA workbench. The environment can employ either a Leap Motion or a Kinect sensor, and enables interaction with a variety of virtual musical instruments, namely virtual interpretations of a bichord, a xylophone, a drumming set, a guitar and an upright bass, by means of performing and recognizing hand gestures similar to the ones needed to play their physical counterparts. In order to showcase the usability of the platform in an educational context and measure its effectiveness, we designed a scenario, where the user tries to keep a steady rhythm while drumming. 
A usability study of the above scenario, involving 22 users, demonstrates that the audiovisual feedback can actually provide assistance to the user.},\n  keywords = {computer aided instruction;gesture recognition;human computer interaction;music;musical instruments;virtual reality;gestural interaction;three-dimensional virtual musical instruments;educational platform;virtual interpretations;hand gesture recognition;educational tool;iMuSciCA workbench;Leap Motion;Kinect sensor;Instruments;Music;Three-dimensional displays;Engines;Tools;Usability;Visualization;virtual musical instruments;gesture recognition;gestural interaction;educational tool;HCI},\n  doi = {10.23919/EUSIPCO.2019.8902604},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533729.pdf},\n}\n\n
@InProceedings{8902605,
  author = {K. Konishi and T. Shise and R. Sasaki and T. Furukawa},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Multiple k-Means Clustering Based Locally Low-Rank Approach to Nonlinear Matrix Completion},
  year = {2019},
  pages = {1-5},
  abstract = {This paper deals with the nonlinear matrix completion problem, that of restoring missing entries in a given matrix whose column vectors belong to a low-dimensional manifold. Assuming that a low-dimensional manifold can be approximated locally as a low-dimensional linear subspace, this paper proposes a new locally low-rank approach. This approach iteratively solves low-rank matrix completion problems for submatrices generated by k-means clustering for several values of k, and restores the missing entries. Numerical examples show that the proposed algorithm achieves better performance than other algorithms.},
  keywords = {approximation theory;iterative methods;matrix algebra;pattern clustering;k-means clustering;nonlinear matrix completion problem;low dimensional manifold;low dimensional linear subspace;low-rank matrix completion problems;Signal processing algorithms;Clustering algorithms;Image restoration;Manifolds;Approximation algorithms;Minimization;Matrix converters;matrix completion;matrix rank minimization;nuclear norm minimization;compressed sensing},
  doi = {10.23919/EUSIPCO.2019.8902605},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533481.pdf},
}
@InProceedings{8902606,
  author = {D. Jarchi and S. Sanei},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Derivation of Respiratory Effort from Photoplethysmography},
  year = {2019},
  pages = {1-5},
  abstract = {In this paper, a new non-invasive method is proposed to retrieve the respiratory effort component from a photoplethysmography (PPG) sensor. The PPG signals are recorded using a commercial wrist-worn device. The inverse synchrosqueezed wavelet (ISSW) transform is applied to reconstruct the respiratory component and then to derive the respiratory effort component. The reconstructed respiratory component and the respiratory effort signal from PPG are shown to be highly correlated with the airflow signal and with the respiratory effort extracted using a thermocouple secured under the nose. These findings can have a significant impact on applications such as sleep analysis, where unobtrusive continuous monitoring of respiratory effort from a wrist-worn sensor is crucial for the identification of sleep abnormalities such as sleep apnea.},
  keywords = {biomedical equipment;medical disorders;medical signal processing;patient monitoring;photoplethysmography;pneumodynamics;sleep;wavelet transforms;sleep apnea;thermocouple;inverse synchrosqueezed wavelet transform;wrist worn sensor;respiratory effort signal;respiratory component;commercial wrist worn device;PPG signals;photoplethysmography sensor;respiratory effort component;Time-frequency analysis;Modulation;Indexes;Estimation;Continuous wavelet transforms;respiratory;inverse synchrosqueezed wavelet transform;PPG},
  doi = {10.23919/EUSIPCO.2019.8902606},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532770.pdf},
}
@InProceedings{8902607,
  author = {A. M. Daniel},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Improved Waveforms for Time-Efficient Radar Range Measurement Disambiguation},
  year = {2019},
  pages = {1-5},
  abstract = {Ambiguity in radar measurements is a well-studied problem, but more recent advances in multifunction phased-array radars motivate the development of disambiguation schemes that operate in a minimal amount of time. To that end, a recent work has developed a disambiguation scheme that can be optimized to minimize dwell time, but uses waveforms that may degrade the probability of false alarm and result in the masking of small targets. In this paper, we develop several waveform design methods, along with the corresponding optimization framework, for the purpose of mitigating these issues in the case where target velocity is small or known in advance. We show that while some improvement is possible with no increase in dwell time using interpulse codes, substantial improvement can be had with only a mild increase in dwell time by adding CPI separation or mismatched filters, and a perfect response can be obtained by using codes with perfect periodic autocorrelations, but with a more substantial increase in dwell time.},
  keywords = {correlation methods;optimisation;phased array radar;radar signal processing;false alarm;waveform design methods;optimization framework;dwell time;time-efficient radar range measurement disambiguation;multifunction phased-array radars;interpulse codes;CPI separation;mismatched filters;perfect periodic autocorrelations;Optimization;Correlation;Radar measurements;Signal to noise ratio;Europe;Radar measurement ambiguity;Dwell time minimization;Interpulse codes;Sidelobe ratios;Optimization},
  doi = {10.23919/EUSIPCO.2019.8902607},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570527198.pdf},
}
@InProceedings{8902609,
  author = {Z. Zhang and K. Nakadai and H. Nakajima and N. Sumida},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Acoustic Simulation in Dynamic Environments for Robot Audition},
  year = {2019},
  pages = {1-5},
  abstract = {This paper addresses acoustic simulation in dynamic environments for robot audition. For such environments, we consider three cases, that is, a moving microphone, a moving sound source, and a combination of the two. The proposed method simulates a dynamic environment by assuming that a motion trajectory of a microphone and/or a speaker can be discretized. We validated the proposed method through the accuracy of the simulated signals in terms of frequency and volume, and the performance of automatic speech recognition (ASR) with an acoustic model trained by simulated speech signals. The experimental results showed that the proposed method can simulate the sound properties of volume and frequency in dynamic environments well. The performance of ASR is improved with the acoustic model trained with the simulated speech signals.},
  keywords = {acoustic signal processing;microphones;robots;speech recognition;dynamic environment;acoustic model;simulated speech signals;acoustic simulation;robot audition;moving microphone;moving sound source;automatic speech recognition;Microphones;Acoustics;Trajectory;Dynamics;Robots;Signal processing;Doppler effect;acoustic simulation;robot audition;moving sound source;moving microphone;dynamic environment;robust automatic speech recognition},
  doi = {10.23919/EUSIPCO.2019.8902609},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528646.pdf},
}
@InProceedings{8902611,
  author = {H. Alghamdi and M. Grogan and R. Dahyot},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Patch-Based Colour Transfer with Optimal Transport},
  year = {2019},
  pages = {1-5},
  abstract = {This paper proposes a new colour transfer method with optimal transport to transfer the colour of a source image to match the colour of a target image of the same scene. We propose to formulate the problem in higher-dimensional spaces (than colour spaces) by encoding overlapping neighborhoods of pixels containing colour information as well as spatial information. Since several recoloured candidates are now generated for each pixel in the source image, we define an original procedure to efficiently merge these candidates, which allows denoising and artifact removal as well as colour transfer. Experiments show quantitative and qualitative improvements over previous colour transfer methods. Our method can be applied to different contexts of colour transfer, such as transferring colour between different camera models, camera settings, illumination conditions and colour retouch styles for photographs.},
  keywords = {cameras;filtering theory;image colour analysis;image enhancement;image matching;patch-based colour transfer;Optimal transport;colour transfer method;source image;target image;higher dimensional spaces;colour spaces;colour information;Image color analysis;Signal processing algorithms;Two dimensional displays;Cameras;Histograms;Measurement;Europe;optimal transport;colour transfer;image enhancement;JPEG compression blocks},
  doi = {10.23919/EUSIPCO.2019.8902611},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533179.pdf},
}
@InProceedings{8902614,
  author = {N. Bakir and S. A. Fezza and W. Hamidouche and K. Samrouth and O. Déforges},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Subjective Evaluation of Light Field Image Compression Methods based on View Synthesis},
  year = {2019},
  pages = {1-5},
  abstract = {Light field (LF) images provide rich visual information, enabling applications ranging from post-capture image processing to immersive experiences. However, this rich information requires significant storage and bandwidth, which urgently raises the question of compression. Many studies have investigated the compression of LF images by exploiting both the spatial and angular redundancies present in them. Recently, interesting LF compression approaches based on view synthesis have been proposed. In these approaches, only sparse samples of the LF views are encoded and transmitted, while the remaining views are synthesized at the decoder side. Different techniques have been proposed to synthesize the dropped views. In this paper, we describe a subjective quality evaluation of two recent view-synthesis-based compression methods and compare them to two pseudo-video-sequence-based coding approaches. Results show that view-synthesis-based approaches provide higher visual quality than the naive LF coding approaches. In addition, the database as well as the subjective scores are publicly available to help design new objective metrics, or can be used as a benchmark for future development of LF coding methods.},
  keywords = {data compression;image coding;image sequences;dropped views;subjective quality evaluation;compression methods;pseudovideo sequence based coding approaches;view synthesis based approaches;visual quality;subjective scores;LF coding methods;light field image compression methods;visual information;post-capture image processing;immersive applications;bandwidth capabilities;LF images;spatial redundancies;angular redundancies;view synthesis technique;LF views;LF compression approaches;Image coding;Encoding;Visualization;Decoding;Standards;Image color analysis;Europe;Light field;Image compression;View synthesis;Subjective evaluation;CNN;Linear approximation},
  doi = {10.23919/EUSIPCO.2019.8902614},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533939.pdf},
}
@InProceedings{8902615,
  author = {J. G. García and A. Colomer and F. López-Mir and J. M. Mossi and V. Naranjo},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Computer Aid-System to Identify the First Stage of Prostate Cancer Through Deep-Learning Techniques},
  year = {2019},
  pages = {1-5},
  abstract = {Nowadays, there are high rates of discordance between pathologists when they analyse biopsy samples to diagnose prostate cancer according to the Gleason scale. Thus, we designed a computer-aided system capable of accurately differentiating between normal and pathological tissue at the first stage. Specifically, we used an original segmentation algorithm to identify regions of interest and to distinguish among them between artefacts (false glands), benign glands and Gleason grade 3 glands. Regarding the building of predictive models, we applied, for the first time, deep-learning algorithms to the previously segmented gland candidates. We compared the results reported by two different convolutional neural networks (CNNs) addressed with distinct classification strategies. The best model reached a multi-class classification accuracy of 0.812±0.033, after performing an in-depth data partitioning per medical history.},
  keywords = {biological organs;biomedical optical imaging;cancer;image classification;image segmentation;learning (artificial intelligence);medical image processing;neural nets;computer aid-system;prostate cancer;deep-learning techniques;biopsy samples;Gleason scale;computer-aid system;normal tissues;pathological ones;original segmentation algorithm;artefacts;false glands;benign glands;segmented gland candidates;deep-learning algorithms;Gleason grade 3 glands;Glands;Image segmentation;Signal processing algorithms;Pathology;Image color analysis;Prostate cancer;Predictive models;Convolutional neural networks;gland segmentation and classification;histological image;prostate cancer},
  doi = {10.23919/EUSIPCO.2019.8902615},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533819.pdf},
}
@InProceedings{8902618,
  author = {A. Schreibman and S. Markovich-Golan},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Infinite Impulse Response Echo Canceller in STFT Domain for Reverberant Environments},
  year = {2019},
  pages = {1-5},
  abstract = {Acoustic echo cancellation and system identification in reverberant environments have been thoroughly studied in the literature. Theoretically, in a reverberant environment the Acoustic Impulse Response (AIR) relating the loudspeaker signal, denoted reference, with the corresponding signal component at the microphone, denoted echo, is of infinite length and can be modeled as an Infinite Impulse Response (IIR) filter. Correspondingly, the echo signal can be modeled as an Auto Regressive Moving Average (ARMA) process. Yet, most methods for this problem adopt a Finite Impulse Response (FIR) system model, or equivalently a Moving Average (MA) echo signal model, due to their favorable simplicity and stability. The latter methods, denoted FIR-Acoustic Echo Canceller (AEC), employ an Adaptive Filter (AF) for tracking a possibly time-varying system and cancelling echo. Some contributions adopt an IIR system model and utilize it to derive a time-domain AEC and accurately analyze the room behaviour. An IIR system model has also been successfully applied in the Short Time Fourier Transform (STFT) domain for the dereverberation problem. In this contribution we consider an IIR model in the STFT domain and propose a novel online AEC algorithm, denoted IIR-AEC, which tracks the model parameters and cancels echo. The order of the feed-back filter, equivalent to the order of the Auto Regressive (AR) part of the echo signal model, can be designed to fit the acoustic model, and the order of the feed-forward filter, equivalent to the order of the MA part of the echo signal model, is limited to a single tap, thereby requiring that the STFT window be longer than the early part of the AIR. The computational complexity of the proposed IIR-AEC is comparable to a Recursive Least Squares (RLS) implementation of the FIR-AEC. These methods are evaluated using real measured AIRs drawn from a recording campaign, and the IIR-AEC is shown to outperform the FIR-AEC.},
  keywords = {acoustic signal processing;adaptive filters;autoregressive moving average processes;computational complexity;echo suppression;FIR filters;Fourier transforms;IIR filters;least squares approximations;loudspeakers;microphones;reverberation;time-varying systems;transient response;infinite impulse response echo canceller;auto regressive moving average process;finite impulse response system model;FIR-acoustic echo canceller;IIR-AEC;acoustic model;model parameters;IIR model;short time Fourier transform domain;time-domain AEC;IIR system model;possibly time-varying system;echo signal model;Infinite Impulse Response filter;denoted echo;acoustic impulse response;system identification;reverberant environment;STFT domain;Atmospheric modeling;Microphones;Echo cancellers;Adaptation models;Time-domain analysis;Loudspeakers},
  doi = {10.23919/EUSIPCO.2019.8902618},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529135.pdf},
}
@InProceedings{8902625,
  author = {A. Zaki and S. Chatterjee},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Convex Optimization Based Sparse Learning Over Networks},
  year = {2019},
  pages = {1-5},
  abstract = {In this paper, we consider the problem of estimating a sparse signal over a network. The main interest is to save communication resource for information exchange over the network and hence reduce processing time. With this aim, we develop a distributed learning algorithm where each node of the network uses a locally optimized convex optimization based algorithm. The nodes iteratively exchange their signal estimates over the network to refine the local estimates. The convex cost is constructed to promote sparsity as well as to include influence of estimates from the neighboring nodes. We provide a restricted isometry property (RIP)-based theoretical guarantee on the estimation quality of the proposed algorithm. Using simulations, we show that the algorithm provides competitive performance vis-a-vis a globally optimum distributed LASSO algorithm, both in convergence speed and estimation error.},
  keywords = {computer networks;convex programming;distributed algorithms;iterative methods;learning (artificial intelligence);optimisation;signal processing;sparse matrices;sparse signal;information exchange;distributed learning algorithm;locally optimized convex optimization based algorithm;signal estimates;neighboring nodes;globally optimum distributed LASSO algorithm;estimation error;convex optimization based sparse learning;isometry property-based theoretical guarantee;restricted isometry property;RIP-based theoretical guarantee;Signal processing algorithms;Convex functions;Sparse matrices;Estimation error;Signal to noise ratio;Convergence;Sparse learning;convex optimization;greedy algorithms;restricted isometry property},
  doi = {10.23919/EUSIPCO.2019.8902625},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534070.pdf},
}
\n
\n\n\n
\n In this paper, we consider the problem of estimating a sparse signal over a network. The main interest is to save communication resources for information exchange over the network and hence reduce processing time. With this aim, we develop a distributed learning algorithm in which each node of the network uses a locally optimized convex-optimization-based algorithm. The nodes iteratively exchange their signal estimates over the network to refine the local estimates. The convex cost is constructed to promote sparsity as well as to incorporate the influence of estimates from neighboring nodes. We provide a restricted isometry property (RIP)-based theoretical guarantee on the estimation quality of the proposed algorithm. Using simulations, we show that the algorithm provides competitive performance vis-à-vis a globally optimum distributed LASSO algorithm, in both convergence speed and estimation error.\n
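The exchange-and-refine scheme this abstract describes can be illustrated generically: each node takes a local sparsity-promoting proximal-gradient (ISTA) step on its own measurements, then averages its estimate with those of its neighbours. This is a minimal sketch of the general idea under standard assumptions, not the authors' algorithm; `distributed_ista`, `lam`, and `rho` are hypothetical names and values.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_ista(As, ys, adj, lam=0.05, rho=0.5, iters=200):
    """Diffusion-ISTA sketch: local sparse update, then neighbor averaging.

    As[i], ys[i]: node i's measurement matrix and observations.
    adj[i]: indices of node i's neighbors.
    """
    n = As[0].shape[1]
    xs = [np.zeros(n) for _ in As]
    # Step sizes 1/L with L the local Lipschitz constant ||A||_2^2.
    steps = [1.0 / np.linalg.norm(A, 2) ** 2 for A in As]
    for _ in range(iters):
        # Local sparsity-promoting proximal-gradient step.
        xs = [soft(x - mu * A.T @ (A @ x - y), mu * lam)
              for x, A, y, mu in zip(xs, As, ys, steps)]
        # Exchange: combine each estimate with its neighbors' estimates.
        xs = [(1 - rho) * xs[i] + rho * np.mean([xs[j] for j in nbrs], axis=0)
              for i, nbrs in enumerate(adj)]
    return xs
```

With enough local measurements, every node's estimate concentrates on the true support while the averaging step keeps the network in consensus.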
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-Rank Approximation via the Generalized Reweighted Iterative Nuclear Norm.\n \n \n \n \n\n\n \n Huang, Y.; Lan, L.; and Zhang, L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Low-RankPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902626,\n  author = {Y. Huang and L. Lan and L. Zhang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Low-Rank Approximation via the Generalized Reweighted Iterative Nuclear Norm},\n  year = {2019},\n  pages = {1-5},\n  abstract = {low-rank approximation problem has recently attracted wide concern due to its excellent performance in realworld applications such as image restoration, traffic monitoring, and face recognition. Compared with the classic nuclear norm, the Schatten-p norm is stated to be a closer approximation to restrain the singular values for practical applications in the real world. However, Schatten-p norm minimization is a challenging non-convex, non-smooth, and non-Lipschitz problem. In this paper, inspired by the reweighted l1 norm for compressive sensing, the generalized iterative reweighted nuclear norm (GIRNN) algorithm is proposed to approximate Schatten-p norm minimization. By involving the proposed algorithms, the problem becomes more tractable and the closed solutions are derived from the iteratively reweighted subproblems. 
Numerical experiments for the practical matrix completion (MC) problem and robust principal component analysis (RPCA) problem are illustrated to validate the superior performance of both algorithms over some common state-of-the-art methods.},\n  keywords = {approximation theory;concave programming;iterative methods;matrix algebra;minimisation;principal component analysis;generalized reweighted iterative nuclear norm;low-rank approximation problem;image restoration;traffic monitoring;classic nuclear norm;closer approximation;nonLipschitz problem;nuclear norm algorithm;approximate Schatten-p norm minimization;iteratively reweighted subproblems;matrix completion problem;robust principal component analysis problem;Signal processing algorithms;Minimization;Approximation algorithms;Principal component analysis;Sparse matrices;Europe;Signal processing;Low-rank approximation problem;matrix completion (MC);robust principal component analysis (RPCA);generalized iterative reweighted nuclear norm (GIRNN)},\n  doi = {10.23919/EUSIPCO.2019.8902626},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534401.pdf},\n}\n\n
\n
\n\n\n
\n The low-rank approximation problem has recently attracted wide attention due to its excellent performance in real-world applications such as image restoration, traffic monitoring, and face recognition. Compared with the classic nuclear norm, the Schatten-p norm is considered a closer approximation for constraining the singular values in practical applications. However, Schatten-p norm minimization is a challenging non-convex, non-smooth, and non-Lipschitz problem. In this paper, inspired by the reweighted l1 norm for compressive sensing, the generalized iterative reweighted nuclear norm (GIRNN) algorithm is proposed to approximate Schatten-p norm minimization. With the proposed algorithm, the problem becomes more tractable, and closed-form solutions are derived from the iteratively reweighted subproblems. Numerical experiments on the practical matrix completion (MC) and robust principal component analysis (RPCA) problems validate the superior performance of the proposed algorithms over several state-of-the-art methods.\n
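The iteratively reweighted idea can be illustrated with a generic weighted singular-value-thresholding loop for Schatten-p matrix completion: small singular values receive large weights, mimicking the non-convex penalty. This is a sketch of the standard reweighting technique, not the paper's GIRNN algorithm; `reweighted_svt`, `tau`, and the parameter values are illustrative.

```python
import numpy as np

def reweighted_svt(M, mask, p=0.5, tau=1.0, eps=1e-6, iters=50):
    """Iteratively reweighted singular-value thresholding for
    Schatten-p regularized matrix completion (generic sketch).

    M: matrix with observed entries; mask: boolean array of observed positions.
    """
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # Reweighting: w_i = p * (sigma_i + eps)^(p-1), so small singular
        # values are penalized more, approximating the Schatten-p norm.
        w = p * (s + eps) ** (p - 1.0)
        s_new = np.maximum(s - tau * w, 0.0)  # weighted soft-thresholding
        X = (U * s_new) @ Vt
        # Enforce consistency with the observed entries.
        X[mask] = M[mask]
    return X
```

Each iteration solves a weighted nuclear-norm subproblem in closed form via the SVD, which is what makes the reweighted formulation tractable.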
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n ON-IN: An On-Node and In-Node Based Mechanism for Big Data Collection in Large-Scale Sensor Networks.\n \n \n \n \n\n\n \n Ibrahim, M.; Harb, H.; Nasser, A.; Mansour, A.; and Osswald, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ON-IN:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902628,\n  author = {M. Ibrahim and H. Harb and A. Nasser and A. Mansour and C. Osswald},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {ON-IN: An On-Node and In-Node Based Mechanism for Big Data Collection in Large-Scale Sensor Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Nowadays, data are collected everywhere from searches on Google to posts on social media. Thus, the era of big data is started. Among many feasible sources, Wireless Sensor Network (WSN) becomes one of the vibrant big data sources where a huge volume of data is generated from various sensor nodes in large-scale networks. Compared to traditional networks, WSN faces serious challenges especially in data management and conserving sensor energies. In this work, we propose a novel two phases big data processing mechanism, called ONIN: on-node and in-node (between nodes). In the first phase, we introduce the Newton's forward difference method to reduce the amount of data generated at each sensor node. Meanwhile, in the second phase we perform a clustering technique, i.e. PKmeans (Pattern-Kmeans) algorithm, and aim to reduce the redundancy among data generated by neighboring nodes. 
Through both simulations and experiments on real telosB motes, we evaluated the efficiency of our proposed mechanism in terms of reducing data transmission and conserving sensor energies, compared to other existing techniques.},\n  keywords = {Big Data;Newton method;pattern clustering;wireless sensor networks;neighboring nodes;data transmission;conserving sensor energies;in-node based mechanism;big data collection;large-scale sensor networks;social media;wireless sensor network;WSN;vibrant big data sources;sensor node;large-scale networks;data management;phases big data processing mechanism;Newton forward difference method;Wireless sensor networks;Signal processing algorithms;Mathematical model;Data models;Clustering algorithms;Predictive models;Prediction algorithms;Wireless sensor networks;Newton forward difference method;PKmeans;telosB mote;energy conservation},\n  doi = {10.23919/EUSIPCO.2019.8902628},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533852.pdf},\n}\n\n
\n
\n\n\n
\n Nowadays, data are collected everywhere, from searches on Google to posts on social media; the era of big data has begun. Among many feasible sources, the Wireless Sensor Network (WSN) has become one of the most vibrant big data sources, where a huge volume of data is generated by various sensor nodes in large-scale networks. Compared to traditional networks, WSNs face serious challenges, especially in managing data and conserving sensor energy. In this work, we propose a novel two-phase big data processing mechanism, called ON-IN: on-node and in-node (between nodes). In the first phase, we introduce Newton's forward difference method to reduce the amount of data generated at each sensor node. In the second phase, we apply a clustering technique, the PKmeans (Pattern-Kmeans) algorithm, aiming to reduce the redundancy among data generated by neighboring nodes. Through both simulations and experiments on real TelosB motes, we evaluate the efficiency of our proposed mechanism in reducing data transmission and conserving sensor energy, compared to other existing techniques.\n
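Newton's forward difference method lends itself to prediction-based data reduction: if the last k readings lie on a degree-(k−1) polynomial, the k-th forward difference vanishes, which yields a one-step-ahead prediction, and a node need only transmit readings that deviate from that prediction. A generic sketch under those assumptions (the function names are illustrative, not the paper's ON-IN code):

```python
from math import comb

def predict_next(history):
    """Predict the next equally spaced sample, assuming the k previous
    samples lie on a degree-(k-1) polynomial. Then the k-th forward
    difference is zero: sum_{j=0..k} (-1)^(k-j) C(k,j) f_j = 0,
    which can be solved for the unknown f_k."""
    k = len(history)
    return -sum((-1) ** (k - j) * comb(k, j) * f for j, f in enumerate(history))

def reduce_stream(samples, window=3, thresh=0.5):
    """Transmit a reading only when it deviates from the prediction."""
    sent = list(samples[:window])           # bootstrap: send first readings
    history = list(samples[:window])
    for x in samples[window:]:
        pred = predict_next(history)
        if abs(x - pred) > thresh:
            sent.append(x)                  # real reading must be transmitted
            history = history[1:] + [x]
        else:
            history = history[1:] + [pred]  # sink can regenerate this value
    return sent
```

For example, `predict_next([1, 4, 9])` returns `16` (extrapolating the quadratic trend), and a perfectly linear stream costs only the bootstrap window in transmissions.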
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-Complexity 2-Coordinates Descent for Near-Optimal MMSE Soft-Output Massive MIMO Uplink Data Detection.\n \n \n \n \n\n\n \n Seidel, P.; Paul, S.; and Rust, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Low-ComplexityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902629,\n  author = {P. Seidel and S. Paul and J. Rust},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Low-Complexity 2-Coordinates Descent for Near-Optimal MMSE Soft-Output Massive MIMO Uplink Data Detection},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, a block extended coordinate descent algorithm is introduced for MMSE based soft-output massive MIMO signal detection, which exploits the simple inversion of small sub-Gram matrices to allow a low-complexity implementation. We show that the resulting two-coordinates descent approach has a computational complexity comparable to the original coordinate descent signal detector, whereas the latency bottleneck can be relaxed and further the data detection performance can be improved as the simulation results show. Also we show the possibility to approximate the Gram matrix with fewer multiplications while maintaining a near-optimal detection performance.},\n  keywords = {computational complexity;least mean squares methods;matrix multiplication;MIMO communication;signal detection;low-complexity 2-coordinates descent;near-optimal MMSE soft-output massive MIMO uplink data;descent algorithm;MMSE based soft-output massive MIMO signal detection;simple inversion;sub-Gram matrices;low-complexity implementation;resulting two-coordinates descent approach;computational complexity;original coordinate descent signal detector;data detection performance;near-optimal detection performance;Approximation algorithms;Signal processing algorithms;Computational complexity;Mathematical model;Massive MIMO;Detectors;Massive MIMO;Soft-Output;Signal Detection;Low-Complexity;Matrix Approximation},\n  doi = {10.23919/EUSIPCO.2019.8902629},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529980.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a block-extended coordinate descent algorithm is introduced for MMSE-based soft-output massive MIMO signal detection, which exploits the simple inversion of small sub-Gram matrices to allow a low-complexity implementation. We show that the resulting two-coordinates descent approach has a computational complexity comparable to the original coordinate descent signal detector, while the latency bottleneck is relaxed and, as the simulation results show, the data detection performance is improved. We also show that the Gram matrix can be approximated with fewer multiplications while maintaining near-optimal detection performance.\n
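The one-coordinate baseline that such detectors extend can be sketched as follows: MMSE detection solves the regularized normal equations (HᴴH + σ²I)x = Hᴴy, and coordinate descent minimizes exactly over one coordinate at a time. This is a generic sketch of that baseline, not the paper's block two-coordinate variant; `cd_mmse` and the parameters are illustrative.

```python
import numpy as np

def cd_mmse(H, y, sigma2, iters=50):
    """One-coordinate (Gauss-Seidel-style) descent for the MMSE system
    (H^H H + sigma^2 I) x = H^H y  -- generic sketch."""
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])  # regularized Gram matrix
    b = H.conj().T @ y                                # matched-filter output
    x = np.zeros(H.shape[1], dtype=complex)
    for _ in range(iters):
        for i in range(len(x)):
            # Exact minimization over coordinate i with the others fixed;
            # the diagonal A[i, i] is real and positive, so no inversion of
            # anything larger than a scalar is needed.
            x[i] += (b[i] - A[i] @ x) / A[i, i].real
    return x
```

Since A is Hermitian positive definite, these sweeps converge to the MMSE solution; the per-sweep cost is dominated by the inner products A[i] @ x.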
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Black-Box Decision based Adversarial Attack with Symmetric α-stable Distribution.\n \n \n \n\n\n \n Srinivasan, V.; Kuruoglu, E. E.; Müller, K. -R.; Samek, W.; and Nakajima, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@InProceedings{8902630,\n  author = {V. Srinivasan and E. E. Kuruoglu and K. -R. Müller and W. Samek and S. Nakajima},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Black-Box Decision based Adversarial Attack with Symmetric α-stable Distribution},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Developing techniques for adversarial attack and defense is an important research field for establishing reliable machine learning and its applications. Many existing methods employ Gaussian random variables for exploring the data space to find the most adversarial (for attacking) or least adversarial (for defense) point. However, the Gaussian distribution is not necessarily the optimal choice when the exploration is required to follow the complicated structure that most real-world data distributions exhibit. In this paper, we investigate how statistics of random variables affect such random walk exploration. Specifically, we generalize the Boundary Attack, a state-of-the-art blackbox decision based attacking strategy, and propose the Le'vy-Attack, where the random walk is driven by symmetric α-stable random variables. Our experiments on MNIST and CIFAR10 datasets show that the Le'vy-Attack explores the image data space more efficiently, and significantly improves the performance. 
Our results also give an insight into the recently found fact in the whitebox attacking scenario that the choice of the norm for measuring the amplitude of the adversarial patterns is essential.},\n  keywords = {Gaussian distribution;Gaussian processes;learning (artificial intelligence);random processes;Boundary Attack;α-stable random variables;image data space;whitebox attacking scenario;adversarial patterns;black-box;adversarial attack;symmetric α-stable distribution;reliable machine learning;Gaussian random variables;Gaussian distribution;optimal choice;real-world data distributions;random walk exploration;blackbox decision;Machine learning;Random variables;Europe;Signal processing;Gaussian distribution;Training;Perturbation methods;adversarial attack, α;-stable distribution;deep neural networks;image classification.},\n  doi = {10.23919/EUSIPCO.2019.8902630},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Developing techniques for adversarial attack and defense is an important research field for establishing reliable machine learning and its applications. Many existing methods employ Gaussian random variables for exploring the data space to find the most adversarial (for attacking) or least adversarial (for defense) point. However, the Gaussian distribution is not necessarily the optimal choice when the exploration is required to follow the complicated structure that most real-world data distributions exhibit. In this paper, we investigate how the statistics of the random variables affect such random walk exploration. Specifically, we generalize the Boundary Attack, a state-of-the-art black-box decision-based attack strategy, and propose the Lévy-Attack, where the random walk is driven by symmetric α-stable random variables. Our experiments on the MNIST and CIFAR10 datasets show that the Lévy-Attack explores the image data space more efficiently and significantly improves the performance. Our results also give insight into the recent finding, in the white-box attack scenario, that the choice of norm for measuring the amplitude of the adversarial patterns is essential.\n
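Symmetric α-stable noise of the kind that drives such a random walk can be generated with the standard Chambers–Mallows–Stuck transform; a minimal sketch (the attack itself is not reproduced here, and the function name is illustrative):

```python
import math
import random

def sym_alpha_stable(alpha, rng):
    """Chambers-Mallows-Stuck sampler for a standard symmetric
    alpha-stable variable, 0 < alpha <= 2.
    alpha=1 gives Cauchy; alpha=2 gives Gaussian with variance 2."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)  # uniform angle
    w = rng.expovariate(1.0)                    # unit-mean exponential
    if alpha == 1.0:
        return math.tan(u)                      # Cauchy special case
    s = math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
    return s * (math.cos(u * (1.0 - alpha)) / w) ** ((1.0 - alpha) / alpha)
```

Smaller α yields heavier tails, so a random walk driven by these variables mixes occasional large jumps with many small local steps.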
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Stress Detection Through Electrodermal Activity (EDA) and Electrocardiogram (ECG) Analysis in Car Drivers.\n \n \n \n \n\n\n \n Zontone, P.; Affanni, A.; Bernardini, R.; Piras, A.; and Rinaldo, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"StressPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902631,\n  author = {P. Zontone and A. Affanni and R. Bernardini and A. Piras and R. Rinaldo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Stress Detection Through Electrodermal Activity (EDA) and Electrocardiogram (ECG) Analysis in Car Drivers},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The stress in a driver, happening during unforeseen events or taxing situations, is linked to a subject's sympathetic system response. We present a system which detects the stress presence in car drivers through the analysis of an endosomatic Electrodermal Activity (EDA) signal, namely, Skin Potential Response (SPR), coupled with the analysis of the Electrocardiogram (ECG) signal. To log these signals we utilize a device which records the SPR from each hand of the driver, and the ECG from the chest. In the case of the SPR signal, since the hands movement injects motion artifacts, we also utilize an algorithm that dynamically selects the smoother signal coming from the two hands, and is thus able to output a clean SPR signal. Statistical features are then derived from the ECG and SPR signals, allowing their classification using a Supervised Machine Learning Algorithm. Various subjects were tested in an environment set in a company which develops professional driving simulators, both in hardware and software, and consisted in a motorized platform, a cockpit and a 180° projection screen. The test encompassed driving through a highway, with some unforeseen events happening at some positions. 
In the end we get a Balanced Accuracy in stress detection of 77.59 % for the considered events.},\n  keywords = {electrocardiography;learning (artificial intelligence);medical signal processing;skin;endosomatic electrodermal activity signal;skin potential response;stress presence;taxing situations;car drivers;EDA;stress detection;unforeseen events;clean SPR signal;smoother signal;hands movement;ECG;electrocardiogram signal;Stress;Electrocardiography;Support vector machines;Skin;Automobiles;Motion artifacts;Heart rate;Stress Detection;Skin Potential Response;Electrocardiogram;Motion Artifact Removal;Supervised Machine Learning Algorithm},\n  doi = {10.23919/EUSIPCO.2019.8902631},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533065.pdf},\n}\n\n
\n
\n\n\n
\n Stress in a driver, arising during unforeseen events or taxing situations, is linked to the subject's sympathetic system response. We present a system that detects the presence of stress in car drivers through the analysis of an endosomatic Electrodermal Activity (EDA) signal, namely the Skin Potential Response (SPR), coupled with the analysis of the Electrocardiogram (ECG) signal. To log these signals, we use a device that records the SPR from each hand of the driver and the ECG from the chest. Since hand movement injects motion artifacts into the SPR signal, we also employ an algorithm that dynamically selects the smoother of the two hand signals and is thus able to output a clean SPR signal. Statistical features are then derived from the ECG and SPR signals, allowing their classification using a supervised machine learning algorithm. Various subjects were tested at a company that develops professional driving simulators, both in hardware and software, using a setup consisting of a motorized platform, a cockpit, and a 180° projection screen. The test involved driving along a highway, with unforeseen events happening at certain positions. We obtain a balanced accuracy in stress detection of 77.59% for the considered events.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Real-Time Prototyping of Matlab-Java Code Integration for Water Sensor Networks Applications.\n \n \n \n \n\n\n \n Roubakis, S.; Tzagkarakis, G.; and Tsakalides, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Real-TimePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902633,\n  author = {S. Roubakis and G. Tzagkarakis and P. Tsakalides},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Real-Time Prototyping of Matlab-Java Code Integration for Water Sensor Networks Applications},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Industrial applications typically necessitate the interaction of heterogeneous software components, which makes the design of an integrated system a demanding task. Specifically, although Matlab® and Java are among the most commonly used programming languages in industrial practice, with each one offering its own advantages, however, their integration for realtime code prototyping is not straightforward. Motivated by this problem, this work proposes an efficient method based on the use of sockets to integrate Matlab and Java code for designing a data processing platform tailored to smart water sensor networks scenarios. The performance of the proposed approach is evaluated on two distinct tasks, namely, the recovery of missing values and the temporal super-resolution from streaming data. 
Experimental evaluation with real pressure data reveals the superiority of our methodology, in terms of reduced execution times, when compared against two well-established alternatives, namely, the use of standalone applications using input-output files for executing Matlab code in Java-based environments and socket-based solutions implemented directly in a Matlab environment.},\n  keywords = {Java;mathematics computing;real-time systems;wireless sensor networks;industrial practice;realtime code prototyping;data processing platform;smart water sensor networks scenarios;distinct tasks;pressure data;reduced execution times;standalone applications;Java-based environments;socket-based solutions;Matlab-Java code integration;industrial applications;heterogeneous software components;integrated system;programming languages;input-output files;Matlab;Java;Servers;Sockets;Real-time systems;Software;Matlab-Java code integration;client-server model;real-time prototyping;water sensor networks},\n  doi = {10.23919/EUSIPCO.2019.8902633},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533542.pdf},\n}\n\n
\n
\n\n\n
\n Industrial applications typically necessitate the interaction of heterogeneous software components, which makes the design of an integrated system a demanding task. Specifically, although Matlab® and Java are among the most commonly used programming languages in industrial practice, each offering its own advantages, their integration for real-time code prototyping is not straightforward. Motivated by this problem, this work proposes an efficient socket-based method to integrate Matlab and Java code for designing a data processing platform tailored to smart water sensor network scenarios. The performance of the proposed approach is evaluated on two distinct tasks, namely the recovery of missing values and temporal super-resolution from streaming data. Experimental evaluation with real pressure data reveals the superiority of our methodology, in terms of reduced execution times, when compared against two well-established alternatives: standalone applications using input-output files for executing Matlab code in Java-based environments, and socket-based solutions implemented directly in a Matlab environment.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Estimating and Mitigating the Impact of Acoustic Environments on Machine-to-Machine Signalling.\n \n \n \n\n\n \n Matt, A.; and Stowell, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902634,\n  author = {A. Matt and D. Stowell},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimating and Mitigating the Impact of Acoustic Environments on Machine-to-Machine Signalling},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The advance of technology for transmitting Data-over-Sound in various IoT and telecommunication applications has led to the concept of machine-to-machine over-the-air acoustic signalling. Reverberation can have a detrimental effect on such machine-to-machine signals while decoding. Various methods have been studied to combat the effects of reverberation in speech and audio signals, but it is not clear how well they generalise to other sound types. We look at extending these models to facilitate machine-to-machine acoustic signalling. This research investigates dereverberation techniques to shortlist a single-channel reverberation suppression method through a pilot test. In order to apply the chosen dereverberation method a novel method of estimating acoustic parameters governing reverberation is proposed. The performance of the final algorithm is evaluated on quality metrics as well as the performance of a real machine-to-machine decoder. We demonstrate a dramatic reduction in error rate for both audible and ultrasonic signals.},\n  keywords = {acoustic signal processing;decoding;Internet of Things;reverberation;speech processing;audio signals;machine-to-machine acoustic signalling;single-channel reverberation suppression method;machine-to-machine decoder;audible signals;ultrasonic signals;IoT;telecommunication applications;data transmission;dereverberation techniques;speech signals;Chirp;Reverberation;Signal processing algorithms;Decoding;Frequency shift keying},\n  doi = {10.23919/EUSIPCO.2019.8902634},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The advance of technology for transmitting Data-over-Sound in various IoT and telecommunication applications has led to the concept of machine-to-machine over-the-air acoustic signalling. Reverberation can have a detrimental effect on such machine-to-machine signals during decoding. Various methods have been studied to combat the effects of reverberation in speech and audio signals, but it is not clear how well they generalise to other sound types. We look at extending these models to facilitate machine-to-machine acoustic signalling. This research investigates dereverberation techniques, shortlisting a single-channel reverberation suppression method through a pilot test. In order to apply the chosen dereverberation method, a novel method of estimating the acoustic parameters governing reverberation is proposed. The performance of the final algorithm is evaluated on quality metrics as well as on the performance of a real machine-to-machine decoder. We demonstrate a dramatic reduction in error rate for both audible and ultrasonic signals.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Memory-Optimized Voronoi Cell-based Parallel Kernels for the Shortest Vector Problem on Lattices.\n \n \n \n \n\n\n \n Cabeleira, F.; Mariano, A.; and Falcao, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Memory-OptimizedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902635,\n  author = {F. Cabeleira and A. Mariano and G. Falcao},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Memory-Optimized Voronoi Cell-based Parallel Kernels for the Shortest Vector Problem on Lattices},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper we propose a parallel implementation of a Voronoi cell-based algorithm for the Shortest Vector Problem for both CPU and GPU architectures. Additionally, we present an algorithmic simplification with particular emphasis on significantly reducing the memory usage of the implementation. According to our tests, the parallel multi-core CPU implementation scales linearly with the number of cores used, and also benefits from simultaneous multi-threading, achieving a maximum speedup of 5.56× for 8 threads. The parallel GPU implementation obtains speedups of 13.08×, compared with the sequential CPU implementation. The acceleration of this class of signal processing algorithms is a fundamental step in the evolution of post-quantum cryptanalysis. Currently, the best algorithms can take months to process for moderately low dimensions.},\n  keywords = {computational geometry;graphics processing units;multiprocessing systems;multi-threading;vectors;Voronoi cell-based algorithm;shortest vector problem;simultaneous multithreading;parallel GPU implementation;sequential CPU implementation;signal processing algorithms;memory-optimized Voronoi cell-based parallel kernels;multicore CPU implementation;post-quantum cryptanalysis;Signal processing algorithms;Lattices;Graphics processing units;Instruction sets;Cryptography;Computer architecture;Kernel;Cryptography;Voronoi;Accelerators},\n  doi = {10.23919/EUSIPCO.2019.8902635},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529174.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a parallel implementation of a Voronoi cell-based algorithm for the Shortest Vector Problem on both CPU and GPU architectures. Additionally, we present an algorithmic simplification with particular emphasis on significantly reducing the memory usage of the implementation. According to our tests, the parallel multi-core CPU implementation scales linearly with the number of cores used and also benefits from simultaneous multi-threading, achieving a maximum speedup of 5.56× for 8 threads. The parallel GPU implementation obtains speedups of 13.08× compared with the sequential CPU implementation. The acceleration of this class of signal processing algorithms is a fundamental step in the evolution of post-quantum cryptanalysis: currently, the best algorithms can take months to run, even for moderately low dimensions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Frequency Diverse Array Focusing Beampattern Synthesis With Constrained Nonlinear Programming Frequency Offsets.\n \n \n \n \n\n\n \n Cui, Y. -S.; Chen, H.; and Wang, W. -Q.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FrequencyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902636,\n  author = {Y. -S. Cui and H. Chen and W. -Q. Wang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Frequency Diverse Array Focusing Beampattern Synthesis With Constrained Nonlinear Programming Frequency Offsets},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Different phased-array only providing angle-dependent array factor, frequency diverse array (FDA) offers both angle-and range-dependent array factor. In this letter, we optimally design FDA frequency offsets by formulating a constrained nonlinear programming problem to produce range-angle focusing beampattern. The corresponding time-variance characteristics of the optimized FDA beampattern are also analyzed. The proposed method is verified by simulation results, which superiority to existing Log-FDA, Hamming-FDA and GA-FDA methods are validated.},\n  keywords = {antenna phased arrays;nonlinear programming;constrained nonlinear programming frequency offsets;providing angle-dependent array factor;range-dependent;FDA frequency offsets;constrained nonlinear programming problem;range-angle focusing beampattern;corresponding time-variance characteristics;optimized FDA beampattern;Log-FDA;Hamming-FDA;GA-FDA methods;frequency diverse array focusing beampattern synthesis;Focusing;Frequency diversity;Array signal processing;Radar;Frequency modulation;Radar antennas;Antenna arrays;Frequency diverse array (FDA);constrained nonlinear problem;time-variance;beampattern synthesis;beampattern focusing},\n  doi = {10.23919/EUSIPCO.2019.8902636},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531702.pdf},\n}\n\n
\n
\n\n\n
\n Different from a phased array, which provides only an angle-dependent array factor, a frequency diverse array (FDA) offers both an angle- and range-dependent array factor. In this letter, we optimally design FDA frequency offsets by formulating a constrained nonlinear programming problem to produce a range-angle focusing beampattern. The corresponding time-variance characteristics of the optimized FDA beampattern are also analyzed. The proposed method is verified by simulation results, which validate its superiority over the existing Log-FDA, Hamming-FDA and GA-FDA methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Segmentation of Head and Neck Tumours Using Modified U-net.\n \n \n \n \n\n\n \n Zhao, B.; Soraghan, J.; Caterina, G. D.; and Grose, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-4, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SegmentationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902637,\n  author = {B. Zhao and J. Soraghan and G. D. Caterina and D. Grose},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Segmentation of Head and Neck Tumours Using Modified U-net},\n  year = {2019},\n  pages = {1-4},\n  abstract = {A new neural network for automatic head and neck cancer (HNC) segmentation from magnetic resonance imaging (MRI) is presented. The proposed neural network is based on U-net, which combines features from different resolutions to achieve end-to-end locating and segmentation of medical images. In this work, the dilated convolution is introduced into U-net, to obtain larger receptive field so that extract multi-scale features. Also, this network uses Dice loss to reduce the imbalance between classes. The proposed algorithm is trained and tested on real MRI data. The cross-validation results show that the new network outperformed the original U-net by 5% (Dice score) on head and neck tumour segmentation.},\n  keywords = {biomedical MRI;cancer;feature extraction;image segmentation;medical image processing;neural nets;tumours;modified U-net;neural network;automatic head;magnetic resonance imaging;medical images;dilated convolution;multiscale features;MRI data;neck tumour segmentation;head cancer segmentation;medical image segmentation;Dice loss;Image segmentation;Convolution;Head;Neck;Tumors;Cancer;Magnetic resonance imaging;MRI data;Head and neck cancer;U-net;dilated convolution;semantic segmentation},\n  doi = {10.23919/EUSIPCO.2019.8902637},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533799.pdf},\n}\n\n
\n
\n\n\n
\n A new neural network for automatic head and neck cancer (HNC) segmentation from magnetic resonance imaging (MRI) is presented. The proposed neural network is based on U-net, which combines features from different resolutions to achieve end-to-end localization and segmentation of medical images. In this work, dilated convolution is introduced into U-net to obtain a larger receptive field and thereby extract multi-scale features. The network also uses Dice loss to reduce the imbalance between classes. The proposed algorithm is trained and tested on real MRI data. The cross-validation results show that the new network outperforms the original U-net by 5% (Dice score) on head and neck tumour segmentation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Free Registration Based Shape Prior for Active Contours.\n \n \n \n \n\n\n \n Sakly, I.; Mezghich, M. A.; M’hiri, S.; and Ghorbel, F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FreePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902638,\n  author = {I. sakly and M. A. Mezghich and S. M’hiri and F. Ghorbel},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Free Registration Based Shape Prior for Active Contours},\n  year = {2019},\n  pages = {1-5},\n  abstract = {A novel method of active contours with shape prior knowledge is presented in this research in order to improve its robustness for partially occluded objects. Our prior is based on a free-registration shape template estimation by using a complete and stable set of invariant descriptors. The prior template is then incorporated into the level set model, using a stopping function that updates the evolving curve only in the region of variability between the active contour and the researched template so that the computation time is reduced considerably. The proposed framework is demonstrated using both simulated and real data involving the segmentation of occluded and noisy images. Results show better robustness and stability compared to the well-known methods using statistics.},\n  keywords = {image registration;image segmentation;shape recognition;statistics;active contour;shape prior knowledge;partially occluded objects;free-registration shape template estimation;stable set;level set model;free registration based shape prior;statistics;Shape;Active contours;Level set;Image segmentation;Mathematical model;Europe;Signal processing;Invariant descriptors;active contour;free registration},\n  doi = {10.23919/EUSIPCO.2019.8902638},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533842.pdf},\n}\n\n
\n
\n\n\n
\n A novel active contour method with shape prior knowledge is presented to improve robustness to partially occluded objects. Our prior is based on a free-registration shape template estimate obtained from a complete and stable set of invariant descriptors. The prior template is then incorporated into the level set model using a stopping function that updates the evolving curve only in the region of variability between the active contour and the sought template, so that the computation time is reduced considerably. The proposed framework is demonstrated on both simulated and real data involving the segmentation of occluded and noisy images. Results show better robustness and stability compared to well-known statistics-based methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Separation of independent/dependent sources using copulas.\n \n \n \n\n\n \n Mamouni, N.; Fenniri, H.; Ghazdali, A.; Hakim, A.; and Keziou, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902639,\n  author = {N. Mamouni and H. Fenniri and A. Ghazdali and A. Hakim and A. Keziou},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Separation of independent/dependent sources using copulas},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we introduce a new convolutive blind source separation approach for independent/dependent source components. The proposed approach represents an efficient tool for separating linear convolutive mixing models, especially, when the source components are statistically dependent. Its efficiency is illustrated by some simulation results.},\n  keywords = {blind source separation;convolution;statistical analysis;linear convolutive mixing models;convolutive blind source separation approach;independent-dependent source components;statistical analysis;Blind source separation;Finite impulse response filters;Europe;Distribution functions;Sensors;Kernel;Blind source separation;Kullback-Leibler divergence;Copulas;Dependent source components},\n  doi = {10.23919/EUSIPCO.2019.8902639},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n In this paper, we introduce a new convolutive blind source separation approach for independent/dependent source components. The proposed approach is an efficient tool for separating linear convolutive mixing models, especially when the source components are statistically dependent. Its efficiency is illustrated by simulation results.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Monaural Source Separation Based on Sequentially Trained LSTMs in Real Room Environments.\n \n \n \n \n\n\n \n Li, Y.; Sun, Y.; and Naqvi, S. M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"MonauralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902640,\n  author = {Y. Li and Y. Sun and S. M. Naqvi},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Monaural Source Separation Based on Sequentially Trained LSTMs in Real Room Environments},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In recent studies on Monaural Source Separation (MSS), the long short-term memory (LSTM) network has been introduced to solve this problem, however, its performance is still limited particularly in real room environments. According to the training objectives, the LSTM-based MSS is categorized into three aspects, namely mapping, masking and signal approximation (SA) based methods. In this paper, we introduce dereverberation mask (DM) and establish a system to train two SA-LSTMs sequentially, which dereverberate speech mixture and improve the separation performance. The DM is exploited as the training target of the first LSTM. Then, the enhanced ratio mask (ERM) is proposed and set as the training target of the second LSTM. We evaluate the proposed method with the IEEE and the TIMIT datasets with real room impulse responses and noise interferences from the NOISEX dataset. 
The detailed evaluations confirm that the proposed method outperforms the state-of-the-art.},\n  keywords = {approximation theory;learning (artificial intelligence);mixture models;recurrent neural nets;reverberation;signal denoising;source separation;speech processing;transient response;Monaural Source Separation;sequentially trained LSTMs;real room environments;long short-term memory network;training objectives;LSTM-based MSS;dereverberation mask;DM;dereverberate speech mixture;separation performance;training target;enhanced ratio mask;room impulse responses;SA-LSTMs;signal approximation based methods;mapping method;masking method;TIMIT datasets;IEEE datasets;NOISEX dataset;Training;Signal to noise ratio;Feature extraction;Time-frequency analysis;Interference;Testing;Source separation;Monaura1 source separation;long short-term memory;signal approximation;dereverberation mask;enhanced ratio mask},\n  doi = {10.23919/EUSIPCO.2019.8902640},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533632.pdf},\n}\n\n
\n
\n\n\n
\n In recent studies on Monaural Source Separation (MSS), the long short-term memory (LSTM) network has been introduced to solve this problem; however, its performance is still limited, particularly in real room environments. According to the training objectives, LSTM-based MSS methods fall into three categories, namely mapping, masking and signal approximation (SA) based methods. In this paper, we introduce a dereverberation mask (DM) and establish a system that trains two SA-LSTMs sequentially, which dereverberates the speech mixture and improves the separation performance. The DM is exploited as the training target of the first LSTM. Then, the enhanced ratio mask (ERM) is proposed and set as the training target of the second LSTM. We evaluate the proposed method on the IEEE and TIMIT datasets with real room impulse responses and noise interferences from the NOISEX dataset. The detailed evaluations confirm that the proposed method outperforms the state-of-the-art.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Adaptive Multiple Importance Sampling.\n \n \n \n \n\n\n \n El-Laham, Y.; Martino, L.; Elvira, V.; and Bugallo, M. F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902642,\n  author = {Y. El-Laham and L. Martino and V. Elvira and M. F. Bugallo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Adaptive Multiple Importance Sampling},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The adaptive multiple importance sampling (AMIS) algorithm is a powerful Monte Carlo tool for Bayesian estimation in intractable models. The uniqueness of this methodology from other adaptive importance sampling (AIS) schemes is in the weighting procedure, where at each iteration of the algorithm, all samples are re-weighted according to the temporal deterministic mixture approach. This re-weighting allows for substantial variance reduction of the AMIS estimator, at the expense of an increased computational cost that grows quadratically with the number of iterations. In this paper, we propose a novel AIS methodology which obtains most of the AMIS variance reduction while improving upon its computational complexity. The proposed method implements an approximate version of the temporal deterministic mixture approach and requires substantially less computation. Advantages are shown empirically through a numerical example, where the novel method is able to attain a desired mean-squared error with much less computation.},\n  keywords = {importance sampling;iterative methods;Monte Carlo methods;AMIS variance reduction;temporal deterministic mixture approach;adaptive multiple importance sampling algorithm;powerful Monte Carlo tool;Bayesian estimation;intractable models;substantial variance reduction;AMIS estimator;AIS methodology;mean-squared error;Proposals;Artificial intelligence;Monte Carlo methods;Computational efficiency;Approximation algorithms;Standards;Probability density function},\n  doi = {10.23919/EUSIPCO.2019.8902642},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532328.pdf},\n}\n\n
\n
\n\n\n
\n The adaptive multiple importance sampling (AMIS) algorithm is a powerful Monte Carlo tool for Bayesian estimation in intractable models. What distinguishes this methodology from other adaptive importance sampling (AIS) schemes is the weighting procedure: at each iteration of the algorithm, all samples are re-weighted according to the temporal deterministic mixture approach. This re-weighting allows for substantial variance reduction of the AMIS estimator, at the expense of an increased computational cost that grows quadratically with the number of iterations. In this paper, we propose a novel AIS methodology which obtains most of the AMIS variance reduction while improving upon its computational complexity. The proposed method implements an approximate version of the temporal deterministic mixture approach and requires substantially less computation. The advantages are shown empirically through a numerical example, where the novel method attains a desired mean-squared error with much less computation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n HMM-based Convolutional LSTM for Visual Scanpath Prediction.\n \n \n \n \n\n\n \n Verma, A.; and Sen, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"HMM-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902643,\n  author = {A. Verma and D. Sen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {HMM-based Convolutional LSTM for Visual Scanpath Prediction},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The human visual system performs a dynamic process of scanning the scene by rapid eye movements and fixations, yielding a visual scanpath. We propose an approach to generate artificial visual scanpaths for natural images. A convolutional long short term memory (LSTM) neural network is employed, which learns the mapping of image features to eye fixations by modeling the sequential dependencies of the fixations in a scanpath. A novel approach of hidden Markov model (HMM) based data augmentation is presented that increases the number of available image-specific input-output pairs to train the LSTM appropriately. Both the HMM and the LSTM are designed to be consistent with existing knowledge on saccadic eye movements. Experimental results on a standard eye-tracking dataset demonstrate that the proposed approach does better than the state-of-the-art and generates realistic visual scanpath data.},\n  keywords = {biomechanics;convolutional neural nets;eye;feature extraction;hidden Markov models;learning (artificial intelligence);recurrent neural nets;rapid eye movements;artificial visual scanpaths;natural images;convolutional long short term memory neural network;image features;eye fixations;sequential dependencies;hidden Markov model based data augmentation;saccadic eye movements;standard eye-tracking dataset;realistic visual scanpath data;HMM-based convolutional LSTM;visual scanpath prediction;human visual system;dynamic process;image-specific input-output pairs;Hidden Markov models;Visualization;Feature extraction;Convolution;Training;Mathematical model;Visual scanpath prediction;eye tracking;fixations;saccades;Convolutional LSTM},\n  doi = {10.23919/EUSIPCO.2019.8902643},\n  issn = {2076-1465},\n  month = 
{Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533776.pdf},\n}\n\n
\n
\n\n\n
\n The human visual system scans a scene through a dynamic process of rapid eye movements and fixations, yielding a visual scanpath. We propose an approach to generate artificial visual scanpaths for natural images. A convolutional long short-term memory (LSTM) neural network is employed, which learns the mapping of image features to eye fixations by modeling the sequential dependencies of the fixations in a scanpath. A novel hidden Markov model (HMM) based data augmentation approach is presented that increases the number of available image-specific input-output pairs to train the LSTM appropriately. Both the HMM and the LSTM are designed to be consistent with existing knowledge on saccadic eye movements. Experimental results on a standard eye-tracking dataset demonstrate that the proposed approach outperforms the state-of-the-art and generates realistic visual scanpath data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hue Modification Localization By Pair Matching.\n \n \n \n \n\n\n \n Phan, Q. -T.; Vascotto, M.; and Boato, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"HuePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902645,\n  author = {Q. -T. Phan and M. Vascotto and G. Boato},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Hue Modification Localization By Pair Matching},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Hue modification is the adjustment of hue property on color images. Conducting hue modification on an image is trivial, and it can be abused to falsify opinions of viewers. Since shapes, edges or textural information remains unchanged after hue modification, this type of manipulation is relatively hard to be detected and localized. Based on the fact that small patches inherit the same Color Filter Array (CFA) configuration and demosaicing, any distortion made by local hue modification can be detected by patch matching within the same image. In this paper, we propose to localize hue modification by means of a Siamese neural network specifically designed for matching two inputs. By crafting the network outputs, we are able to form a heatmap which potentially highlights malicious regions. Our proposed method deals well not only with uncompressed images but also with the presence of JPEG compression, an operation usually hindering the exploitation of CFA and demosaicing artifacts. Experimental evidences corroborate the effectiveness of the proposed method.},\n  keywords = {data compression;filtering theory;image coding;image colour analysis;image matching;image segmentation;image texture;neural nets;optical filters;demosaicing;hue property;hue modification localization;local hue modification;Image color analysis;Heating systems;Transform coding;Image coding;Feature extraction;Training;Cameras;Hue modification;patch matching;Siamese network},\n  doi = {10.23919/EUSIPCO.2019.8902645},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533734.pdf},\n}\n\n
\n
\n\n\n
\n Hue modification is the adjustment of the hue property of color images. Conducting hue modification on an image is trivial, and it can be abused to mislead viewers. Since shapes, edges and textural information remain unchanged after hue modification, this type of manipulation is relatively hard to detect and localize. Based on the fact that small patches inherit the same Color Filter Array (CFA) configuration and demosaicing, any distortion made by local hue modification can be detected by patch matching within the same image. In this paper, we propose to localize hue modification by means of a Siamese neural network specifically designed for matching two inputs. By crafting the network outputs, we are able to form a heatmap which highlights potentially malicious regions. Our proposed method deals well not only with uncompressed images but also with JPEG compression, an operation that usually hinders the exploitation of CFA and demosaicing artifacts. Experimental evidence corroborates the effectiveness of the proposed method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Non-Intrusive POLQA Estimation of Speech Quality using Recurrent Neural Networks.\n \n \n \n \n\n\n \n Sharma, D.; Hogg, A. O. T.; Wang, Y.; Nour-Eldin, A.; and Naylor, P. A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Non-IntrusivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902646,\n  author = {D. Sharma and A. O. T. Hogg and Y. Wang and A. Nour-Eldin and P. A. Naylor},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Non-Intrusive POLQA Estimation of Speech Quality using Recurrent Neural Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Estimating the quality of speech without the use of a clean reference signal is a challenging problem, in part due to the time and expense required to collect sufficient training data for modern machine learning algorithms. We present a novel, non-intrusive estimator that exploits recurrent neural network architectures to predict the intrusive POLQA score of a speech signal in a short time context. The predictor is based on a novel compressed representation of modulation domain features, used in conjunction with static MFCC features. We show that the proposed method can reliably predict POLQA with a 300 ms context, achieving a mean absolute error of 0.21 on unseen data. The proposed method is trained using English speech and is shown to generalize well across unseen languages. 
The neural network also jointly estimates the mean voice activity detection (VAD) with an F1 accuracy score of 0.9, removing the need for an external VAD.},\n  keywords = {learning (artificial intelligence);recurrent neural nets;signal representation;speech recognition;voice activity detection;machine learning algorithms;nonintrusive estimator;recurrent neural network architectures;intrusive POLQA score;speech signal;compressed representation;modulation domain features;static MFCC features;English speech;nonintrusive POLQA estimation;speech quality;recurrent neural networks;mean voice activity detection;Training data;Feature extraction;Frequency modulation;Estimation;Speech processing;Mel frequency cepstral coefficient;speech quality estimation;POLQA estimation;deep neural networks},\n  doi = {10.23919/EUSIPCO.2019.8902646},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528486.pdf},\n}\n\n
\n
\n\n\n
\n Estimating the quality of speech without the use of a clean reference signal is a challenging problem, in part due to the time and expense required to collect sufficient training data for modern machine learning algorithms. We present a novel, non-intrusive estimator that exploits recurrent neural network architectures to predict the intrusive POLQA score of a speech signal in a short time context. The predictor is based on a novel compressed representation of modulation domain features, used in conjunction with static MFCC features. We show that the proposed method can reliably predict POLQA with a 300 ms context, achieving a mean absolute error of 0.21 on unseen data. The proposed method is trained using English speech and is shown to generalize well across unseen languages. The neural network also jointly estimates the mean voice activity detection (VAD) with an F1 accuracy score of 0.9, removing the need for an external VAD.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Intersymbol and Intercarrier Interference in OFDM Transmissions Through Highly Dispersive Channels.\n \n \n \n \n\n\n \n Martins, W. A.; Cruz–Roldán, F.; Moonen, M.; and Ramirez Diniz, P. S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"IntersymbolPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902648,\n  author = {W. A. Martins and F. Cruz–Roldán and M. Moonen and P. S. {Ramirez Diniz}},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Intersymbol and Intercarrier Interference in OFDM Transmissions Through Highly Dispersive Channels},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This work quantifies intersymbol and intercarrier interference induced by very dispersive channels in OFDM systems. The resulting achievable data rate for suboptimal OFDM transmissions is derived based on the computation of the actual signal-to-interference-plus-noise ratio for arbitrary length finite duration channel impulse responses. Simulation results point to significant differences between data rates obtained via conventional formulations, for which interference is supposed to be limited to two or three blocks, versus the data rates considering the actual channel dispersion.},\n  keywords = {dispersive channels;intercarrier interference;intersymbol interference;OFDM modulation;transient response;wireless channels;data rates;channel dispersion;highly dispersive channels;intersymbol interference;intercarrier interference;OFDM systems;achievable data rate;suboptimal OFDM transmissions;actual signal-to-interference-plus-noise ratio;channel impulse responses;Interference;OFDM;Signal to noise ratio;Transceivers;Dispersion;Equalizers;Delays;Orthogonal frequency-division multiplexing;highly dispersive channels;intersymbol interference;intercarrier interference;cyclic prefix;zero padding},\n  doi = {10.23919/EUSIPCO.2019.8902648},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570525607.pdf},\n}\n\n
\n
\n\n\n
\n This work quantifies the intersymbol and intercarrier interference induced by very dispersive channels in OFDM systems. The resulting achievable data rate for suboptimal OFDM transmissions is derived based on the computation of the actual signal-to-interference-plus-noise ratio for finite-duration channel impulse responses of arbitrary length. Simulation results point to significant differences between the data rates obtained via conventional formulations, in which interference is assumed to be limited to two or three blocks, and the data rates obtained considering the actual channel dispersion.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n New algorithms on Complex Joint Eigenvalue Decomposition Based on Generalized Givens Rotations.\n \n \n \n \n\n\n \n Mesloub, A.; Belouchrani, A.; and Abed-Meraim, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"NewPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902649,\n  author = {A. Mesloub and A. Belouchrani and K. Abed-Meraim},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {New algorithms on Complex Joint Eigenvalue Decomposition Based on Generalized Givens Rotations},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, new joint eigenvalue decomposition (JEVD) methods are developed by considering generalized Givens rotations. These algorithms deal with a set of square complex matrices sharing a same eigen-structure. Several existing methods, using or not generalized Givens rotations, have treated the aforementioned problem. To improve the JEVD solutions, we developed two methods, the first one is numerically stable and efficient but relatively expensive. The second one is developed by considering some justified approximations. Simulation results are provided to highlight the effectiveness and behaviour of the proposed techniques for different scenarios.},\n  keywords = {eigenvalues and eigenfunctions;matrix algebra;complex joint eigenvalue decomposition;generalized Givens rotations;joint eigenvalue decomposition methods;square complex matrices;JEVD methods;eigen-structure;JEVD solutions;Signal processing algorithms;Matrix decomposition;Eigenvalues and eigenfunctions;Approximation algorithms;Europe;Signal processing;Transforms;Complex Joint EigenValue Decomposition (JEVD);Complex Efficient and Stable Joint eigenvalue Decomposition algorithm (CESJD);generalized Givens rotations;exact JEVD;approximative JEVD},\n  doi = {10.23919/EUSIPCO.2019.8902649},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528085.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, new joint eigenvalue decomposition (JEVD) methods are developed based on generalized Givens rotations. These algorithms deal with a set of square complex matrices sharing the same eigen-structure. Several existing methods, whether or not they use generalized Givens rotations, have addressed this problem. To improve on existing JEVD solutions, we develop two methods: the first is numerically stable and efficient but relatively expensive; the second is derived by making some justified approximations. Simulation results are provided to highlight the effectiveness and behaviour of the proposed techniques in different scenarios.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Refined WaveNet Vocoder for Variational Autoencoder Based Voice Conversion.\n \n \n \n \n\n\n \n Huang, W. -C.; Wu, Y. -C.; Hwang, H. -T.; Tobing, P. L.; Hayashi, T.; Kobayashi, K.; Toda, T.; Tsao, Y.; and Wang, H. -M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RefinedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902651,\n  author = {W. -C. Huang and Y. -C. Wu and H. -T. Hwang and P. L. Tobing and T. Hayashi and K. Kobayashi and T. Toda and Y. Tsao and H. -M. Wang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Refined WaveNet Vocoder for Variational Autoencoder Based Voice Conversion},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper presents a refinement framework of WaveNet vocoders for variational autoencoder (VAE) based voice conversion (VC), which reduces the quality distortion caused by the mismatch between the training data and testing data. Conventional WaveNet vocoders are trained with natural acoustic features but conditioned on the converted features in the conversion stage for VC, and such a mismatch often causes significant quality and similarity degradation. In this work, we take advantage of the particular structure of VAEs to refine WaveNet vocoders with the self-reconstructed features generated by VAE, which are of similar characteristics with the converted features while having the same temporal structure with the target natural features. We analyze these features and show that the self-reconstructed features are similar to the converted features. Objective and subjective experimental results demonstrate the effectiveness of our proposed framework.},\n  keywords = {feature extraction;learning (artificial intelligence);speech coding;vocoders;WaveNet vocoder;self-reconstructed features;VAE;converted features;target natural features;refinement framework;variational autoencoder based voice conversion;quality distortion;training data;testing data;conventional WaveNet vocoders;natural acoustic features;conversion stage;significant quality;similarity degradation;Vocoders;Training;Training data;Feature extraction;Europe;Signal processing;Acoustics;voice conversion;variational autoencoder;WaveNet vocoder;speaker adaptation},\n  doi = {10.23919/EUSIPCO.2019.8902651},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530389.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a refinement framework for WaveNet vocoders in variational autoencoder (VAE) based voice conversion (VC), which reduces the quality distortion caused by the mismatch between training data and testing data. Conventional WaveNet vocoders are trained on natural acoustic features but conditioned on the converted features in the conversion stage of VC, and this mismatch often causes significant quality and similarity degradation. In this work, we take advantage of the particular structure of VAEs to refine WaveNet vocoders with the self-reconstructed features generated by the VAE, which have characteristics similar to the converted features while sharing the temporal structure of the target natural features. We analyze these features and show that the self-reconstructed features are indeed similar to the converted features. Objective and subjective experimental results demonstrate the effectiveness of the proposed framework.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fully Adaptive Savitzky-Golay Type Smoothers.\n \n \n \n \n\n\n \n Niedźwiecki, M.; and Ciołek, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FullyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902652,\n  author = {M. Niedźwiecki and M. Ciołek},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Fully Adaptive Savitzky-Golay Type Smoothers},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The problem of adaptive signal smoothing is considered and solved using the weighted basis function approach. In the special case of polynomial basis and uniform weighting the proposed method reduces down to the celebrated Savitzky-Golay smoother. Data adaptiveness is achieved via parallel estimation. It is shown that for the polynomial and harmonic bases and cosinusoidal weighting sequences, the competing signal estimates can be computed in both time-recursive and order-recursive way.},\n  keywords = {adaptive signal processing;polynomials;signal resolution;smoothing methods;adaptive Savitzky-Golay type smoothers;adaptive signal smoothing;weighted basis function approach;polynomial basis;celebrated Savitzky-Golay smoother;data adaptiveness;parallel estimation;cosinusoidal weighting sequences;signal estimates;harmonic bases;Estimation;Noise measurement;Smoothing methods;Microsoft Windows;Electrocardiography;Harmonic analysis;Bandwidth;signal denoising;adaptive selection of estimation bandwidth and model order;Savitzky-Golay smoothers},\n  doi = {10.23919/EUSIPCO.2019.8902652},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531404.pdf},\n}\n\n
\n
\n\n\n
\n The problem of adaptive signal smoothing is considered and solved using the weighted basis function approach. In the special case of a polynomial basis and uniform weighting, the proposed method reduces to the celebrated Savitzky-Golay smoother. Data adaptiveness is achieved via parallel estimation. It is shown that for polynomial and harmonic bases with cosinusoidal weighting sequences, the competing signal estimates can be computed in both a time-recursive and an order-recursive way.\n
\n\n\n
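As a point of reference for the abstract above, the classical Savitzky-Golay smoother that the proposed method reduces to (polynomial basis, uniform weighting) can be sketched in a few lines of NumPy. This is an illustrative baseline only, not the authors' adaptive implementation; the function names are ours.

```python
import numpy as np

def savgol_coeffs(window, polyorder):
    # Least-squares fit of a degree-`polyorder` polynomial over a centred
    # window; the smoother output is the fitted value at the window centre,
    # i.e. the constant coefficient of the fit.
    half = window // 2
    t = np.arange(-half, half + 1)
    A = np.vander(t, polyorder + 1, increasing=True)  # columns: 1, t, t^2, ...
    return np.linalg.pinv(A)[0]  # row giving the value of the fit at t = 0

def savgol_smooth(y, window=11, polyorder=3):
    # Apply the fixed convolution coefficients over the interior samples;
    # edge samples are left unchanged in this minimal sketch.
    c = savgol_coeffs(window, polyorder)
    half = window // 2
    out = y.astype(float)
    for i in range(half, len(y) - half):
        out[i] = c @ y[i - half:i + half + 1]
    return out
```

A useful sanity check: for input that is itself a polynomial of degree at most `polyorder`, the smoother reproduces the interior samples exactly, which is the defining property of Savitzky-Golay filtering.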
\n\n\n
\n \n\n \n \n \n \n \n \n Loss functions for denoising compressed images: a comparative study.\n \n \n \n \n\n\n \n Oberlin, T.; Malgouyres, F.; and Wu, J. -.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"LossPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902653,\n  author = {T. Oberlin and F. Malgouyres and J. -Y. Wu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Loss functions for denoising compressed images: a comparative study},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper faces the problem of denoising compressed images, obtained through a quantization in a known basis. The denoising is formulated as a variational inverse problem regularized by total variation, the emphasis being placed on the data-fidelity term which measures the distance between the noisy observation and the reconstruction. The paper introduces two new loss functions to jointly denoise and dequantize the corrupted image, which fully exploit the knowledge about the compression process, i.e., the transform and the quantization steps. Several numerical experiments demonstrate the effectiveness of the proposed loss functions and compare their performance with two more classical ones.},\n  keywords = {data compression;image coding;image denoising;image reconstruction;inverse problems;quantisation (signal);wavelet transforms;loss functions;known basis;variational inverse problem;total variation;data-fidelity term;compression process;compressed image denoising;Image coding;Noise reduction;Transform coding;Signal to noise ratio;Quantization (signal);Noise measurement;Gaussian noise;Image denoising;Image decompression;Compression artifacts;Bayesian denoising},\n  doi = {10.23919/EUSIPCO.2019.8902653},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532732.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of denoising compressed images obtained through quantization in a known basis. The denoising is formulated as a variational inverse problem regularized by total variation, with emphasis placed on the data-fidelity term, which measures the distance between the noisy observation and the reconstruction. The paper introduces two new loss functions to jointly denoise and dequantize the corrupted image, which fully exploit knowledge of the compression process, i.e., the transform and the quantization steps. Several numerical experiments demonstrate the effectiveness of the proposed loss functions and compare their performance with two more classical ones.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust Compressive Spectral Image Recovery Algorithm Using Dictionary Learning and Transform Tensor SVD.\n \n \n \n \n\n\n \n Fonseca, Y.; Gelvez, T.; and Fuentes, H. A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902654,\n  author = {Y. Fonseca and T. Gelvez and H. A. Fuentes},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Robust Compressive Spectral Image Recovery Algorithm Using Dictionary Learning and Transform Tensor SVD},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper proposes a low-rank tensor minimization algorithm to recover a spectral image (SI) from a set of compressed observations. The proposal takes advantage of the transform tensor singular value decomposition (tt-SVD) to promote a low-rank structure on the recovered SI. The methodology has three stages. First, a poor low-rank version of the SI is estimated using the tt-SVD framework with the discrete cosine transform (DCT). Then, an orthogonal transform is learned from the initial estimation using dictionary learning. Finally, an algorithm to find a low-rank approximation of the SI in both, the DCT and the learned transform is introduced. Quantitative evaluation over two databases and two compressive optical systems shows that the proposed method improves the reconstruction quality in up to 10dB as well as it is robust in the presence of noise.},\n  keywords = {approximation theory;discrete cosine transforms;image reconstruction;singular value decomposition;tensors;dictionary learning;low-rank approximation;DCT;robust compressive spectral image recovery algorithm;low-rank tensor minimization algorithm;transform tensor singular value decomposition;tt-SVD framework;compressive optical systems;discrete cosine transform;Tensors;Discrete cosine transforms;Estimation;Inverse problems;Machine learning;Image coding;Compressive spectral imaging;Transform tensor singular value decomposition;Dictionary learning},\n  doi = {10.23919/EUSIPCO.2019.8902654},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533646.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a low-rank tensor minimization algorithm to recover a spectral image (SI) from a set of compressed observations. The proposal takes advantage of the transform tensor singular value decomposition (tt-SVD) to promote a low-rank structure on the recovered SI. The methodology has three stages. First, a rough low-rank version of the SI is estimated using the tt-SVD framework with the discrete cosine transform (DCT). Then, an orthogonal transform is learned from the initial estimate using dictionary learning. Finally, an algorithm is introduced to find a low-rank approximation of the SI in both the DCT and the learned transform. Quantitative evaluation over two databases and two compressive optical systems shows that the proposed method improves reconstruction quality by up to 10 dB and is robust in the presence of noise.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Monaural Speech Separation with Deep Learning Using Phase Modelling and Capsule Networks.\n \n \n \n \n\n\n \n Staines, T.; Weyde, T.; and Galkin, O.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"MonauralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902655,\n  author = {T. Staines and T. Weyde and O. Galkin},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Monaural Speech Separation with Deep Learning Using Phase Modelling and Capsule Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The removal of background noise from speech audio is a problem with high practical relevance. A variety of deep learning approaches have been applied to it in recent years, most of which operate on a magnitude spectrogram representation of a noisy recording to estimate the isolated speaking voice. This work investigates ways to include phase information, which is commonly discarded, firstly within a convolutional neural network (CNN) architecture, and secondly by applying capsule networks, to our knowledge the first time capsules have been used in source separation. We present a Circular Loss function, which takes into account the periodic nature of phase. Our results show that the inclusion of phase information leads to an improvement in the quality of speech separation. We also find that in our experiments convolutional neural networks outperform capsule networks at speech separation.},\n  keywords = {audio signal processing;convolutional neural nets;learning (artificial intelligence);neural net architecture;source separation;speech processing;monaural speech separation;phase modelling;capsule networks;background noise;speech audio;deep learning approaches;magnitude spectrogram representation;noisy recording;isolated speaking voice;phase information;convolutional neural network architecture;time capsules;source separation;CNN architecture;circular loss function;Spectrogram;Noise measurement;Computational modeling;Source separation;Standards;Convolution;Training;Speech Separation;Speech Enhancement;Capsules;Phase;Convolutional Neural Networks},\n  doi = {10.23919/EUSIPCO.2019.8902655},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533848.pdf},\n}\n\n
\n
\n\n\n
\n The removal of background noise from speech audio is a problem with high practical relevance. A variety of deep learning approaches have been applied to it in recent years, most of which operate on a magnitude spectrogram representation of a noisy recording to estimate the isolated speaking voice. This work investigates ways to include phase information, which is commonly discarded, firstly within a convolutional neural network (CNN) architecture, and secondly by applying capsule networks, to our knowledge the first time capsules have been used in source separation. We present a Circular Loss function, which takes into account the periodic nature of phase. Our results show that the inclusion of phase information leads to an improvement in the quality of speech separation. We also find that in our experiments convolutional neural networks outperform capsule networks at speech separation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Atomic Norms In Group Sliding Sparse Recovery.\n \n \n \n \n\n\n \n Sanchez, C. B.; Gregoratti, D.; and Mestre, X.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AtomicPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902657,\n  author = {C. B. Sanchez and D. Gregoratti and X. Mestre},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Atomic Norms In Group Sliding Sparse Recovery},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper tackles a compressed sensing problem with the unknown signal showing a flexible block sparsity structure, where flexible means that blocks of nonzero elements have no predetermined positions and only their minimum length is known. By capitalizing on the Minkowsky functional, the related support recovery problem is written in terms of a new vector norm that outperforms the classic l1 norm in describing the considered sparsity structure. Also, the minimum number of measurements that are needed for perfect reconstruction is estimated by the Gaussian-width analysis of the new norm.},\n  keywords = {compressed sensing;Gaussian processes;signal reconstruction;vectors;atomic norms;group sliding sparse recovery;compressed sensing problem;unknown signal;flexible block sparsity structure;nonzero elements;predetermined positions;minimum length;Minkowsky functional;support recovery problem;vector norm;sparsity structure;Atomic measurements;Europe;Signal processing;Noise measurement;Convex functions;Compressed sensing;Sparse matrices},\n  doi = {10.23919/EUSIPCO.2019.8902657},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533382.pdf},\n}\n\n
\n
\n\n\n
\n This paper tackles a compressed sensing problem in which the unknown signal exhibits a flexible block sparsity structure, where flexible means that the blocks of nonzero elements have no predetermined positions and only their minimum length is known. By capitalizing on the Minkowski functional, the related support recovery problem is written in terms of a new vector norm that outperforms the classic l1 norm in describing the considered sparsity structure. The minimum number of measurements needed for perfect reconstruction is also estimated via a Gaussian-width analysis of the new norm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The Art of Teaching Computers: The SIMSSA Optical Music Recognition Workflow System.\n \n \n \n \n\n\n \n Fujinaga, I.; and Vigliensoni, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902658,\n  author = {I. Fujinaga and G. Vigliensoni},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {The Art of Teaching Computers: The SIMSSA Optical Music Recognition Workflow System},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In many machine learning systems, it would be effective to create a pedagogical environment where both the machines and the humans can incrementally learn to solve problems through interaction and adaptation. We are designing an optical music recognition workflow system within the SIMSSA (Single Interface for Music Score Searching and Analysis) project, where human operators/teachers can intervene to correct and teach the system at certain stages in the optical music recognition process so that both parties can learn from the errors and, consequently, the overall performance is increased progressively as more music scores are processed. In this environment, the humans are learning how to teach the machine more effectively.},\n  keywords = {computer aided instruction;learning (artificial intelligence);music;machine learning systems;pedagogical environment;optical music recognition process;music scores;computer teaching;SIMSSA optical music recognition workflow system;single interface for music score searching and analysis;optical music recognition;machine learning;machine pedagogy},\n  doi = {10.23919/EUSIPCO.2019.8902658},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533142.pdf},\n}\n\n
\n
\n\n\n
\n In many machine learning systems, it would be effective to create a pedagogical environment where both the machines and the humans can incrementally learn to solve problems through interaction and adaptation. We are designing an optical music recognition workflow system within the SIMSSA (Single Interface for Music Score Searching and Analysis) project, where human operators/teachers can intervene to correct and teach the system at certain stages in the optical music recognition process so that both parties can learn from the errors and, consequently, the overall performance is increased progressively as more music scores are processed. In this environment, the humans are learning how to teach the machine more effectively.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n fMRI BOLD signal decomposition using a multivariate low-rank model.\n \n \n \n \n\n\n \n Cherkaoui, H.; Moreau, T.; Halimi, A.; and Ciuciu, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"fMRIPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902660,\n  author = {H. Cherkaoui and T. Moreau and A. Halimi and P. Ciuciu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {fMRI BOLD signal decomposition using a multivariate low-rank model},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Standard methodologies for functional Magnetic Resonance Imaging (fMRI) data analysis decompose the observed Blood Oxygenation Level Dependent (BOLD) signals using voxel-wise linear model and perform maximum likelihood estimation to get the parameters associated with the regressors. In task fMRI, the latter are usually defined from the experimental paradigm and some confounds whereas in resting-state acquisitions, a seed-voxel time-course may be used as predictor. Nowadays, most fMRI datasets offer resting-state acquisitions, requiring multivariate approaches (e.g., PCA, ICA, etc) to extract meaningful information in a data-driven manner. Here, we propose a novel low-rank model of fMRI BOLD data but instead of considering a dimension reduction in space as in ICA, our model relies on convolutional sparse coding between the hemodynamic system and a few temporal atoms which code for the neural activity inducing signals. A rank -1 constraint is also associated with each temporal atom to spatially map its influence in the brain. Within a variational framework, the joint estimation of the neural signals and the associated spatial maps is formulated as a nonconvex optimization problem. A local minimizer is computed using an efficient alternate minimization algorithm. The proposed approach is first validated on simulations and then applied to task fMRI data for illustration purpose. Its comparison to a state-of-the-art approach suggests that our method is competitive regarding the uncovered neural fingerprints while offering a richer decomposition in time and space.},\n  keywords = {biomedical MRI;blood;brain;data analysis;haemodynamics;independent component analysis;maximum likelihood estimation;medical image processing;neurophysiology;principal component analysis;fMRI BOLD signal decomposition;standard methodologies;functional Magnetic Resonance Imaging data analysis;observed Blood;Level Dependent signals;voxel-wise linear model;maximum likelihood estimation;task fMRI;experimental paradigm;resting-state acquisitions;seed-voxel time-course;fMRI datasets;multivariate approaches;ICA;data-driven manner;novel low-rank model;fMRI BOLD data;convolutional sparse coding;temporal atoms which code;neural activity inducing signals;rank -1 constraint;temporal atom;spatially map its influence;joint estimation;neural signals;associated spatial maps;Functional magnetic resonance imaging;Data models;Task analysis;Neural activity;Brain modeling;Convolution;Time series analysis},\n  doi = {10.23919/EUSIPCO.2019.8902660},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531294.pdf},\n}\n\n
\n
\n\n\n
\n Standard methodologies for functional Magnetic Resonance Imaging (fMRI) data analysis decompose the observed Blood Oxygenation Level Dependent (BOLD) signals using a voxel-wise linear model and perform maximum likelihood estimation to obtain the parameters associated with the regressors. In task fMRI, the latter are usually defined from the experimental paradigm and some confounds, whereas in resting-state acquisitions, a seed-voxel time course may be used as a predictor. Nowadays, most fMRI datasets offer resting-state acquisitions, requiring multivariate approaches (e.g., PCA, ICA) to extract meaningful information in a data-driven manner. Here, we propose a novel low-rank model of fMRI BOLD data; instead of considering a dimension reduction in space as in ICA, our model relies on convolutional sparse coding between the hemodynamic system and a few temporal atoms that code for the neural-activity-inducing signals. A rank-1 constraint is also associated with each temporal atom to spatially map its influence in the brain. Within a variational framework, the joint estimation of the neural signals and the associated spatial maps is formulated as a nonconvex optimization problem. A local minimizer is computed using an efficient alternating minimization algorithm. The proposed approach is first validated on simulations and then applied to task fMRI data for illustration purposes. Its comparison to a state-of-the-art approach suggests that our method is competitive regarding the uncovered neural fingerprints while offering a richer decomposition in time and space.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Gauss-Hermite Quadrature for non-Gaussian Inference via an Importance Sampling Interpretation.\n \n \n \n \n\n\n \n Elvira, V.; Closas, P.; and Martino, L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Gauss-HermitePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902662,\n  author = {V. Elvira and P. Closas and L. Martino},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Gauss-Hermite Quadrature for non-Gaussian Inference via an Importance Sampling Interpretation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Intractable integrals appear in a plethora of problems in science and engineering. Very often, such integrals involve also a targeted distribution which is not even available in a closed form. In both cases, approximations of the integrals must be performed. Monte Carlo (MC) methods are a usual way of tackling the problem by approximating the integral with random samples. Quadrature methods are another alternative, where the integral is approximated with deterministic points and weights. However, the choice of these points and weights is only possible in a selected number of families of distributions. In this paper, we propose a deterministic method inspired in MC for approximating generic integrals. Our method is derived via an importance sampling (IS) interpretation, a MC methodology where the samples are simulated from the so-called proposal density, and weighted properly. We use Gauss-Hermite quadrature rules for Gaussian distributions, transforming them for approximating integrals with respect to generic distributions, even in the case where its normalizing constant is unknown. The novel method allows the use of several proposal distributions, allowing for the incorporation of recent advances in the multiple IS (MIS) literature. We discuss the convergence of the method, and we illustrate its performance with two numerical examples.},\n  keywords = {approximation theory;Gaussian distribution;Gaussian processes;importance sampling;Monte Carlo methods;deterministic points;deterministic method;generic integrals;MC methodology;Gauss-Hermite quadrature rules;Gaussian distributions;generic distributions;intractable integrals;targeted distribution;Monte Carlo methods;random samples;quadrature methods;sampling interpretation;Proposals;Monte Carlo methods;Gaussian distribution;Signal processing algorithms;Signal processing;Probability density function;Standards;Gauss-Hermite quadrature;importance sampling;Monte Carlo;Bayesian inference},\n  doi = {10.23919/EUSIPCO.2019.8902662},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533856.pdf},\n}\n\n
\n
\n\n\n
\n Intractable integrals appear in a plethora of problems in science and engineering. Very often, such integrals also involve a target distribution that is not even available in closed form. In both cases, approximations of the integrals must be performed. Monte Carlo (MC) methods are a usual way of tackling the problem, approximating the integral with random samples. Quadrature methods are another alternative, where the integral is approximated with deterministic points and weights. However, the choice of these points and weights is only possible for a selected number of families of distributions. In this paper, we propose a deterministic method inspired by MC for approximating generic integrals. Our method is derived via an importance sampling (IS) interpretation, an MC methodology where the samples are simulated from a so-called proposal density and weighted properly. We use Gauss-Hermite quadrature rules for Gaussian distributions, transforming them to approximate integrals with respect to generic distributions, even when the normalizing constant is unknown. The novel method allows the use of several proposal distributions, incorporating recent advances from the multiple IS (MIS) literature. We discuss the convergence of the method and illustrate its performance with two numerical examples.\n
\n\n\n
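The importance-sampling view of Gauss-Hermite quadrature described in the abstract above can be sketched briefly: place the quadrature nodes through a Gaussian proposal, reweight each node by the (possibly unnormalized) target-to-proposal ratio, and self-normalize so the unknown normalizing constant cancels. This is a minimal one-dimensional illustration of that interpretation, not the authors' full multiple-proposal scheme; the function name is ours.

```python
import numpy as np

def gh_is_expectation(f, log_p_unnorm, mu, sigma, n=20):
    """Approximate E_p[f] for a (possibly unnormalized) target p using
    Gauss-Hermite nodes placed via a Gaussian proposal N(mu, sigma^2)."""
    t, w = np.polynomial.hermite.hermgauss(n)   # nodes/weights for weight e^{-t^2}
    x = mu + np.sqrt(2.0) * sigma * t           # transform nodes to proposal scale
    log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
    # Quadrature weight times importance ratio p(x)/q(x), kept in log scale
    log_a = np.log(w / np.sqrt(np.pi)) + log_p_unnorm(x) - log_q
    a = np.exp(log_a - log_a.max())             # self-normalization absorbs Z
    return float(np.sum(a * f(x)) / np.sum(a))
```

For instance, with the unnormalized standard normal target log p(x) = -x^2/2 and a deliberately mismatched proposal N(0.5, 1.2^2), the estimated mean is close to 0 and the second moment close to 1, despite the normalizing constant never being supplied.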
\n\n\n
\n \n\n \n \n \n \n \n \n Integration of High Speed Train Channel Measurements in System Level Simulations.\n \n \n \n \n\n\n \n Müller, M. K.; Domınguez-Bolaño, T.; García-Naya, J. A.; Castedo, L.; and Rupp, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"IntegrationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902663,\n  author = {M. K. Müller and T. Domınguez-Bolaño and J. A. García-Naya and L. Castedo and M. Rupp},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Integration of High Speed Train Channel Measurements in System Level Simulations},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The trade-off when simulating wireless cellular networks is mainly between accuracy on the one side and simulation time and supported maximum network size on the other side. A common approach in system level (SL) simulations is to utilize various abstractions that also include employing propagation models, which can be parameterized by only a small number of values. When investigating high speed train (HST) scenarios, the distinct effects of this particular environment have to be reflected in the chosen models. In order to verify the validity of SL simulations in this context, we incorporate the results of channel measurements, obtained in a real-world train transmission scenario, in our SL simulations and compare the obtained results to those of post-processed results obtained directly through the measurement data. We show that despite of the various abstraction steps in SL simulations, average throughput results are in good accordance and general trends are preserved. Additionally, we take advantage of the more flexible SL simulations and investigate the influence of feedback delay and jitter. In a last step, we then discuss the advantages of introducing a channel quality indicator (CQI) feedback backoff, in order to improve the reliability of the connection in this context.},\n  keywords = {cellular radio;railway communication;telecommunication network reliability;wireless channels;high speed train channel measurements;system level simulations;wireless cellular networks;simulation time;maximum network size;real-world train transmission scenario;flexible SL simulations;feedback delay;channel quality indicator;CQI feedback backoff;reliability improvement;Signal to noise ratio;Throughput;Antenna measurements;Delays;Long Term Evolution;Wireless communication;Jitter;High speed trains;channel measurements;system level simulations;railway communications;wireless channels},\n  doi = {10.23919/EUSIPCO.2019.8902663},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570526682.pdf},\n}\n\n
\n
\n\n\n
\n The trade-off when simulating wireless cellular networks is mainly between accuracy on one side and simulation time and the maximum supported network size on the other. A common approach in system-level (SL) simulations is to employ various abstractions, including propagation models that can be parameterized by only a small number of values. When investigating high-speed train (HST) scenarios, the distinct effects of this particular environment have to be reflected in the chosen models. To verify the validity of SL simulations in this context, we incorporate the results of channel measurements, obtained in a real-world train transmission scenario, into our SL simulations and compare the outcomes with results post-processed directly from the measurement data. We show that, despite the various abstraction steps in SL simulations, average throughput results are in good accordance and general trends are preserved. Additionally, we take advantage of the more flexible SL simulations and investigate the influence of feedback delay and jitter. In a final step, we discuss the advantages of introducing a channel quality indicator (CQI) feedback backoff in order to improve the reliability of the connection in this context.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Visual Quality Analysis of Judder Effect on Head Mounted Displays.\n \n \n \n \n\n\n \n Mahmoudpour, S.; and Schelkens, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"VisualPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902665,\n  author = {S. Mahmoudpour and P. Schelkens},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Visual Quality Analysis of Judder Effect on Head Mounted Displays},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The extended field of view (FoV) offered by head mounted displays (HMD) increases the immersive experience, but it also introduces new visual quality challenges to be addressed. The judder artefact is a quality degradation factor that appears during object tracking and it is caused by eye movement relative to the display. As a first attempt to investigate the negative effect of judder in wide FoV applications, we built a new dataset of omnidirectional videos at different bitrates and judder severity levels. Two subjective tests were conducted to assess the quality in terms of perceived severity of judder in compressed video sequences. The outcomes provide new findings about the effect of judder on human perception. 
The results also give further understanding about the interaction between presentation quality and judder that can be utilized for developing objective models to predict quality degradation in presence of judder.},\n  keywords = {data compression;eye;helmet mounted displays;image motion analysis;image sequences;object tracking;video coding;video signal processing;subjective tests;compressed video sequences;visual quality analysis;judder effect;head mounted displays;HMD;visual quality challenges;judder artefact;quality degradation factor;object tracking;eye movement;omnidirectional videos;judder severity levels;FoV applications;Videos;Video sequences;Visualization;Object tracking;Target tracking;Bit rate;Quality of experience;omnidirectional video;visual quality;judder;field of view;object tracking},\n  doi = {10.23919/EUSIPCO.2019.8902665},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532689.pdf},\n}\n\n
\n
\n\n\n
\n The extended field of view (FoV) offered by head-mounted displays (HMDs) increases the immersive experience, but it also introduces new visual quality challenges. The judder artefact is a quality degradation factor that appears during object tracking and is caused by eye movement relative to the display. As a first attempt to investigate the negative effect of judder in wide-FoV applications, we built a new dataset of omnidirectional videos at different bitrates and judder severity levels. Two subjective tests were conducted to assess quality in terms of the perceived severity of judder in compressed video sequences. The outcomes provide new findings about the effect of judder on human perception. The results also offer further insight into the interaction between presentation quality and judder, which can be utilized to develop objective models that predict quality degradation in the presence of judder.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian time-domain multiple sound source localization for a stochastic machine.\n \n \n \n \n\n\n \n Frisch, R.; Faix, M.; Droulez, J.; Girin, L.; and Mazer, E.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902666,\n  author = {R. Frisch and M. Faix and J. Droulez and L. Girin and E. Mazer},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian time-domain multiple sound source localization for a stochastic machine},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We propose a time-domain multiple sound source localization (SSL) method based on Bayesian inference. This method is specifically designed to run on the stochastic machines (SM) that we are currently developing to perform efficient low-level sensor signal processing with ultra-low power consumption. The proposed SSL method is divided into two main parts. First, a probabilistic model is run on 50 very short time frames (3.75 ms each) of multichannel recorded signals. Second, the results obtained on the different frames are fused to obtain a final localization map. Using the system in a supervised way allows to extract estimated source locations by selecting as many maxima as there are sources in the room. We explain how this method is implemented on a SM. Experiments are presented to illustrate the performance and robustness of the resulting system.},\n  keywords = {Bayes methods;inference mechanisms;sensors;signal processing;stochastic processes;stochastic machine;ultra-low power consumption;SSL method;multichannel recorded signals;estimated source locations;Bayesian time-domain multiple sound source localization;Bayesian inference;low-level sensor signal processing;Microphones;Stochastic processes;Position measurement;Bayes methods;Time-domain analysis;Probabilistic logic;Signal processing;Multiple sound source localization;time-domain processing;Bayesian stochastic machine;specific hardware},\n  doi = {10.23919/EUSIPCO.2019.8902666},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529514.pdf},\n}\n\n
\n
\n\n\n
\n We propose a time-domain multiple sound source localization (SSL) method based on Bayesian inference. This method is specifically designed to run on the stochastic machines (SMs) that we are currently developing to perform efficient low-level sensor signal processing with ultra-low power consumption. The proposed SSL method is divided into two main parts. First, a probabilistic model is run on 50 very short time frames (3.75 ms each) of multichannel recorded signals. Second, the results obtained on the different frames are fused to obtain a final localization map. Using the system in a supervised way allows estimated source locations to be extracted by selecting as many maxima as there are sources in the room. We explain how this method is implemented on an SM. Experiments are presented to illustrate the performance and robustness of the resulting system.\n
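The paper's contribution is the Bayesian formulation and its stochastic-machine implementation. As background, the classical time-domain ingredient of SSL, estimating a time difference of arrival (TDOA) from the cross-correlation peak of two microphone signals, can be sketched as follows (the white-noise source and integer sample delay are assumptions of the toy, not the paper's setup):

```python
import numpy as np

delay = 12                        # true inter-microphone delay in samples
rng = np.random.default_rng(0)
s = rng.standard_normal(2048)     # wideband (white) source signal
x1 = s
x2 = np.roll(s, delay)            # second microphone hears a delayed copy

# Full cross-correlation of the two channels; the peak lag is the TDOA estimate.
corr = np.correlate(x2, x1, mode="full")
lags = np.arange(-len(s) + 1, len(s))
tdoa = lags[np.argmax(corr)]
```

Scanning such TDOA evidence over a grid of candidate angles is what the paper's probabilistic model effectively replaces with Bayesian inference on short frames.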
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n TV-CAR speech analysis based on Regularized LP.\n \n \n \n \n\n\n \n Funaki, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"TV-CARPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902667,\n  author = {K. Funaki},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {TV-CAR speech analysis based on Regularized LP},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Linear Prediction (LP) analysis is speech analysis to estimate AR (Auto-Regressive) coefficients to represent the all-pole spectrum that is applied in speech synthesis recently besides speech coding. We have proposed l2-norm optimization based TV-CAR (Time-Varying Complex AR) speech analysis for an analytic signal, MMSE (Minimizing Mean Square Error) or ELS (Extended Least Square) method, and we have applied them into the speech processing such as robust ASR or F0 estimation of speech. On the other hand, B.Kleijn et al. have proposed Regularized Linear Prediction (RLP) method to suppress pitch related bias that is an overestimation of the first formant. In the RLP, l2-norm regularized term that is the norm of spectral changes in the frequencies is introduced to suppress the rapid spectral changes. The RLP estimates the parameter so as to minimize l2-norm criterion added by the l2-norm regularized penalty term. 
In this paper, the RLP-based TV-CAR speech analysis is proposed and evaluated with the F0 estimation of speech using IRAPT (Instantaneous RAPT) with Keele Pitch Database under noisy conditions.},\n  keywords = {correlation methods;frequency estimation;least mean squares methods;speech coding;speech recognition;speech synthesis;time-varying systems;linear prediction analysis;Auto-Regressive;all-pole spectrum;speech synthesis;speech coding;mean square error minimization;0 estimation;Regularized Linear Prediction method;RLP-based TV-CAR speech analysis;time-varying complex AR;l2norm optimization based TV-CAR;Estimation;Mathematical model;Frequency estimation;Speech analysis;Analytical models;Speech coding;Speech processing;Time-Varying Complex AR (TV-CAR) analysis;Analytic signal;l2-norm regularization;F0 estimation of speech},\n  doi = {10.23919/EUSIPCO.2019.8902667},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533023.pdf},\n}\n\n
\n
\n\n\n
\n Linear Prediction (LP) analysis estimates AR (Auto-Regressive) coefficients that represent the all-pole spectrum; it has recently been applied in speech synthesis, in addition to speech coding. We have previously proposed l2-norm optimization based TV-CAR (Time-Varying Complex AR) speech analysis for an analytic signal, using the MMSE (Minimum Mean Square Error) or ELS (Extended Least Squares) method, and applied it to speech processing tasks such as robust ASR and F0 estimation of speech. B. Kleijn et al. have proposed the Regularized Linear Prediction (RLP) method to suppress pitch-related bias, i.e., an overestimation of the first formant. In RLP, an l2-norm regularization term, the norm of spectral changes across frequency, is introduced to suppress rapid spectral changes; the parameters are estimated by minimizing the l2-norm criterion plus this regularized penalty term. In this paper, RLP-based TV-CAR speech analysis is proposed and evaluated on F0 estimation of speech using IRAPT (Instantaneous RAPT) with the Keele Pitch Database under noisy conditions.\n
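For orientation, a minimal regularized-LP sketch is given below. It uses a plain Tikhonov (ridge) penalty on the AR coefficients rather than RLP's penalty on spectral change across frequency, so it only illustrates the general "least squares plus l2 penalty" structure; the AR(2) process and the λ value are arbitrary toy choices.

```python
import numpy as np

def regularized_lp(x, order, lam=1e-3):
    """Estimate AR coefficients a (x[n] ~ sum_k a[k] x[n-k-1... n-order])
    by least squares with an l2 (Tikhonov) penalty on the coefficients."""
    N = len(x)
    # Column k holds the lag-(k+1) regressor x[n-k-1].
    X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    y = x[order:]
    A = X.T @ X + lam * np.eye(order)   # regularized normal equations
    return np.linalg.solve(A, X.T @ y)

# Synthetic AR(2) process: x[n] = 0.75 x[n-1] - 0.5 x[n-2] + e[n]
rng = np.random.default_rng(1)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for n in range(2, len(x)):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + e[n]
a = regularized_lp(x, order=2)
```

With a small λ the estimates stay close to ordinary LP; RLP's frequency-domain penalty instead shapes which spectra are discouraged, not merely the coefficient norm.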
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Region-Based Relaxations To Accelerate Greedy Approaches.\n \n \n \n \n\n\n \n Dorffer, C.; Herzet, C.; and Drémeau, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Region-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902669,\n  author = {C. Dorffer and C. Herzet and A. Drémeau},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Region-Based Relaxations To Accelerate Greedy Approaches},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We propose a low-computational strategy for the efficient implementation of the “atom selection step” in sparse representation algorithms. The proposed procedure is based on simple tests enabling to identify subsets of atoms which cannot be selected. Our procedure applies on both discrete or continuous dictionaries. Experiments performed on the standard “Gaussian deconvolution” problem show the computational gain induced by the proposed approach.},\n  keywords = {deconvolution;Gaussian processes;greedy algorithms;signal representation;region-based relaxations;greedy approaches;low-computational strategy;atom selection step;sparse representation algorithms;discrete dictionaries;continuous dictionaries;standard Gaussian deconvolution problem;Dictionaries;Signal processing algorithms;Upper bound;Complexity theory;Atomic measurements;Europe;Signal processing;Sparse approximation;atom selection;low-complexity methods.},\n  doi = {10.23919/EUSIPCO.2019.8902669},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533355.pdf},\n}\n\n
\n
\n\n\n
\n We propose a low-complexity strategy for the efficient implementation of the “atom selection step” in sparse representation algorithms. The proposed procedure is based on simple tests that identify subsets of atoms which cannot be selected. Our procedure applies to both discrete and continuous dictionaries. Experiments performed on the standard “Gaussian deconvolution” problem show the computational gain induced by the proposed approach.\n
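The "atom selection step" being accelerated here is the correlation test inside greedy algorithms such as Orthogonal Matching Pursuit (OMP). A minimal OMP loop, without the paper's region-based pruning and with an orthonormal toy dictionary, looks like this:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms (unit-norm
    columns of D) and least-squares refit the coefficients at each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Atom selection step: pick the atom most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

rng = np.random.default_rng(2)
D, _ = np.linalg.qr(rng.standard_normal((64, 64)))   # orthonormal dictionary
x = np.zeros(64)
x[[3, 17, 42]] = [2.0, -1.5, 1.0]                    # 3-sparse ground truth
y = D @ x
support, coef = omp(D, y, k=3)
```

The `D.T @ residual` product is exactly the cost the paper's region-based tests avoid computing for atoms that provably cannot win the selection.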
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cross-domain Knowledge Transfer Schemes for 3D Human Action Recognition.\n \n \n \n \n\n\n \n Psaltis, A.; Papadopoulos, G. T.; and Daras, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Cross-domainPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902670,\n  author = {A. Psaltis and G. T. Papadopoulos and P. Daras},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Cross-domain Knowledge Transfer Schemes for 3D Human Action Recognition},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Previous work in 3D human action recognition has been mainly confined to schemes in a single domain, exploiting in principle skeleton-tracking data, due to their compact representation and efficient modeling of the observed motion dynamics. However, in order to extend and adapt the learning process to multi-modal domains, inevitably the focus needs also to be put on cross-domain analysis. On the other hand, attention schemes, which have lately been applied to numerous application cases and exhibited promising results, can exploit the intra-affinity of the considered modalities and can then be used for performing intra-modality knowledge transfer, e.g. to transfer domain-specific knowledge of the skeleton modality to the flow one and vice verca. This study investigates novel cross-modal attention-based strategies to efficiently model global contextual information regarding the action dynamics, aiming to contribute towards increased overall recognition performance. In particular, a new methodology for transferring knowledge across domains is introduced, by taking advantage of the increased temporal modeling capabilities of Long Short Term Memory (LSTM) models. Additionally, extensive experiments and thorough comparative evaluation provide a detailed analysis of the problem at hand and demonstrate the particular characteristics of the involved attention-enhanced schemes. 
The overall proposed approach achieves state-of-the-art performance in the currently most challenging public dataset, namely the NTU RGB-D one, surpassing similar uni/multi-modal representation schemes.},\n  keywords = {image colour analysis;image motion analysis;image recognition;image representation;learning (artificial intelligence);recurrent neural nets;domain-specific knowledge;skeleton modality;cross-modal attention-based strategies;action dynamics;temporal modeling capabilities;long short term memory models;cross-domain knowledge transfer schemes;3D human action recognition;principle skeleton-tracking data;motion dynamics;cross-domain analysis;attention schemes;attention-enhanced schemes;Three-dimensional displays;Adaptation models;Logic gates;Skeleton;Knowledge transfer;Solid modeling;Action recognition;attention schemes;deep learning},\n  doi = {10.23919/EUSIPCO.2019.8902670},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529321.pdf},\n}\n\n
\n
\n\n\n
\n Previous work in 3D human action recognition has mainly been confined to schemes in a single domain, principally exploiting skeleton-tracking data, due to their compact representation and efficient modeling of the observed motion dynamics. However, in order to extend and adapt the learning process to multi-modal domains, the focus inevitably also needs to be put on cross-domain analysis. On the other hand, attention schemes, which have lately been applied to numerous application cases and have exhibited promising results, can exploit the intra-affinity of the considered modalities and can then be used for performing cross-modality knowledge transfer, e.g. to transfer domain-specific knowledge of the skeleton modality to the flow one and vice versa. This study investigates novel cross-modal attention-based strategies to efficiently model global contextual information regarding the action dynamics, aiming to contribute towards increased overall recognition performance. In particular, a new methodology for transferring knowledge across domains is introduced, taking advantage of the increased temporal modeling capabilities of Long Short-Term Memory (LSTM) models. Additionally, extensive experiments and a thorough comparative evaluation provide a detailed analysis of the problem at hand and demonstrate the particular characteristics of the involved attention-enhanced schemes. The overall proposed approach achieves state-of-the-art performance on the currently most challenging public dataset, namely NTU RGB-D, surpassing similar uni-/multi-modal representation schemes.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Variational Inference for DOA Estimation in Reverberant Conditions.\n \n \n \n \n\n\n \n Soussana, Y.; and Gannot, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"VariationalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902672,\n  author = {Y. Soussana and S. Gannot},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Variational Inference for DOA Estimation in Reverberant Conditions},\n  year = {2019},\n  pages = {1-5},\n  abstract = {A concurrent speaker direction of arrival (DOA) estimator in a reverberant environment is presented. The reverberation phenomenon, if not properly addressed, is known to degrade the performance of DOA estimators. In this paper, we investigate a variational Bayesian (VB) inference framework for clustering time-frequency (TF) bins to candidate angles. The received microphone signals are modelled as a sum of anechoic speech and the reverberation component. Our model relies on Gaussian prior for the speech signal and Gamma prior for the speech precision. The noise covariance matrix is modelled by a time-invariant full-rank coherence matrix multiplied by time-varying gain with Gamma prior as well. The benefits of the presented model are verified in a simulation study using measured room impulse responses.},\n  keywords = {acoustic signal processing;array signal processing;Bayes methods;covariance matrices;direction-of-arrival estimation;interference (signal);microphones;reverberation;speech processing;transient response;variational inference;DOA estimation;reverberant conditions;concurrent speaker direction;arrival estimator;reverberant environment;reverberation phenomenon;DOA estimators;variational Bayesian inference framework;time-frequency bins;candidate angles;microphone signals;anechoic speech;reverberation component;speech signal;speech precision;noise covariance matrix;time-invariant full-rank coherence matrix;time-varying gain;Microphones;Direction-of-arrival estimation;Reverberation;Estimation;Bayes methods;Time-frequency analysis;DOA estimation;Variational Bayes inference;Variational Expectation-Maximization},\n  doi = {10.23919/EUSIPCO.2019.8902672},\n  issn = {2076-1465},\n  month = 
{Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528096.pdf},\n}\n\n
\n
\n\n\n
\n A concurrent-speaker direction of arrival (DOA) estimator for reverberant environments is presented. The reverberation phenomenon, if not properly addressed, is known to degrade the performance of DOA estimators. In this paper, we investigate a variational Bayesian (VB) inference framework for clustering time-frequency (TF) bins to candidate angles. The received microphone signals are modelled as a sum of anechoic speech and a reverberation component. Our model relies on a Gaussian prior for the speech signal and a Gamma prior for the speech precision. The noise covariance matrix is modelled by a time-invariant full-rank coherence matrix multiplied by a time-varying gain, also with a Gamma prior. The benefits of the presented model are verified in a simulation study using measured room impulse responses.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian Model Selection for Nonlinear Acoustic Echo Cancellation.\n \n \n \n \n\n\n \n Halimeh, M. M.; Brendel, A.; and Kellermann, W.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902673,\n  author = {M. M. Halimeh and A. Brendel and W. Kellermann},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian Model Selection for Nonlinear Acoustic Echo Cancellation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we introduce a Bayesian framework to perform model selection for nonlinear acoustic echo cancellation. This is especially important for scenarios where the functional form of the underlying nonlinear distortion is time-varying and/or is unknown, e.g., nonlinear distortions that vary with the volume level of the loudspeakers. To this end, the proposed method evaluates the model probabilities, or what is known as the evidence density, in a Bayesian manner. Thus, unlike convex and affine combination schemes of adaptive filters, the proposed method optimizes both the model complexity as well as the model performance by a single criterion. Moreover, by using the significance-aware principle, the proposed framework is realized in a computationally efficient way. 
The method is validated by three experiments using synthesized time-invariant nonlinearities, synthesized time-varying nonlinearities, and using real recorded nonlinearities.},\n  keywords = {acoustic signal processing;adaptive filters;Bayes methods;echo suppression;loudspeakers;nonlinear acoustics;probability;synthesized time-invariant nonlinearities;synthesized time-varying nonlinearities;recorded nonlinearities;Bayesian model selection;nonlinear acoustic echo cancellation;Bayesian framework;underlying nonlinear distortion;nonlinear distortions;model probabilities;Bayesian manner;affine combination schemes;model complexity;Nonlinear distortion;Adaptation models;Microphones;Bayes methods;Echo cancellers;Computational modeling},\n  doi = {10.23919/EUSIPCO.2019.8902673},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529796.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we introduce a Bayesian framework to perform model selection for nonlinear acoustic echo cancellation. This is especially important for scenarios where the functional form of the underlying nonlinear distortion is time-varying and/or unknown, e.g., nonlinear distortions that vary with the volume level of the loudspeakers. To this end, the proposed method evaluates the model probabilities, known as the evidence density, in a Bayesian manner. Thus, unlike convex and affine combination schemes of adaptive filters, the proposed method optimizes both model complexity and model performance by a single criterion. Moreover, by using the significance-aware principle, the proposed framework is realized in a computationally efficient way. The method is validated by three experiments using synthesized time-invariant nonlinearities, synthesized time-varying nonlinearities, and real recorded nonlinearities.\n
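The evidence-based selection idea can be illustrated with a generic Bayesian linear-in-parameters example: comparing a linear echo-path model against one with an added cubic term, given data that contains cubic distortion. This is only the textbook marginal-likelihood computation, not the paper's significance-aware recursive implementation; all model and noise parameters below are made up for the toy.

```python
import numpy as np

def log_evidence(Phi, y, alpha=1.0, sigma2=0.01):
    """Log marginal likelihood of y under a Bayesian linear model
    y = Phi w + e, with w ~ N(0, alpha I) and e ~ N(0, sigma2 I):
    marginally, y ~ N(0, sigma2 I + alpha Phi Phi^T)."""
    N = len(y)
    C = sigma2 * np.eye(N) + alpha * Phi @ Phi.T
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 200)                                # loudspeaker signal
y = 0.9 * x - 0.4 * x**3 + 0.1 * rng.standard_normal(200)  # cubic distortion

lin = log_evidence(x[:, None], y)                   # linear-only model
cub = log_evidence(np.column_stack([x, x**3]), y)   # linear + cubic model
```

Because the evidence integrates out the weights, the richer model wins here only when the data actually support the cubic term; an unnecessary extra parameter would instead be penalized, the single-criterion trade-off the abstract describes.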
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n BEV Object Tracking for LIDAR-based Ground Truth Generation.\n \n \n \n \n\n\n \n Montero, D.; Aranjuelo, N.; Senderos, O.; and Nieto, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"BEVPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902674,\n  author = {D. Montero and N. Aranjuelo and O. Senderos and M. Nieto},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {BEV Object Tracking for LIDAR-based Ground Truth Generation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Building ADAS (Advanced Driver Assistance Systems) or AD (Autonomous Driving) vehicles implies the acquisition of large volumes of data and a costly annotation process to create labeled metadata. Labels are then used for either ground truth composition (for test and validation of algorithms) or to set-up training datasets for machine learning processes. In this paper we present a 3D object tracking mechanism that operates on detections from point cloud sequences. It works in two steps: first an online phase which runs a Branch and Bound algorithm (BBA) to solve the association between detections and tracks, and a second filtering step which adds the required temporal smoothness. Results on KITTI dataset show the produced tracks are accurate and robust against noisy and missing detections, as produced by state-of-the-art deep learning detectors.},\n  keywords = {driver information systems;image sequences;learning (artificial intelligence);object detection;object tracking;optical radar;machine learning processes;3D object tracking mechanism;point cloud sequences;BEV object tracking;LIDAR-based ground truth generation;ADAS;autonomous driving;labeled metadata;advanced driver assistance systems;Three-dimensional displays;Annotations;Training;Detectors;Laser radar;Tools;Europe},\n  doi = {10.23919/EUSIPCO.2019.8902674},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533748.pdf},\n}\n\n
\n
\n\n\n
\n Building ADAS (Advanced Driver Assistance Systems) or AD (Autonomous Driving) vehicles implies the acquisition of large volumes of data and a costly annotation process to create labeled metadata. Labels are then used either for ground truth composition (for testing and validating algorithms) or to set up training datasets for machine learning processes. In this paper we present a 3D object tracking mechanism that operates on detections from point cloud sequences. It works in two steps: first, an online phase runs a Branch and Bound algorithm (BBA) to solve the association between detections and tracks; second, a filtering step adds the required temporal smoothness. Results on the KITTI dataset show that the produced tracks are accurate and robust against the noisy and missing detections produced by state-of-the-art deep learning detectors.\n
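The association step can be illustrated with an optimal one-to-one assignment between predicted track positions and bird's-eye-view detections. The paper solves this with Branch and Bound; the exhaustive search below is just a tiny stand-in with the same objective (Euclidean cost, hypothetical 2-D coordinates):

```python
import numpy as np
from itertools import permutations

def associate(tracks, detections):
    """Optimal one-to-one track/detection association by exhaustive search
    over permutations (a brute-force stand-in for a branch-and-bound solver)."""
    # Pairwise Euclidean cost matrix, shape (n_tracks, n_detections).
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    best, best_cost = None, np.inf
    for perm in permutations(range(len(detections))):
        c = sum(cost[i, j] for i, j in enumerate(perm))
        if c < best_cost:
            best, best_cost = perm, c
    return list(best), best_cost

tracks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # predicted positions
dets   = np.array([[10.2, 0.1], [0.1, 9.8], [-0.2, 0.3]])   # new detections
assignment, _ = associate(tracks, dets)
```

Branch and Bound explores the same permutation tree but prunes any partial assignment whose cost lower bound already exceeds the best complete one, which is what makes it tractable for realistic detection counts.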
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Wavelength Proportional Arrangement of Virtual Microphones Based on Interpolation/Extrapolation for Underdetermined Speech Enhancement.\n \n \n \n \n\n\n \n Jinzai, R.; Yamaoka, K.; Matsumoto, M.; Makino, S.; and Yamada, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"WavelengthPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902675,\n  author = {R. Jinzai and K. Yamaoka and M. Matsumoto and S. Makino and T. Yamada},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Wavelength Proportional Arrangement of Virtual Microphones Based on Interpolation/Extrapolation for Underdetermined Speech Enhancement},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We previously proposed the virtual microphone technique to improve the speech enhancement performance in underdetermined situations, in which the number of channels is virtually increased by estimating extra observed signals at arbitrary positions along the straight line formed by real microphones. In our previous work, the effectiveness of the interpolation of virtual microphone signals for speech enhancement was experimentally confirmed. In this study, to examine the effectiveness of the extrapolation of a virtual microphone in improving the speech enhancement performance, we apply this technique to speech enhancement using a maximum signal-to-noise ratio (SNR) beamformer. Next, to improve the speech enhancement performance on the basis of the virtual microphone technique, we propose a new arrangement where a virtual microphone is placed in a position proportional to the wavelength. From the results of an experiment in an underdetermined situation, we confirmed that the proposed method markedly improves speech enhancement performance. 
Moreover, we present directivity patterns to confirm the behavior of each method of positioning the virtual microphone.},\n  keywords = {acoustic signal processing;array signal processing;blind source separation;extrapolation;interpolation;microphone arrays;microphones;speech enhancement;speech enhancement performance;virtual microphone technique;underdetermined situation;wavelength proportional arrangement;virtual microphone signals;signal-to-noise ratio beamformer;directivity patterns;Microphones;Interpolation;Extrapolation;Speech enhancement;Signal to noise ratio;Interference;array signal processing;virtual microphone;speech enhancement;underdetermined situation;beamforming},\n  doi = {10.23919/EUSIPCO.2019.8902675},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531830.pdf},\n}\n\n
\n
\n\n\n
\n We previously proposed the virtual microphone technique to improve the speech enhancement performance in underdetermined situations, in which the number of channels is virtually increased by estimating extra observed signals at arbitrary positions along the straight line formed by real microphones. In our previous work, the effectiveness of the interpolation of virtual microphone signals for speech enhancement was experimentally confirmed. In this study, to examine the effectiveness of the extrapolation of a virtual microphone in improving the speech enhancement performance, we apply this technique to speech enhancement using a maximum signal-to-noise ratio (SNR) beamformer. Next, to improve the speech enhancement performance on the basis of the virtual microphone technique, we propose a new arrangement where a virtual microphone is placed in a position proportional to the wavelength. From the results of an experiment in an underdetermined situation, we confirmed that the proposed method markedly improves speech enhancement performance. Moreover, we present directivity patterns to confirm the behavior of each method of positioning the virtual microphone.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Recursive LCMVEs with Non-Stationary Constraints and Partially Coherent Signal Sources.\n \n \n \n \n\n\n \n Chaumette, E.; Vilà-Valls, J.; Vincent, F.; and Closas, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RecursivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902676,\n  author = {E. Chaumette and J. {Vilà-Valls} and F. Vincent and P. Closas},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Recursive LCMVEs with Non-Stationary Constraints and Partially Coherent Signal Sources},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In estimating an unknown parameter vector in a linear regression model, it is common to use linearly constrained minimum variance estimators (LCMVEs). For a long time, LCMVEs were studied in the context of stationary constraints under both stationary and non-stationary environments. Recently, a new family of non-stationary constraints leading to Kalman-like recursive LCMVEs has been introduced for fully coherent signal (FCS) sources. A noteworthy feature of this family is that it allows new constraints to be incorporated at each new observation. This article extends these results to the case of partially coherent signal (PCS) sources. Indeed, without ad hoc modifications of the Kalman-like recursion, estimation of the amplitudes of PCS sources exhibits a performance breakdown even for a slight loss of coherence. Last but not least, it is shown that PCS sources introduce a lower limit in the achievable performance in the large sample regime.},\n  keywords = {array signal processing;direction-of-arrival estimation;parameter estimation;regression analysis;PCS sources;nonstationary constraints;partially coherent signal sources;unknown parameter vector;linear regression model;linearly constrained minimum variance estimators;stationary constraints;nonstationary environments;fully coherent signal sources;Kalman-like recursion;recursive LCMVE;Lead;Estimation;Electric breakdown;Noise measurement;Covariance matrices;Parametric statistics;Stochastic processes},\n  doi = {10.23919/EUSIPCO.2019.8902676},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531285.pdf},\n}\n\n
\n
\n\n\n
\n In estimating an unknown parameter vector in a linear regression model, it is common to use linearly constrained minimum variance estimators (LCMVEs). For a long time, LCMVEs were studied in the context of stationary constraints under both stationary and non-stationary environments. Recently, a new family of non-stationary constraints leading to Kalman-like recursive LCMVEs has been introduced for fully coherent signal (FCS) sources. A noteworthy feature of this family is that it allows new constraints to be incorporated at each new observation. This article extends these results to the case of partially coherent signal (PCS) sources. Indeed, without ad hoc modifications of the Kalman-like recursion, estimation of the amplitudes of PCS sources exhibits a performance breakdown even for a slight loss of coherence. Last but not least, it is shown that PCS sources introduce a lower limit in the achievable performance in the large sample regime.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Low Complexity Image Compression Algorithm for IoT Multimedia Applications.\n \n \n \n \n\n\n \n Campobello, G.; and Segreto, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902678,\n  author = {G. Campobello and A. Segreto},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Low Complexity Image Compression Algorithm for IoT Multimedia Applications},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper focuses on a novel lossless image compression algorithm which can be efficiently used for Internet of Things (IoT) multimedia applications. The proposed algorithm has low memory requirements and relies on a simple and efficient encoding scheme. Thus, it can be easily implemented even on low-cost microcontrollers such as those commonly used in several IoT platforms. Despite its simplicity, comparison results on different image datasets show that the proposed algorithm achieves compression ratios comparable with other more complex state-of-the-art solutions.},\n  keywords = {computational complexity;data compression;encoding;image coding;microcontrollers;multimedia computing;low memory requirements;simple encoding scheme;Internet of Things multimedia applications;IoT multimedia applications;low complexity image compression algorithm;compression ratios;image datasets;IoT platforms;low-cost microcontrollers;Image coding;Signal processing algorithms;Compression algorithms;Prediction algorithms;Channel coding;Internet of Things;Gray-scale},\n  doi = {10.23919/EUSIPCO.2019.8902678},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533486.pdf},\n}\n\n
\n
\n\n\n
\n This paper focuses on a novel lossless image compression algorithm which can be efficiently used for Internet of Things (IoT) multimedia applications. The proposed algorithm has low memory requirements and relies on a simple and efficient encoding scheme. Thus, it can be easily implemented even on low-cost microcontrollers such as those commonly used in several IoT platforms. Despite its simplicity, comparison results on different image datasets show that the proposed algorithm achieves compression ratios comparable with other more complex state-of-the-art solutions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Waveform optimization for FDA Radar.\n \n \n \n \n\n\n \n Rubinshtein, N.; and Tabrikian, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"WaveformPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902679,\n  author = {N. Rubinshtein and J. Tabrikian},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Waveform optimization for FDA Radar},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper addresses the problem of transmit signal design for target localization in a frequency diverse array (FDA) radar. For this purpose, we derive the Cramér-Rao bound (CRB) for target localization in FDA radar. The derived CRB is optimized with respect to the transmit signal parameters. It is commonly assumed that in radar systems the direction-of-arrival (DOA) estimation accuracy and resolution are determined by the transmit-receive array aperture. In FDA radar, a coupling between range and DOA estimation is generated. Using the derived CRB, we show that in FDA radar one is able to improve the DOA estimation accuracy and resolution by increasing the transmit signal bandwidth. The target localization performance is analyzed theoretically and via simulations, and it is shown that using the proposed approach for transmit signal optimization results in superior target localization performance compared to conventional methods.},\n  keywords = {array signal processing;direction-of-arrival estimation;radar signal processing;waveform optimization;transmit signal design;frequency diverse array radar;Cramér-Rao bound;derived CRB;transmit signal parameters;radar systems;transmit-receive array aperture;FDA radar one;transmit signal bandwidth;transmit signal optimization;superior target localization performance;Optimization;Estimation;Direction-of-arrival estimation;Bandwidth;MIMO radar;Array signal processing;Frequency diverse array (FDA);Cramér-Rao bound (CRB);waveform optimization;MIMO radar},\n  doi = {10.23919/EUSIPCO.2019.8902679},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529509.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of transmit signal design for target localization in a frequency diverse array (FDA) radar. For this purpose, we derive the Cramér-Rao bound (CRB) for target localization in FDA radar. The derived CRB is optimized with respect to the transmit signal parameters. It is commonly assumed that in radar systems the direction-of-arrival (DOA) estimation accuracy and resolution are determined by the transmit-receive array aperture. In FDA radar, a coupling between range and DOA estimation is generated. Using the derived CRB, we show that in FDA radar one is able to improve the DOA estimation accuracy and resolution by increasing the transmit signal bandwidth. The target localization performance is analyzed theoretically and via simulations, and it is shown that using the proposed approach for transmit signal optimization results in superior target localization performance compared to conventional methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Data Preprocessing for ANN-based Industrial Time-Series Forecasting with Imbalanced Data.\n \n \n \n \n\n\n \n Pisa, I.; Santín, I.; Vicario, J. L.; Morell, A.; and Vilanova, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DataPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902682,\n  author = {I. Pisa and I. Santín and J. L. Vicario and A. Morell and R. Vilanova},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Data Preprocessing for ANN-based Industrial Time-Series Forecasting with Imbalanced Data},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The evolution of Industry towards the 4.0 paradigm has motivated the adoption of Artificial Neural Networks (ANNs) to deal with applications where predictive and maintenance tasks are performed. These tasks become difficult to carry out when rare events are present due to the imbalance of data. This is because the training of ANNs can be biased. Conventional techniques addressing this problem are mainly based on resampling-based approaches. However, these are not always feasible when dealing with time-series forecasting tasks in industrial scenarios. For that reason, this work proposes the application of data preprocessing techniques especially designed to face this scenario, a problem which has not been covered enough in the state-of-the-art. The considered techniques are applied over time-series data coming from Wastewater Treatment Plants (WWTPs). Our proposal significantly outperforms current strategies, showing a 68% improvement in terms of RMSE when rare events are addressed.},\n  keywords = {data handling;forecasting theory;neural nets;production engineering computing;time series;wastewater treatment;data preprocessing;ANN-based industrial time-series forecasting;imbalanced data;artificial neural networks;predictive maintenance tasks;resampling-based approaches;time-series forecasting tasks;industrial scenarios;time-series data;wastewater treatment plants;WWTPs;Data preprocessing;Task analysis;Europe;Sensors;Correlation;Training;Pollution measurement;Data preprocessing;Imbalanced data;Rare events;Artificial Neural Network},\n  doi = {10.23919/EUSIPCO.2019.8902682},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528816.pdf},\n}\n\n
\n
\n\n\n
\n The evolution of Industry towards the 4.0 paradigm has motivated the adoption of Artificial Neural Networks (ANNs) to deal with applications where predictive and maintenance tasks are performed. These tasks become difficult to carry out when rare events are present due to the imbalance of data. This is because the training of ANNs can be biased. Conventional techniques addressing this problem are mainly based on resampling-based approaches. However, these are not always feasible when dealing with time-series forecasting tasks in industrial scenarios. For that reason, this work proposes the application of data preprocessing techniques especially designed to face this scenario, a problem which has not been covered enough in the state-of-the-art. The considered techniques are applied over time-series data coming from Wastewater Treatment Plants (WWTPs). Our proposal significantly outperforms current strategies, showing a 68% improvement in terms of RMSE when rare events are addressed.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Gearbox Fault Diagnosis Using Convolutional Neural Networks And Support Vector Machines.\n \n \n \n \n\n\n \n Chen, Z.; Liu, C.; Gryllias, K.; and Li, W.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"GearboxPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902686,\n  author = {Z. Chen and C. Liu and K. Gryllias and W. Li},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Gearbox Fault Diagnosis Using Convolutional Neural Networks And Support Vector Machines},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Fast and accurate fault diagnosis is important to ensure the reliability and the operation safety of rotating machinery, and is often based on vibration analysis. In this paper, a novel approach combining Convolutional Neural Networks (CNN) and a Support Vector Machine (SVM) classifier is proposed, in order not only to leverage the advantages of deep discriminative features (learnt by the CNN) but also to exploit the generalization performance of SVM classifiers. Firstly, the Continuous Wavelet Transform (CWT) is employed to obtain the pre-processed representations of raw vibration signals. Then a novel CNN with a square-pooling architecture is built to extract high-level features, without requiring extra training and fine-tuning, thus demanding a reduced computation cost. Finally, an SVM is used as a classifier to conduct the fault classification. Experiments are conducted on a dataset collected from a gearbox. The results demonstrate that the proposed method achieves competitive results compared to other algorithms in terms of computational cost and accuracy.},\n  keywords = {convolutional neural nets;fault diagnosis;feature extraction;gears;neural net architecture;reliability;support vector machines;vibrational signal processing;vibrations;wavelet transforms;vibration analysis;convolutional neural networks;support vector machine classifier;SVM classifiers;continuous wavelet transform;CNN;square-pooling architecture;fault classification;gearbox fault diagnosis;operation safety;reliability;rotating machinery;feature extraction;Support vector machines;Feature extraction;Fault diagnosis;Convolution;Computer architecture;Vibrations;Wavelet transforms;Fault diagnosis;CNN;SVM;Gearboxes;Wavelets},\n  doi = {10.23919/EUSIPCO.2019.8902686},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533987.pdf},\n}\n\n
\n
\n\n\n
\n Fast and accurate fault diagnosis is important to ensure the reliability and the operation safety of rotating machinery, and is often based on vibration analysis. In this paper, a novel approach combining Convolutional Neural Networks (CNN) and a Support Vector Machine (SVM) classifier is proposed, in order not only to leverage the advantages of deep discriminative features (learnt by the CNN) but also to exploit the generalization performance of SVM classifiers. Firstly, the Continuous Wavelet Transform (CWT) is employed to obtain the pre-processed representations of raw vibration signals. Then a novel CNN with a square-pooling architecture is built to extract high-level features, without requiring extra training and fine-tuning, thus demanding a reduced computation cost. Finally, an SVM is used as a classifier to conduct the fault classification. Experiments are conducted on a dataset collected from a gearbox. The results demonstrate that the proposed method achieves competitive results compared to other algorithms in terms of computational cost and accuracy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Switch-Based Hybrid Precoding in mmWave Massive MIMO Systems.\n \n \n \n \n\n\n \n Nosrati, H.; Aboutanios, E.; Smith, D.; and Wang, X.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Switch-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902688,\n  author = {H. Nosrati and E. Aboutanios and D. Smith and X. Wang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Switch-Based Hybrid Precoding in mmWave Massive MIMO Systems},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In mmWave communications, large-scale arrays can be very advantageous. In such arrays, switch-based hybrid precoding (beamforming) is very promising in terms of energy efficiency and reduced complexity, as opposed to phase-shifter structures for beamforming. However, switch-based structures are binary, which means that the design of an optimum beamformer, at large scale in the analog domain, is a difficult task. We address this problem and propose a new method for the design of a switch-based hybrid precoder for massive MIMO communications in mmWave bands. We first cast the relevant maximization of mutual information as a binary, rank-constrained quadratic maximization, and solve it iteratively for each column of the analog precoder. The solution is then effectively approximated via a set of relaxations and sequential convex programming (SCP). Finally, we show the feasibility and effectiveness of our method via numerical results.},\n  keywords = {array signal processing;convex programming;iterative methods;millimetre wave communication;MIMO communication;precoding;telecommunication switching;mmWave massive MIMO systems;mmWave communications;large-scale arrays;switch-based hybrid precoding;beamforming;switch-based structures;optimum beamformer;switch-based hybrid precoder;massive MIMO communications;mmWave bands;analog precoder;Radio frequency;Matrix decomposition;Array signal processing;Massive MIMO;Precoding;Switches;Mutual information;Hybrid beamforming;Precoding;Millimeter wave communications;Massive MIMO.},\n  doi = {10.23919/EUSIPCO.2019.8902688},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533518.pdf},\n}\n\n
\n
\n\n\n
\n In mmWave communications, large-scale arrays can be very advantageous. In such arrays, switch-based hybrid precoding (beamforming) is very promising in terms of energy efficiency and reduced complexity, as opposed to phase-shifter structures for beamforming. However, switch-based structures are binary, which means that the design of an optimum beamformer, at large scale in the analog domain, is a difficult task. We address this problem and propose a new method for the design of a switch-based hybrid precoder for massive MIMO communications in mmWave bands. We first cast the relevant maximization of mutual information as a binary, rank-constrained quadratic maximization, and solve it iteratively for each column of the analog precoder. The solution is then effectively approximated via a set of relaxations and sequential convex programming (SCP). Finally, we show the feasibility and effectiveness of our method via numerical results.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reproducibility of Deep CNN for Biomedical Image Processing Across Frameworks and Architectures.\n \n \n \n \n\n\n \n Marrone, S.; Olivieri, S.; Piantadosi, G.; and Sansone, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ReproducibilityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902690,\n  author = {S. Marrone and S. Olivieri and G. Piantadosi and C. Sansone},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Reproducibility of Deep CNN for Biomedical Image Processing Across Frameworks and Architectures},\n  year = {2019},\n  pages = {1-5},\n  abstract = {With the increasing spread of easy and effective frameworks, in recent years Deep Learning approaches have become increasingly used in several application fields, including computer vision (such as natural and biomedical image processing), automatic speech recognition (ASR) and time-series analysis. If, on one hand, the availability of such frameworks allows developers to use the one they feel more comfortable with, on the other, it raises questions related to the reproducibility of the designed model across different hardware and software configurations, both at training and at inference times. The reproducibility assessment is important to determine if the resulting model produces good or bad outcomes just because of luckier or blunter environmental training conditions. This is a non-trivial problem for Deep Learning based applications, not only because their training and optimization phases strongly rely on stochastic procedures, but also because of the use of some heuristic considerations (mainly speculative procedures) at training time that, although they help in reducing the required computational effort, tend to introduce non-deterministic behavior, with a direct impact on the results and on the model's reproducibility. Usually, to face this problem, designers make use of probabilistic considerations about the distribution of data or focus their attention on very large datasets. However, this kind of approach does not really fit some application field standards (such as medical imaging analysis with Computer-Aided Detection and Diagnosis systems - CAD) that require strong demonstrable proofs of effectiveness and repeatability of results across the population. It is our opinion that in those cases it is of crucial importance to clarify if and to what extent a Deep Learning based application is stable and repeatable as well as effective, across different environmental (hardware and software) configurations. Therefore, the aim of this work is to quantitatively analyze the reproducibility problem of Convolutional Neural Networks (CNN) based approaches for biomedical image processing, in order to highlight the impact that a given software framework and hardware configuration might have when facing the same problem by the same means. In particular, we analyzed the problem of breast tissue segmentation in DCE-MRI by using a modified version of a 2D U-Net CNN, a very effective deep architecture for semantic segmentation, using two Deep Learning frameworks (MATLAB and TensorFlow) across different hardware configurations.},\n  keywords = {biomedical MRI;computer vision;convolutional neural nets;image segmentation;learning (artificial intelligence);medical image processing;optimisation;probability;deep CNN;biomedical image processing;computer vision;time-series analysis;software configurations;inference times;reproducibility assessment;blunter environmental training conditions;nontrivial problem;deep learning based applications;nondeterministic behavior;medical imaging analysis;convolutional neural networks based approaches;effective deep architecture;hardware configurations;automatic speech recognition;probabilistic considerations;breast tissue segmentation;semantic segmentation;optimization phases;Deep learning;Training;Mathematical model;Hardware;Graphics processing units;Biomedical image processing},\n  doi = {10.23919/EUSIPCO.2019.8902690},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533897.pdf},\n}\n\n
\n
\n\n\n
\n With the increasing spread of easy and effective frameworks, in recent years Deep Learning approaches have become increasingly used in several application fields, including computer vision (such as natural and biomedical image processing), automatic speech recognition (ASR) and time-series analysis. If, on one hand, the availability of such frameworks allows developers to use the one they feel more comfortable with, on the other, it raises questions related to the reproducibility of the designed model across different hardware and software configurations, both at training and at inference times. The reproducibility assessment is important to determine if the resulting model produces good or bad outcomes just because of luckier or blunter environmental training conditions. This is a non-trivial problem for Deep Learning based applications, not only because their training and optimization phases strongly rely on stochastic procedures, but also because of the use of some heuristic considerations (mainly speculative procedures) at training time that, although they help in reducing the required computational effort, tend to introduce non-deterministic behavior, with a direct impact on the results and on the model's reproducibility. Usually, to face this problem, designers make use of probabilistic considerations about the distribution of data or focus their attention on very large datasets. However, this kind of approach does not really fit some application field standards (such as medical imaging analysis with Computer-Aided Detection and Diagnosis systems - CAD) that require strong demonstrable proofs of effectiveness and repeatability of results across the population. It is our opinion that in those cases it is of crucial importance to clarify if and to what extent a Deep Learning based application is stable and repeatable as well as effective, across different environmental (hardware and software) configurations. Therefore, the aim of this work is to quantitatively analyze the reproducibility problem of Convolutional Neural Networks (CNN) based approaches for biomedical image processing, in order to highlight the impact that a given software framework and hardware configuration might have when facing the same problem by the same means. In particular, we analyzed the problem of breast tissue segmentation in DCE-MRI by using a modified version of a 2D U-Net CNN, a very effective deep architecture for semantic segmentation, using two Deep Learning frameworks (MATLAB and TensorFlow) across different hardware configurations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the segmentation of plantar foot thermal images with Deep Learning.\n \n \n \n \n\n\n \n Bougrine, A.; Harba, R.; Canals, R.; Ledee, R.; and Jabloun, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902691,\n  author = {A. Bougrine and R. Harba and R. Canals and R. Ledee and M. Jabloun},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {On the segmentation of plantar foot thermal images with Deep Learning},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Foot ulceration can be prevented by using thermal information of the plantar foot surface. Indeed, important indicators can be provided with a thermal infrared image. As part of a non-constraining acquisition protocol, these images are taken freehand with a smartphone equipped with a dedicated thermal camera. A total of 248 images have been obtained from an acquisition campaign composed of control and pathological subjects. Our aim is the segmentation of these plantar foot thermal images. To that end, we compare three deep learning methods, namely Fully Convolutional Networks (FCN), SegNet, and U-Net, with the previously proposed prior shape active contour-based method. 80% of our database serves to train the three deep learning networks and 20% is used for testing. When applied to our data, results show that the SegNet method outperforms the three other methods with a Dice Similarity Coefficient (DSC) equal to 97.26%. This method also shows efficiency in segmenting both feet simultaneously, with a DSC equal to 96.8%, for a smartphone-based plantar foot thermal analysis for diabetic patients.},\n  keywords = {diseases;image classification;image segmentation;learning (artificial intelligence);medical image processing;patient monitoring;smart phones;foot ulceration;thermal information;plantar foot surface;thermal infrared image;nonconstraining acquisition protocol;thermal camera;plantar foot thermal images;smartphone based plantar foot thermal analysis;deep learning networks;shape active contour-based method;deep learning methods;Dice similarity coefficient;SegNet method;Foot;Image segmentation;Shape;Convolution;Deep learning;Databases;Diabetes;Plantar foot thermal images;Deep Learning;prior shape active contour;image segmentation.},\n  doi = {10.23919/EUSIPCO.2019.8902691},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533332.pdf},\n}\n\n
\n
\n\n\n
\n Foot ulceration can be prevented by using thermal information of the plantar foot surface. Indeed, important indicators can be provided with a thermal infrared image. As part of a non-constraining acquisition protocol, these images are taken freehand with a smartphone equipped with a dedicated thermal camera. A total of 248 images have been obtained from an acquisition campaign composed of control and pathological subjects. Our aim is the segmentation of these plantar foot thermal images. To that end, we compare three deep learning methods, namely Fully Convolutional Networks (FCN), SegNet, and U-Net, with the previously proposed prior shape active contour-based method. 80% of our database serves to train the three deep learning networks and 20% is used for testing. When applied to our data, results show that the SegNet method outperforms the three other methods with a Dice Similarity Coefficient (DSC) equal to 97.26%. This method also shows efficiency in segmenting both feet simultaneously, with a DSC equal to 96.8%, for a smartphone-based plantar foot thermal analysis for diabetic patients.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-Complexity Switching Network Design for Hybrid Precoding in mmWave MIMO Systems.\n \n \n \n \n\n\n \n Molina, F.; and Borràs, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Low-ComplexityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902692,\n  author = {F. Molina and J. {Borràs}},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Low-Complexity Switching Network Design for Hybrid Precoding in mmWave MIMO Systems},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper deals with the design of a hybrid precoder for millimeter-wave MIMO systems. For the sake of concreteness, we consider an analog processing stage composed of a switching network with analog combining. The main contribution of this work consists on the proposal and evaluation of an optimization procedure based on a smart relaxation. The optimal hybrid precoder under a transmit power constraint is derived, after which, the analog precoding matrix is binarized. After an intuitive reasoning, we note that multiple solutions exist. Nevertheless, the (very) reduced computational complexity of the proposed optimization scheme makes it feasible for realistic implementations. Numerical results are reported to assess the performance of proposed hybrid precoder design.},\n  keywords = {computational complexity;millimetre wave communication;MIMO communication;optimisation;precoding;telecommunication switching;low-complexity switching network design;hybrid precoding;mmWave MIMO systems;millimeter-wave MIMO systems;analog processing stage;analog combining;optimization procedure;smart relaxation;optimal hybrid precoder;transmit power constraint;analog precoding matrix;computational complexity;optimization scheme;proposed hybrid precoder design;Radio frequency;Switches;MIMO communication;Optimization;Precoding;Computer architecture;Europe;Hybrid MIMO;switching network with analog combiner;precoder design;mmWave MIMO},\n  doi = {10.23919/EUSIPCO.2019.8902692},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570524345.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the design of a hybrid precoder for millimeter-wave MIMO systems. For the sake of concreteness, we consider an analog processing stage composed of a switching network with analog combining. The main contribution of this work consists of the proposal and evaluation of an optimization procedure based on a smart relaxation. The optimal hybrid precoder under a transmit power constraint is derived, after which the analog precoding matrix is binarized. Through an intuitive reasoning, we note that multiple solutions exist. Nevertheless, the (very) reduced computational complexity of the proposed optimization scheme makes it feasible for realistic implementations. Numerical results are reported to assess the performance of the proposed hybrid precoder design.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sound-based Distance Estimation for Indoor Navigation in the Presence of Ego Noise.\n \n \n \n \n\n\n \n Saqib, U.; and Jensen, J. R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Sound-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902694,\n  author = {U. Saqib and J. R. Jensen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Sound-based Distance Estimation for Indoor Navigation in the Presence of Ego Noise},\n  year = {2019},\n  pages = {1-5},\n  abstract = {An off-the-shelf drone for indoor operation would come with a variety of different sensors that are used concurrently to avoid collision with, e.g., walls, but these sensors are typically uni-directional and offers limited spatial awareness. In this paper, we propose a model-based technique for distance estimation using sound and its reflections. More specifically, the technique is estimating Time-of-Arrivals (TOAs) of the reflected sound that could infer knowledge about room geometry and help in the design of sound-based collision avoidance. Our proposed solution is thus based on probing a known sound into an environment and then estimating the TOAs of reflected sounds recorded by a single microphone. The simulated results show that our approach to estimating TOAs for reflector position estimation works up to a distance of at least 2 meters even with significant additive noise, e.g., drone ego noise.},\n  keywords = {collision avoidance;microphones;mobile robots;position control;remotely operated vehicles;time-of-arrival estimation;indoor navigation;off-the-shelf drone;indoor operation;time-of-arrivals;reflected sound;room geometry;sound-based collision avoidance;reflector position estimation;drone ego noise;sound-based distance estimation;Estimation;Microphones;Drones;Acoustics;Loudspeakers;Noise measurement;Sensors;robotics;room geometry estimation;acoustic impulse response},\n  doi = {10.23919/EUSIPCO.2019.8902694},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534429.pdf},\n}\n
\n
\n\n\n
\n An off-the-shelf drone for indoor operation comes with a variety of sensors that are used concurrently to avoid collisions with, e.g., walls, but these sensors are typically uni-directional and offer limited spatial awareness. In this paper, we propose a model-based technique for distance estimation using sound and its reflections. More specifically, the technique estimates the Time-of-Arrivals (TOAs) of the reflected sound, which can provide knowledge about room geometry and help in the design of sound-based collision avoidance. Our proposed solution is thus based on probing a known sound into an environment and then estimating the TOAs of reflected sounds recorded by a single microphone. The simulated results show that our approach to estimating TOAs for reflector position estimation works up to a distance of at least 2 meters even with significant additive noise, e.g., drone ego noise.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Probabilistic Method to Find and Visualize Distinct Regions in Protein Sequences.\n \n \n \n \n\n\n \n Hosseini, M.; Pratas, D.; and Pinho, A. J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902695,\n  author = {M. Hosseini and D. Pratas and A. J. Pinho},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Probabilistic Method to Find and Visualize Distinct Regions in Protein Sequences},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Studies on identification of species-specific protein regions, i.e., unique or highly dissimilar regions with respect to close species, will lead us to understanding of evolutionary traits, which can be related to novel functionalities or diseases. In this paper, we propose an alignment-free method to find and visualize distinct regions between two collections of proteins. We applied the proposed method, FRUIT, on multiple synthetic and real datasets to analyze its behavior when different rates of substitutional mutation occur. Testing with different k-mer sizes showed that the higher the mutation rate, the higher the relative uniqueness. We also employed FRUIT to find and visualize distinct regions in modern human proteins relatively to the proteins of Altai, Sidron and Vindija Neanderthals. The results show that four of the most distinct proteins, named ataxin -8, 60S ribosomal protein L26, NADH-ubiquinone oxidoreductase chain 3 and cytochrome c oxidase subunit 2 are involved in SCA8, DBAII, LS and MT-CID, and MT-C4D diseases, respectively. There is also Interferon-induced transmembrane protein 3, among others, which is part of the immune system. Besides, we report the most similar primate exomes to the found modern human one, in terms of identity, query cover and length of sequences. 
The reported results can give us insight to the evolution of proteomes.},\n  keywords = {biochemistry;biology computing;biomembranes;diseases;enzymes;food products;genetics;genomics;microorganisms;molecular biophysics;molecular configurations;proteomics;transmembrane protein;60S ribosomal protein L26;distinct proteins;modern human proteins;relative uniqueness;higher the mutation rate;k-mer sizes;alignment-free method;highly dissimilar regions;species-specific protein regions;protein sequences;probabilistic method;conductance 60.0 S;Proteins;Hash functions;Amino acids;Tools;Visualization;Probabilistic logic;Diseases;palaeoproteomics;Neanderthals;alignment-free method;relative uniqueness;Bloom filter},\n  doi = {10.23919/EUSIPCO.2019.8902695},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533089.pdf},\n}\n\n
\n
\n\n\n
\n Studies on the identification of species-specific protein regions, i.e., regions that are unique or highly dissimilar with respect to close species, lead to an understanding of evolutionary traits, which can be related to novel functionalities or diseases. In this paper, we propose an alignment-free method to find and visualize distinct regions between two collections of proteins. We applied the proposed method, FRUIT, to multiple synthetic and real datasets to analyze its behavior under different rates of substitutional mutation. Testing with different k-mer sizes showed that the higher the mutation rate, the higher the relative uniqueness. We also employed FRUIT to find and visualize distinct regions in modern human proteins relative to the proteins of the Altai, Sidron, and Vindija Neanderthals. The results show that four of the most distinct proteins, namely ataxin-8, 60S ribosomal protein L26, NADH-ubiquinone oxidoreductase chain 3, and cytochrome c oxidase subunit 2, are involved in the SCA8, DBAII, LS and MT-CID, and MT-C4D diseases, respectively. There is also Interferon-induced transmembrane protein 3, among others, which is part of the immune system. In addition, we report the primate exomes most similar to the modern human one, in terms of identity, query cover, and sequence length. The reported results can give us insight into the evolution of proteomes.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Signal Recovery from Phaseless Measurements of Spherical Harmonics Expansion.\n \n \n \n \n\n\n \n Bangun, A.; Behboodi, A.; and Mathar, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SignalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902696,\n  author = {A. Bangun and A. Behboodi and R. Mathar},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Signal Recovery from Phaseless Measurements of Spherical Harmonics Expansion},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this work, we study the problem of recovering spherical harmonics coefficients from phaseless measurements and evaluate the empirical performance of several well-known algorithms. Apart from trivial ambiguities that arise naturally from the properties of spherical harmonics, we will show that when a specific class of equiangular sampling patterns is chosen to construct the measurement matrix, another ambiguity appears as well. Nevertheless, we will numerically show that recovery can be achieved by carefully choosing the appropriate sampling patterns. Furthermore, an application of this work in phaseless spherical near-field antenna measurements will be addressed.},\n  keywords = {matrix algebra;signal sampling;equiangular sampling patterns;measurement matrix;appropriate sampling patterns;phaseless spherical near-field antenna measurements;signal recovery;phaseless measurements;spherical harmonics expansion;spherical harmonics coefficients;empirical performance;trivial ambiguities;Harmonic analysis;Phase measurement;Antenna measurements;Signal processing algorithms;Europe;Optical variables measurement;Phase retrieval;spherical harmonics;spherical near-field measurements},\n  doi = {10.23919/EUSIPCO.2019.8902696},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534145.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we study the problem of recovering spherical harmonics coefficients from phaseless measurements and evaluate the empirical performance of several well-known algorithms. Apart from trivial ambiguities that arise naturally from the properties of spherical harmonics, we show that when a specific class of equiangular sampling patterns is chosen to construct the measurement matrix, another ambiguity appears as well. Nevertheless, we show numerically that recovery can be achieved by carefully choosing appropriate sampling patterns. Furthermore, an application of this work to phaseless spherical near-field antenna measurements is addressed.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Novel Sensing Mechanism for Full-Duplex Secondary Users in Cognitive Radio.\n \n \n \n \n\n\n \n Mortada, M. R.; Nasser, A.; Mansour, A.; and Ya, K. C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"NovelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902697,\n  author = {M. R. Mortada and A. Nasser and A. Mansour and K. C. Ya},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Novel Sensing Mechanism for Full-Duplex Secondary Users in Cognitive Radio},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we present a new Transmitting-Receiving-Sensing (TRS) mechanism for full-duplex cognitive radio. Our proposed mechanism permits Secondary Users (SUs) to establish a bidirectional communication over the same frequency band while keeping aware of the Primary User (PU) activity status. The activity period of SU in our proposed mechanism is composed of two stages: In the first stage SU communicates in bidirectional way with his peer SU. At the second stage, one of the SUs becomes silent in order to do not disturb his peer, which performs a spectrum sensing and remains active at the same time using self-interference cancellation technique. The probability of collision related to our mechanism is derived as well as the probability of waste and the average throughput. 
Our simulation results show that the proposed mechanism can significantly decrease the probability of collision at low SNRp (Signal to Noise Ratio of PU at SU).},\n  keywords = {cognitive radio;cooperative communication;interference suppression;probability;radio spectrum management;radiofrequency interference;signal detection;TRS;full-duplex cognitive radio;bidirectional communication;frequency band;activity period;stage SU communicates;peer SU;spectrum sensing;self-interference cancellation technique;primary user activity status;transmitting-receiving-sensing mechanism;full-duplex secondary users;Sensors;Switches;Interference;Throughput;Europe;Signal processing;Cognitive radio;Cognitive Radio;Full-Duplex;Spectrum Sens-ing;Self-Interference Cancellation;Transmitting-Receiving-Sensing},\n  doi = {10.23919/EUSIPCO.2019.8902697},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533718.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present a new Transmitting-Receiving-Sensing (TRS) mechanism for full-duplex cognitive radio. Our proposed mechanism permits Secondary Users (SUs) to establish bidirectional communication over the same frequency band while remaining aware of the Primary User (PU) activity status. The activity period of an SU in our mechanism is composed of two stages: in the first stage, the SU communicates bidirectionally with its peer SU. In the second stage, one of the SUs becomes silent so as not to disturb its peer, which performs spectrum sensing while remaining active by using a self-interference cancellation technique. The probability of collision related to our mechanism is derived, as well as the probability of waste and the average throughput. Our simulation results show that the proposed mechanism can significantly decrease the probability of collision at low SNRp (Signal-to-Noise Ratio of the PU at the SU).\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Semi-Supervised Adaptive Learning for Decoding Movement Intent from Electromyograms.\n \n \n \n \n\n\n \n Dantas, H.; Mathews, V. J.; and Warren, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Semi-SupervisedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902698,\n  author = {H. Dantas and V. J. Mathews and D. Warren},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Semi-Supervised Adaptive Learning for Decoding Movement Intent from Electromyograms},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper presents an adaptive learning algorithm for predicting movement intent using electromyogram (EMG) signals and controlling a prosthetic arm. The adaptive decoder enables use of the prosthetic systems for long periods of time without the necessity to retrain them. The method of this paper employs a neural network-based decoder and we present a method to update its parameters during the operation phase. Initially, the decoder parameters are estimated during a training phase. During the normal operation, the parameters of the algorithm are updated in a semi-supervised manner based on a movement model. The results presented here, obtained from a single amputee subject, suggest that the approach of this paper improves long-term performance of the decoders over the current state-of-the-art with statistical significance.},\n  keywords = {decoding;electromyography;learning (artificial intelligence);medical control systems;medical signal processing;neural nets;neurophysiology;prosthetics;semisupervised adaptive learning;movement intent;electromyograms;adaptive learning algorithm;electromyogram signals;EMG;prosthetic arm;adaptive decoder;prosthetic systems;neural network-based decoder;operation phase;decoder parameters;training phase;normal operation;semisupervised manner;movement model;decoders;Decoding;Electromyography;Trajectory;Adaptation models;Training;Kinematics;Mathematical model;Kalman Filter;Neural Networks;Markov Decision Processes;Movement Intent Decoder;Semi-supervised Learning;Online Learning},\n  doi = {10.23919/EUSIPCO.2019.8902698},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = 
{https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531096.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents an adaptive learning algorithm for predicting movement intent using electromyogram (EMG) signals and controlling a prosthetic arm. The adaptive decoder enables use of prosthetic systems for long periods of time without the need to retrain them. The method employs a neural network-based decoder, and we present a method to update its parameters during the operation phase. Initially, the decoder parameters are estimated during a training phase. During normal operation, the parameters of the algorithm are updated in a semi-supervised manner based on a movement model. The results presented here, obtained from a single amputee subject, suggest that this approach improves the long-term performance of the decoders over the current state-of-the-art with statistical significance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Breathing Rate Complexity Features for “In-the-Wild” Stress and Anxiety Measurement.\n \n \n \n\n\n \n Tiwari, A.; Narayanan, S.; and Falk, T. H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902700,\n  author = {A. Tiwari and S. Narayanan and T. H. Falk},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Breathing Rate Complexity Features for “In-the-Wild” Stress and Anxiety Measurement},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Features extracted from respiratory activity signals have been shown to carry information about mental states such as anxiety and mental stress. Such findings, however, are based on studies conducted mostly in controlled laboratory environments with artificially-induced psychological responses. While this assures that high quality data are collected, the amount of data is limited and the transferability of the findings to more ecologically-appropriate natural settings (i.e., “in-the-wild”) remains unknown. In this paper, we propose new non-linear complexity measures computed from four different respiration activity time series (i.e., inter-breath interval, inhale-to-exhale ratio, inhale/exhale amplitude envelope, and interbreath difference) and show their discriminatory power for anxiety and stress monitoring in the workplace. The new features are tested on a dataset collected from 200 hospital workers (nurses and staff) during their normal work shifts. 
The proposed features are shown to be complementary to conventional measures of breathing rate and depth.},\n  keywords = {diseases;feature extraction;medical signal processing;patient care;pneumodynamics;psychology;time series;artificially-induced psychological responses;ecologically-appropriate natural settings;nonlinear complexity measures;inter-breath interval;stress monitoring;respiratory activity signals;mental states;breathing rate complexity features;anxiety measurement;respiration activity time series;in-the-wild stress;feature extraction;Feature extraction;Stress;Benchmark testing;Biomedical measurement;Stress measurement;Time series analysis;Entropy},\n  doi = {10.23919/EUSIPCO.2019.8902700},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Features extracted from respiratory activity signals have been shown to carry information about mental states such as anxiety and mental stress. Such findings, however, are based on studies conducted mostly in controlled laboratory environments with artificially-induced psychological responses. While this ensures that high-quality data are collected, the amount of data is limited and the transferability of the findings to more ecologically-appropriate natural settings (i.e., “in-the-wild”) remains unknown. In this paper, we propose new non-linear complexity measures computed from four different respiration activity time series (i.e., inter-breath interval, inhale-to-exhale ratio, inhale/exhale amplitude envelope, and inter-breath difference) and show their discriminatory power for anxiety and stress monitoring in the workplace. The new features are tested on a dataset collected from 200 hospital workers (nurses and staff) during their normal work shifts. The proposed features are shown to be complementary to conventional measures of breathing rate and depth.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n ExcitNet Vocoder: A Neural Excitation Model for Parametric Speech Synthesis Systems.\n \n \n \n \n\n\n \n Song, E.; Byun, K.; and Kang, H. -G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ExcitNetPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902701,\n  author = {E. Song and K. Byun and H. -G. Kang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {ExcitNet Vocoder: A Neural Excitation Model for Parametric Speech Synthesis Systems},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper proposes a WaveNet-based neural excitation model (ExcitNet) for statistical parametric speech synthesis systems. Conventional WaveNet-based neural vocoding systems significantly improve the perceptual quality of synthesized speech by statistically generating a time sequence of speech waveforms through an auto-regressive framework. However, they often suffer from noisy outputs because of the difficulties in capturing the complicated time-varying nature of speech signals. To improve modeling efficiency, the proposed ExcitNet vocoder employs an adaptive inverse filter to decouple spectral components from the speech signal. The residual component, i.e. excitation signal, is then trained and generated within the WaveNet framework. In this way, the quality of the synthesized speech signal can be further improved since the spectral component is well represented by a deep learning framework and, moreover, the residual component is efficiently generated by the WaveNet framework. 
Experimental results show that the proposed ExcitNet vocoder, trained both speaker-dependently and speaker-independently, outperforms traditional linear prediction vocoders and similarly configured conventional WaveNet vocoders.},\n  keywords = {adaptive filters;autoregressive processes;learning (artificial intelligence);neural nets;speech coding;speech synthesis;statistical analysis;vocoders;auto-regressive framework;ExcitNet vocoder;decouple spectral components;residual component;excitation signal;WaveNet framework;synthesized speech signal;spectral component;deep learning framework;traditional linear prediction vocoders;conventional WaveNet vocoders;WaveNet-based neural excitation model;statistical parametric speech synthesis systems;perceptual quality;time sequence;speech waveforms;conventional wavenet-based neural vocoding systems;speech signals;adaptive inverse filter;Vocoders;Acoustics;Training;Speech synthesis;Convolution;Linguistics;Feature extraction;Speech synthesis;WaveNet;ExcitNet},\n  doi = {10.23919/EUSIPCO.2019.8902701},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570527986.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a WaveNet-based neural excitation model (ExcitNet) for statistical parametric speech synthesis systems. Conventional WaveNet-based neural vocoding systems significantly improve the perceptual quality of synthesized speech by statistically generating a time sequence of speech waveforms through an auto-regressive framework. However, they often suffer from noisy outputs because of the difficulties in capturing the complicated time-varying nature of speech signals. To improve modeling efficiency, the proposed ExcitNet vocoder employs an adaptive inverse filter to decouple spectral components from the speech signal. The residual component, i.e., the excitation signal, is then trained and generated within the WaveNet framework. In this way, the quality of the synthesized speech signal can be further improved, since the spectral component is well represented by the deep learning framework and, moreover, the residual component is efficiently generated by the WaveNet framework. Experimental results show that the proposed ExcitNet vocoder, trained both speaker-dependently and speaker-independently, outperforms traditional linear prediction vocoders and similarly configured conventional WaveNet vocoders.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Low-Computation-Cycle Design of Input-Decimation Technique for RIDFT Algorithm.\n \n \n \n \n\n\n \n Wu, C. -F.; Chen, C. -H.; and Shiue, M. -T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902702,\n  author = {C. -F. Wu and C. -H. Chen and M. -T. Shiue},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Low-Computation-Cycle Design of Input-Decimation Technique for RIDFT Algorithm},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, a low-computation-cycle and energy-efficient design of input-decimation technique for the recursive inverse discrete Fourier transform (RIDFT) algorithm is proposed for the high-speed broadband communication systems. It is crucial that the input-decimation technique is presented to decrease the number of input sequences for the recursive filter so that the computation cycle of RIDFT can be shortened to meet the computing time requirement (3.6 μs). Therefore, the input-decimation RIDFT algorithm is able to carry out at least 55.5% reduction of the total computation cycles compared with the considered algorithms. Holding the advantages of input-decimation technique, the computational complexities of the real-multiplication and -addition are reduced to 41.3% and 22.2%, respectively. Finally, the physical implementation results show that the core area is 0.37× 0.37mm2 with 0.18 μm CMOS process. The power consumption is 5.16 mW with the supply voltage of 1.8 V and the operating clock of 40 MHz. 
The proposed design can achieve 258 million of computational efficiency per unit area (CEUA) and really outperform the previous works.},\n  keywords = {CMOS integrated circuits;discrete Fourier transforms;inverse transforms;receivers;input-decimation technique;computational complexities;input sequences;computing time requirement;input-decimation RIDFT algorithm;low-computation-cycle;computational efficiency per unit area;CEUA;recursive inverse discrete Fourier transform algorithm;CMOS process;time 3.6 mus;size 0.18 mum;power 5.16 mW;voltage 1.8 V;frequency 40.0 MHz;Signal processing algorithms;Hardware;Kernel;Array signal processing;Computational complexity;OFDM;Discrete Fourier transforms;recursive inverse discrete Fourier transform (RIDFT);orthogonal frequency-division multiplexing (OFDM)},\n  doi = {10.23919/EUSIPCO.2019.8902702},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533487.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a low-computation-cycle and energy-efficient design of an input-decimation technique for the recursive inverse discrete Fourier transform (RIDFT) algorithm is proposed for high-speed broadband communication systems. The input-decimation technique decreases the number of input sequences for the recursive filter so that the computation cycle of the RIDFT can be shortened to meet the computing-time requirement (3.6 μs). As a result, the input-decimation RIDFT algorithm achieves at least a 55.5% reduction in total computation cycles compared with the considered algorithms. Thanks to the input-decimation technique, the computational complexities of the real multiplications and additions are reduced to 41.3% and 22.2%, respectively. Finally, the physical implementation results show that the core area is 0.37 × 0.37 mm² in a 0.18 μm CMOS process. The power consumption is 5.16 mW with a supply voltage of 1.8 V and an operating clock of 40 MHz. The proposed design achieves a computational efficiency per unit area (CEUA) of 258 million and clearly outperforms previous works.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Context-Aware Neural Voice Activity Detection Using Auxiliary Networks for Phoneme Recognition, Speech Enhancement and Acoustic Scene Classification.\n \n \n \n \n\n\n \n Masumura, R.; Matsui, K.; Koizumi, Y.; Fukutomi, T.; Oba, T.; and Aono, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Context-AwarePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902703,\n  author = {R. Masumura and K. Matsui and Y. Koizumi and T. Fukutomi and T. Oba and Y. Aono},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Context-Aware Neural Voice Activity Detection Using Auxiliary Networks for Phoneme Recognition, Speech Enhancement and Acoustic Scene Classification},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper proposes a novel fully neural network based voice activity detection (VAD) method that estimates whether each speech segment is speech or non-speech even in very low signal-to-noise ratio (SNR) environments. Our innovation is to improve context-awareness of speech variability by introducing multiple auxiliary networks into the neural VAD framework. While previous studies reported that phonetic-aware auxiliary features extracted from a phoneme recognition network can improve VAD performance, none examined other effective auxiliary features for enhancing noise robustness. Thus, this paper present a neural VAD that uses auxiliary features extracted from not only the phoneme recognition network but also a speech enhancement network and an acoustic scene classification network. The last two networks are expected to improve context-awareness even in extremely low SNR environments since they can extract de-noised speech awareness and noisy environment awareness. In addition, we expect that combining these multiple auxiliary features yield synergistic improvements in VAD performance. 
Experiments verify the superiority of the proposed method in very low SNR environments.},\n  keywords = {acoustic signal processing;feature extraction;neural nets;signal classification;signal denoising;speech enhancement;speech recognition;ubiquitous computing;voice activity detection;context-awareness;speech variability;multiple auxiliary networks;neural VAD framework;phonetic-aware auxiliary features;phoneme recognition network;VAD performance;effective auxiliary features;noise robustness;speech enhancement network;acoustic scene classification network;extremely low SNR environments;noisy environment awareness;multiple auxiliary features;context-aware neural voice activity detection;speech segment;low signal-to-noise ratio;fully neural network based voice activity detection method;denoised speech awareness;Acoustics;Feature extraction;Speech enhancement;Speech recognition;Signal to noise ratio;Neural networks;Voice activity detection},\n  doi = {10.23919/EUSIPCO.2019.8902703},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533421.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a novel fully neural network based voice activity detection (VAD) method that estimates whether each speech segment is speech or non-speech even in very low signal-to-noise ratio (SNR) environments. Our innovation is to improve context-awareness of speech variability by introducing multiple auxiliary networks into the neural VAD framework. While previous studies reported that phonetic-aware auxiliary features extracted from a phoneme recognition network can improve VAD performance, none examined other effective auxiliary features for enhancing noise robustness. Thus, this paper presents a neural VAD that uses auxiliary features extracted not only from the phoneme recognition network but also from a speech enhancement network and an acoustic scene classification network. The last two networks are expected to improve context-awareness even in extremely low SNR environments, since they can extract de-noised speech awareness and noisy environment awareness. In addition, we expect that combining these multiple auxiliary features yields synergistic improvements in VAD performance. Experiments verify the superiority of the proposed method in very low SNR environments.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Novel Compressive Sensing Scheme under the Variational Bayesian Framework.\n \n \n \n \n\n\n \n Oikonomou, V. P.; Nikolopoulos, S.; and Kompatsiaris, I.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902704,\n  author = {V. P. Oikonomou and S. Nikolopoulos and I. Kompatsiaris},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Novel Compressive Sensing Scheme under the Variational Bayesian Framework},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this work we provide a novel algorithm for Bayesian Compressive Sensing. The proposed algorithm is considered for signals that features two properties: grouping structure and sparsity between groups. The Compressive Sensing problem is formulated using the Bayesian linear model. Furthermore, the sparsity of the unknown signal is modeled by a parameterized sparse prior while the inference procedure is conducted using the Variational Bayesian framework. Experimental results, using 1D and 2D signals, demonstrate that the proposed algorithm provides superior performance compared to state-of-the-art Compressive Sensing reconstruction algorithms.},\n  keywords = {Bayes methods;compressed sensing;grouping structure;Bayesian linear model;unknown signal;Variational Bayesian framework;Bayesian Compressive Sensing;Compressive Sensing scheme;Signal processing algorithms;Bayes methods;Image reconstruction;Noise measurement;Compressed sensing;Measurement uncertainty;Gaussian distribution;compressed sensing;group sparsity;parameterized prior;variational bayesian},\n  doi = {10.23919/EUSIPCO.2019.8902704},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532416.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we present a novel algorithm for Bayesian Compressive Sensing. The proposed algorithm is designed for signals that feature two properties: grouping structure and sparsity between groups. The Compressive Sensing problem is formulated using the Bayesian linear model. Furthermore, the sparsity of the unknown signal is modeled by a parameterized sparse prior, while the inference procedure is conducted using the Variational Bayesian framework. Experimental results, using 1D and 2D signals, demonstrate that the proposed algorithm provides superior performance compared to state-of-the-art Compressive Sensing reconstruction algorithms.\n
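The recovery problem addressed above is the standard Compressive Sensing setup y = Φx + n with a sparse x. The paper's variational Bayesian solver with group sparsity is not reproduced here; as a rough illustration of the same recovery task, the sketch below uses plain ISTA (iterative soft-thresholding), a generic non-Bayesian baseline. All names and parameters are illustrative.

```python
import numpy as np

def ista(y, Phi, lam=0.05, n_iter=1000):
    """Iterative soft-thresholding for min 0.5*||y - Phi x||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + step * Phi.T @ (y - Phi @ x)                       # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)   # soft threshold
    return x

# Toy problem: recover a 5-sparse signal from 80 random projections
rng = np.random.default_rng(0)
n, m, k = 128, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true
x_hat = ista(y, Phi)
```

A Bayesian treatment would additionally return posterior uncertainty over x, which a point estimator like ISTA cannot provide.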
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An NMF-Based Approach for Hyperspectral Unmixing Using a New Multiplicative-tuning Linear Mixing Model to Address Spectral Variability.\n \n \n \n \n\n\n \n Benhalouche, F. Z.; Karoui, M. S.; and Deville, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902707,\n  author = {F. Z. Benhalouche and M. S. Karoui and Y. Deville},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {An NMF-Based Approach for Hyperspectral Unmixing Using a New Multiplicative-tuning Linear Mixing Model to Address Spectral Variability},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this work, a new approach is presented for unmixing remote sensing hyperspectral data. This approach considers a linear mixing model that is introduced in these investigations to handle the spectral variability phenomenon, which is usually observed in the considered data and which is here modeled in a multiplicative form. The proposed algorithm, which is based on a pixel-by-pixel non-negative matrix factorization method, uses multiplicative update rules for minimizing a cost function that takes into account the introduced linear mixing model. Tests, by means of realistic synthetic data, are conducted to evaluate the performance of the proposed approach, and the obtained results are compared to those of methods from the literature. These test results show that the proposed approach outperforms all other tested methods.},\n  keywords = {geophysical image processing;hyperspectral imaging;remote sensing;NMF-based approach;hyperspectral unmixing;spectral variability;unmixing remote sensing hyperspectral data;multiplicative form;pixel-by-pixel nonnegative matrix factorization method;multiplicative update rules;realistic synthetic data;multiplicative-tuning linear mixing model;Data models;Hyperspectral imaging;Cost function;Reflectivity;Standards;Hyperspectral imaging;linear spectral unmixing;spectral variability;non-negative matrix factorization;multiplicative update rules},\n  doi = {10.23919/EUSIPCO.2019.8902707},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533687.pdf},\n}\n\n
\n
\n\n\n
\n In this work, a new approach is presented for unmixing remote sensing hyperspectral data. This approach considers a linear mixing model, introduced here to handle the spectral variability phenomenon that is usually observed in such data and is modeled in a multiplicative form. The proposed algorithm, which is based on a pixel-by-pixel non-negative matrix factorization method, uses multiplicative update rules to minimize a cost function that takes the introduced linear mixing model into account. Tests on realistic synthetic data are conducted to evaluate the performance of the proposed approach, and the obtained results are compared to those of methods from the literature. These test results show that the proposed approach outperforms all other tested methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Applications of Projected Belief Networks (PBN).\n \n \n \n \n\n\n \n Baggenstoss, P. M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ApplicationsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902708,\n  author = {P. M. Baggenstoss},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Applications of Projected Belief Networks (PBN)},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The projected belief network (PBN) is a layered generative network, with tractable likelihood function (LF) that can be trained by gradient ascent as a probability density function (PDF) estimator and classifier. The PBN is derived from a feed-forward neural network (FF-NN) by finding the generative network that implements the probability distribution with maximum entropy (MaxEnt) consistent with the knowledge of the distribution at the output of the FF-NN. The FF-NN, from which the PBN is derived, is a complementary feature extractor that exactly recovers the PBN's hidden variables. This paper presents a multi-layer PBN and a deterministic PBN that are tested using a subset of MNIST data set. When the deterministic PBN is combined with the dual FF-NN, it forms an auto-encoder that achieves much lower reconstruction error on testing data than the equivalent conventional network and functions significantly better as a classifier.},\n  keywords = {belief networks;feature extraction;feedforward neural nets;gradient methods;learning (artificial intelligence);maximum entropy methods;maximum likelihood estimation;probability;projected belief networks;layered generative network;tractable likelihood function;probability density function estimator;feed-forward neural network;probability distribution;PBN's hidden variables;multilayer PBN;deterministic PBN;dual FF-NN;equivalent conventional network;maximum entropy;MNIST data set;Neural networks;Feature extraction;Data models;Europe;Signal processing;Entropy;Stochastic processes},\n  doi = {10.23919/EUSIPCO.2019.8902708},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529856.pdf},\n}\n\n
\n
\n\n\n
\n The projected belief network (PBN) is a layered generative network with a tractable likelihood function (LF) that can be trained by gradient ascent as a probability density function (PDF) estimator and classifier. The PBN is derived from a feed-forward neural network (FF-NN) by finding the generative network that implements the probability distribution with maximum entropy (MaxEnt) consistent with the knowledge of the distribution at the output of the FF-NN. The FF-NN, from which the PBN is derived, is a complementary feature extractor that exactly recovers the PBN's hidden variables. This paper presents a multi-layer PBN and a deterministic PBN that are tested using a subset of the MNIST data set. When the deterministic PBN is combined with the dual FF-NN, it forms an auto-encoder that achieves much lower reconstruction error on testing data than the equivalent conventional network and functions significantly better as a classifier.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Exact Multiplicative Factor Updates for Convolutional Beta-NMF in 2D.\n \n \n \n \n\n\n \n Villasana T., P. J.; and Gorlow, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ExactPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902709,\n  author = {P. J. {Villasana T.} and S. Gorlow},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Exact Multiplicative Factor Updates for Convolutional Beta-NMF in 2D},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper we extend the convolutional NMF with the beta-divergence as cost function to two dimensions and derive exact multiplicative updates for its factors. Our updates correct and generalize the nonnegative matrix factor deconvolution, as proposed by Schmidt and Mφrup. We prove that the cost is non-increasing under the new updates for beta between 0 and 2. By numerical simulation we confirm that both the cost's mean and standard deviation are monotonically decreasing in a consistent manner across the most common values for beta.},\n  keywords = {convolution;deconvolution;matrix decomposition;exact multiplicative factor updates;convolutional beta-NMF;2D;nonnegative matrix factor deconvolution;cost function;beta-divergence;Convolution;Two dimensional displays;Europe;Cost function;Convergence;Deconvolution;Nonnegative matrix factorization;multiplicative updates;beta-divergence;convolution;2D},\n  doi = {10.23919/EUSIPCO.2019.8902709},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531984.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we extend the convolutional NMF with the beta-divergence as cost function to two dimensions and derive exact multiplicative updates for its factors. Our updates correct and generalize the nonnegative matrix factor deconvolution, as proposed by Schmidt and Mørup. We prove that the cost is non-increasing under the new updates for beta between 0 and 2. By numerical simulation we confirm that both the cost's mean and standard deviation are monotonically decreasing in a consistent manner across the most common values for beta.\n
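The multiplicative-update mechanics the paper generalizes can be illustrated with ordinary (non-convolutional) beta-NMF, whose updates are likewise known to leave the beta-divergence non-increasing for beta in [1, 2]. A minimal NumPy sketch, not the paper's 2D convolutional updates; function names are illustrative:

```python
import numpy as np

def beta_div(V, Vh, beta):
    # beta-divergence: beta=2 Euclidean, beta=1 KL, beta=0 Itakura-Saito
    if beta == 1:
        return float(np.sum(V * np.log(V / Vh) - V + Vh))
    if beta == 0:
        return float(np.sum(V / Vh - np.log(V / Vh) - 1))
    return float(np.sum((V**beta + (beta - 1) * Vh**beta
                         - beta * V * Vh**(beta - 1)) / (beta * (beta - 1))))

def nmf_beta(V, rank, beta=1.0, n_iter=100, seed=0):
    """Standard multiplicative updates for V ~ W @ H under the beta-divergence."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + 0.1
    H = rng.random((rank, T)) + 0.1
    cost = []
    for _ in range(n_iter):
        Vh = W @ H
        H *= (W.T @ (Vh ** (beta - 2) * V)) / (W.T @ Vh ** (beta - 1))
        Vh = W @ H
        W *= ((Vh ** (beta - 2) * V) @ H.T) / (Vh ** (beta - 1) @ H.T)
        cost.append(beta_div(V, W @ H, beta))
    return W, H, cost

rng = np.random.default_rng(1)
V = rng.random((20, 30)) + 0.1           # strictly positive toy data
W, H, cost = nmf_beta(V, rank=4, beta=1.0)
```

The convolutional 2D case replaces the single W @ H product with a sum of shifted factor products, which is what the paper's exact updates account for.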
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Clean speech AE-DNN PSD constraint for MCLP based reverberant speech enhancement.\n \n \n \n \n\n\n \n Chetupalli, S. R.; and Sreenivas, T. V.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CleanPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902710,\n  author = {S. R. Chetupalli and T. V. Sreenivas},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Clean speech AE-DNN PSD constraint for MCLP based reverberant speech enhancement},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Blind inverse filtering using multi-channel linear prediction (MCLP) in short-time Fourier transform (STFT) domain is an effective means to enhance reverberant speech. Traditionally, a speech power spectral density (PSD) weighted prediction error (WPE) minimization approach is used to estimate the prediction filters, independently in each frequency bin. The method is sensitive to the estimation of desired signal PSD. In this paper, we propose an auto-encoder (AE) deep neural network (DNN) based constraint for the estimation of desired signal PSD. An auto encoder trained on clean speech STFT coefficients is used as the prior to non-linearly map the natural speech PSD. We explore two different architectures for the auto-encoder: (i) fully-connected (FC) feed-forward, and (ii) recurrent long short-term memory (LSTM) architecture. 
Experiments using real room impulse responses show that the LSTM-DNN based PSD estimate performs better than the traditional methods for reverberant signal enhancement.},\n  keywords = {feedforward neural nets;filtering theory;Fourier transforms;minimisation;recurrent neural nets;reverberation;speech enhancement;transient response;reverberant signal enhancement;clean speech AE-DNN PSD constraint;MCLP based reverberant speech enhancement;blind inverse filtering;multichannel linear prediction;speech power spectral density weighted prediction error minimization approach;prediction filters;desired signal PSD;auto-encoder deep neural network based constraint;clean speech STFT coefficients;natural speech PSD;short-term memory architecture;LSTM-DNN;PSD estimate;Estimation;Microphones;Speech enhancement;Predictive models;Training;Europe;Dereverberation;Multi channel linear prediction;Auto encoder;Deep neural network;prior},\n  doi = {10.23919/EUSIPCO.2019.8902710},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529503.pdf},\n}\n\n
\n
\n\n\n
\n Blind inverse filtering using multi-channel linear prediction (MCLP) in the short-time Fourier transform (STFT) domain is an effective means to enhance reverberant speech. Traditionally, a speech power spectral density (PSD) weighted prediction error (WPE) minimization approach is used to estimate the prediction filters, independently in each frequency bin. The method is sensitive to the estimation of the desired signal PSD. In this paper, we propose an auto-encoder (AE) deep neural network (DNN) based constraint for the estimation of the desired signal PSD. An auto-encoder trained on clean speech STFT coefficients is used as the prior to non-linearly map the natural speech PSD. We explore two different architectures for the auto-encoder: (i) a fully-connected (FC) feed-forward architecture, and (ii) a recurrent long short-term memory (LSTM) architecture. Experiments using real room impulse responses show that the LSTM-DNN based PSD estimate performs better than traditional methods for reverberant signal enhancement.\n
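For context, the classical WPE iteration the paper builds on can be sketched per frequency bin: the desired-signal PSD λ is re-estimated from |d|² on each pass, which is exactly the quantity the paper instead constrains with an AE-DNN. A simplified single-channel sketch with illustrative parameter names, not the authors' implementation:

```python
import numpy as np

def wpe_single_bin(x, order=8, delay=3, n_iter=3, eps=1e-8):
    """Simplified single-channel WPE for one frequency bin of an STFT, x: (T,) complex."""
    T = len(x)
    # Delayed-observation matrix X_tilde: column k holds x delayed by (delay + k) frames
    Xt = np.zeros((T, order), dtype=complex)
    for k in range(order):
        tau = delay + k
        Xt[tau:, k] = x[:T - tau]
    d = x.copy()
    for _ in range(n_iter):
        lam = np.maximum(np.abs(d) ** 2, eps)        # desired-signal PSD estimate
        Xw = Xt / lam[:, None]                        # PSD-weighted observations
        R = Xw.conj().T @ Xt                          # weighted correlation matrix
        r = Xw.conj().T @ x
        g = np.linalg.solve(R + eps * np.eye(order), r)  # prediction filter
        d = x - Xt @ g                                # subtract predicted late reverb
    return d

rng = np.random.default_rng(4)
x = rng.standard_normal(200) + 1j * rng.standard_normal(200)
d = wpe_single_bin(x)
```

The prediction delay leaves the first `delay` frames (and early reflections) untouched; only the late, linearly predictable reverberant tail is removed.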
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Maximum-likelihood DOA estimation at low SNR in Laplace-like noise.\n \n \n \n \n\n\n \n Mecklenbräuker, C. F.; and Gerstoft, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Maximum-likelihoodPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902711,\n  author = {C. F. Mecklenbräuker and P. Gerstoft},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Maximum-likelihood DOA estimation at low SNR in Laplace-like noise},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We consider the estimation of the direction of arrivals (DOAs) of plane waves hidden in additive, mutually independent, complex circularly symmetric noise at very low signal to noise ratio (SNR). The maximum-likelihood estimator (ML) for the DOAs of deterministic signals carried by plane waves hidden in noise with a Laplace-like distribution is derived. This leads to a DOA estimator based on the Least Absolute Deviation (LAD) criterion. We prove analytically that a weighted phase-only beamformer (which evaluates the scalar product between the steering vector and the complex signum function of the observed array data) is an approximation to a beamformer based on the Least Absolute Deviation (LAD) criterion. The root mean squared error of DOA estimators versus SNR is compared in a simulation study: the conventional beamformer (CBF), the weighted phase-only beamformer, and sparse Bayesian learning (SBL3). 
This shows show that the ML estimator and weighted phase-only beamformer are well performing DOA estimators at low SNR for additive homoscedastic and heteroscedastic Gaussian noise, as well as Laplace-like noise.},\n  keywords = {array signal processing;direction-of-arrival estimation;Gaussian noise;maximum likelihood estimation;mean square error methods;maximum-likelihood DOA estimation;plane waves;complex circularly symmetric noise;maximum-likelihood estimator;Laplace-like distribution;DOA estimator;ML estimator;additive homoscedastic Gaussian noise;heteroscedastic Gaussian noise;least absolute deviation criterion;Direction-of-arrival estimation;Signal to noise ratio;Arrays;Maximum likelihood estimation;Additives;Gaussian noise;Maximum likelihood detection},\n  doi = {10.23919/EUSIPCO.2019.8902711},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531351.pdf},\n}\n\n
\n
\n\n\n
\n We consider the estimation of the directions of arrival (DOAs) of plane waves hidden in additive, mutually independent, complex circularly symmetric noise at very low signal-to-noise ratio (SNR). The maximum-likelihood (ML) estimator for the DOAs of deterministic signals carried by plane waves hidden in noise with a Laplace-like distribution is derived. This leads to a DOA estimator based on the Least Absolute Deviation (LAD) criterion. We prove analytically that a weighted phase-only beamformer (which evaluates the scalar product between the steering vector and the complex signum function of the observed array data) is an approximation to a beamformer based on the LAD criterion. The root mean squared error of DOA estimators versus SNR is compared in a simulation study: the conventional beamformer (CBF), the weighted phase-only beamformer, and sparse Bayesian learning (SBL3). The results show that the ML estimator and the weighted phase-only beamformer are well-performing DOA estimators at low SNR for additive homoscedastic and heteroscedastic Gaussian noise, as well as Laplace-like noise.\n
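The phase-only idea described above is easy to sketch for a uniform linear array: replace the raw snapshots by their elementwise complex signum before correlating with the steering vectors. The toy, noise-free comparison below omits the paper's weighting and uses illustrative names and parameters:

```python
import numpy as np

def steering(theta_deg, n_sensors, spacing=0.5):
    # ULA steering vector; sensor spacing in wavelengths
    n = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * n * np.sin(np.deg2rad(theta_deg)))

def csign(x, eps=1e-12):
    # elementwise complex signum x / |x|
    return x / np.maximum(np.abs(x), eps)

def doa_spectrum(X, grid_deg, phase_only=False):
    # X: (sensors, snapshots); beamformer power over candidate angles
    Z = csign(X) if phase_only else X
    A = np.stack([steering(t, X.shape[0]) for t in grid_deg])
    return np.mean(np.abs(A.conj() @ Z) ** 2, axis=1)

# One plane wave from 20 degrees, 8 sensors, 50 snapshots (noise-free toy)
rng = np.random.default_rng(2)
s = rng.standard_normal(50) + 1j * rng.standard_normal(50)
X = steering(20.0, 8)[:, None] * s[None, :]
grid = np.arange(-90.0, 90.5, 1.0)
est_cbf = grid[np.argmax(doa_spectrum(X, grid))]
est_po = grid[np.argmax(doa_spectrum(X, grid, phase_only=True))]
```

Because the signum discards amplitude, the phase-only scan is insensitive to heavy-tailed amplitude outliers, which is the intuition behind its robustness in Laplace-like noise.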
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n End-to-end Audio Classification with Small Datasets – Making It Work.\n \n \n \n\n\n \n Schmitt, M.; and Schuller, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902712,\n  author = {M. Schmitt and B. Schuller},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {End-to-end Audio Classification with Small Datasets – Making It Work},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Deep end-to-end learning is a promising approach for many types of audio classification tasks. However, in fields such as health care and medical diagnosis, training data can be scarce, which makes training a neural network from the raw waveform to the target a challenge. In this work, we focus on a public dataset of human snore sounds, categorised into four classes, where one particular class has only a few training samples. We emphasise the pitfalls that need to be taken into account when working with such data and propose an end-to-end model providing a performance similar to that of other deep and non-deep approaches. Furthermore, we show that a model using only convolutional layers outperforms a model employing also recurrent layers.},\n  keywords = {audio signal processing;learning (artificial intelligence);recurrent neural nets;signal classification;end-to-end audio classification;deep end-to-end learning;health care;medical diagnosis;training data;neural network training;raw waveform;public dataset;human snore sounds;convolutional layers;recurrent layers;Convolution;Training;Task analysis;Feature extraction;Neural networks;Acoustics;Training data;End-to-end learning;audio classification;representation learning;snore sounds;scarce data},\n  doi = {10.23919/EUSIPCO.2019.8902712},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Deep end-to-end learning is a promising approach for many types of audio classification tasks. However, in fields such as health care and medical diagnosis, training data can be scarce, which makes training a neural network from the raw waveform to the target a challenge. In this work, we focus on a public dataset of human snore sounds, categorised into four classes, where one particular class has only a few training samples. We emphasise the pitfalls that need to be taken into account when working with such data and propose an end-to-end model providing a performance similar to that of other deep and non-deep approaches. Furthermore, we show that a model using only convolutional layers outperforms a model employing also recurrent layers.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Single ECG Lead-Based Oscillation Index for the Quantification of Periodic Breathing in Severe Heart Failure Patients.\n \n \n \n \n\n\n \n Guyot, P.; Djermoune, E. -.; Bastogne, T.; and Chenuel, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902714,\n  author = {P. Guyot and E. -H. Djermoune and T. Bastogne and B. Chenuel},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Single ECG Lead-Based Oscillation Index for the Quantification of Periodic Breathing in Severe Heart Failure Patients},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Periodic breathing is a sleep-disordered breathing characterized by the alternation of central hypopneas/apneas and hyperventilation, and is associated with increased mortality in patients with severe heart failure in most studies. In this paper, we present a new strategy to detect mild to severe patterns of periodic breathing using a single electrocardiogram signal in patients with severe heart failure. We first compute three time series, extracted from the ECG signal namely Heart-Rate Variability, R-Wave Amplitude and Mean Cardiac Axis. Then, these series are used to estimate an oscillation index that can quantify periodic breathing through time and a one-minute decision is made using an experimental thresholding to decide whether periodic breathing is absent or present. Eight patients with normal to severe periodic breathing are selected to test our method. 
The results obtained are compared to those performed by experts.},\n  keywords = {diseases;electrocardiography;medical disorders;medical signal processing;pneumodynamics;signal classification;sleep;time series;mean cardiac axis;R-wave amplitude;heart-rate variability;single electrocardiogram signal;sleep-disordered breathing;heart failure patients;single ECG lead-based oscillation index;periodic breathing;Oscillators;Indexes;Electrocardiography;Ventilation;Heart;Sleep apnea;Estimation;Periodic breathing;Sinusoidal model;Cheyne-Stokes respiration;Electrocardiogram;Heart failure},\n  doi = {10.23919/EUSIPCO.2019.8902714},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532498.pdf},\n}\n\n
\n
\n\n\n
\n Periodic breathing is a sleep-disordered breathing pattern characterized by the alternation of central hypopneas/apneas and hyperventilation, and is associated with increased mortality in patients with severe heart failure in most studies. In this paper, we present a new strategy to detect mild to severe patterns of periodic breathing using a single electrocardiogram signal in patients with severe heart failure. We first compute three time series extracted from the ECG signal, namely Heart-Rate Variability, R-Wave Amplitude and Mean Cardiac Axis. These series are then used to estimate an oscillation index that quantifies periodic breathing through time, and a one-minute decision is made using experimental thresholding to decide whether periodic breathing is absent or present. Eight patients with normal to severe periodic breathing are selected to test our method. The results obtained are compared with those produced by experts.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the Relation Between DOA-Vector Eigenbeam ESPRIT and Subspace Pseudointensity-Vector.\n \n \n \n \n\n\n \n Herzog, A.; and Habets, E. A. P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902715,\n  author = {A. Herzog and E. A. P. Habets},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {On the Relation Between DOA-Vector Eigenbeam ESPRIT and Subspace Pseudointensity-Vector},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Eigenbeam ESPRIT (EB-ESPRIT) is a subspace based method to estimate directions-of-arrival (DOAs) of sound sources from a spherical microphone array recording in the spherical harmonics domain (SHD). In recent works, nonsingular EB-ESPRIT methods have been proposed which can estimate the source DOA-vectors without ambiguities. In another recent publication, a subspace based pseudointensity-vector method (SS-PIV) has been proposed for DOA estimation. In this work, we derive the mathematical relation between the DOA-vector EB-ESPRIT and the SS-PIV method. We show that the SS-PIV can be seen as a special case of the DOA-vector EB-ESPRIT. Using this relation, we propose a novel DOA-estimator denoted as extended pseudointensity-vector (PIV). In the evaluation, we compare the DOA-vector EB-ESPRIT and the extended PIV with the SS-PIV and PIV under noisy and reverberant conditions.},\n  keywords = {acoustic signal processing;array signal processing;direction-of-arrival estimation;eigenvalues and eigenfunctions;microphone arrays;subspace pseudointensity-vector;nonsingular EB-ESPRIT methods;source DOA-vectors;subspace based pseudointensity-vector method;DOA estimation;DOA-vector EB-ESPRIT;SS-PIV method;novel DOA-estimator;extended pseudointensity-vector;DOA-vector eigenbeam ESPRIT;Direction-of-arrival estimation;Estimation;Microphone arrays;Harmonic analysis;Time-frequency analysis;Europe;Direction-of-arrival estimation;eigenbeam ES-PRIT;subspace pseudointensity-vector},\n  doi = {10.23919/EUSIPCO.2019.8902715},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529519.pdf},\n}\n\n
\n
\n\n\n
\n Eigenbeam ESPRIT (EB-ESPRIT) is a subspace based method to estimate directions-of-arrival (DOAs) of sound sources from a spherical microphone array recording in the spherical harmonics domain (SHD). In recent works, nonsingular EB-ESPRIT methods have been proposed which can estimate the source DOA-vectors without ambiguities. In another recent publication, a subspace based pseudointensity-vector method (SS-PIV) has been proposed for DOA estimation. In this work, we derive the mathematical relation between the DOA-vector EB-ESPRIT and the SS-PIV method. We show that the SS-PIV can be seen as a special case of the DOA-vector EB-ESPRIT. Using this relation, we propose a novel DOA-estimator denoted as extended pseudointensity-vector (PIV). In the evaluation, we compare the DOA-vector EB-ESPRIT and the extended PIV with the SS-PIV and PIV under noisy and reverberant conditions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A comparison of machine learning methods for detecting right whales from autonomous surface vehicles.\n \n \n \n \n\n\n \n Vickers, W.; Milner, B.; Lee, R.; and Lines, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902717,\n  author = {W. Vickers and B. Milner and R. Lee and J. Lines},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A comparison of machine learning methods for detecting right whales from autonomous surface vehicles},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This work compares a range of machine learning methods applied to the problem of detecting right whales from autonomous surface vehicles (ASV). Maximising detection accuracy is vital as is minimising processing requirements given the limitations of an ASV. This leads to an examination of the tradeoff between accuracy and processing requirements. Three broad types of machine learning methods are explored - convolution neural network (CNNs), time-domain methods and feature-based methods. CNNs are found to give best performance in terms of both detection accuracy and processing requirements. These were also tolerant to downsampling down to 1kHz which gave a slight improvement in accuracy as well as a significant reduction in processing time. This we attribute to the bandwidth of right whale calls which is around 250Hz and so downsampling is able to capture the sounds fully as well as removing unwanted noisy spectral regions.},\n  keywords = {audio signal processing;convolutional neural nets;learning (artificial intelligence);marine vehicles;object detection;remotely operated vehicles;machine learning methods;CNN;time-domain methods;feature-based methods;processing time;whale calls;whales;autonomous surface vehicles;ASV;detection accuracy;processing requirements;frequency 1.0 kHz;Whales;Time-frequency analysis;Feature extraction;Time-domain analysis;Spectrogram;Noise measurement;Machine learning;Cetacean detection;CNNs;machine learning;autonomous surface vehicles},\n  doi = {10.23919/EUSIPCO.2019.8902717},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533845.pdf},\n}\n\n
\n
\n\n\n
\n This work compares a range of machine learning methods applied to the problem of detecting right whales from autonomous surface vehicles (ASV). Maximising detection accuracy is vital, as is minimising processing requirements given the limitations of an ASV. This leads to an examination of the tradeoff between accuracy and processing requirements. Three broad types of machine learning methods are explored - convolutional neural networks (CNNs), time-domain methods and feature-based methods. CNNs are found to give the best performance in terms of both detection accuracy and processing requirements. These were also tolerant to downsampling down to 1 kHz, which gave a slight improvement in accuracy as well as a significant reduction in processing time. This we attribute to the bandwidth of right whale calls, which is around 250 Hz, and so downsampling is able to capture the sounds fully as well as removing unwanted noisy spectral regions.\n
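The downsampling argument above follows from the Nyquist criterion: a 1 kHz sample rate (500 Hz Nyquist) still fully captures calls confined below roughly 250 Hz while discarding higher-frequency noise. A minimal sketch of that step, with an invented chirp standing in for a right-whale up-call (this is an illustration, not the authors' code; all signal parameters are assumed):

```python
import numpy as np
from scipy.signal import decimate, spectrogram

fs = 8000                                   # assumed original recorder rate
t = np.arange(0, 1.0, 1 / fs)
# synthetic "up-call": linear chirp sweeping ~100 Hz to ~250 Hz over 1 s
call = np.sin(2 * np.pi * (100 * t + 75 * t**2))

x_1k = decimate(call, 8)                    # 8000 Hz -> 1000 Hz, anti-aliased
f, tt, S = spectrogram(x_1k, fs=1000, nperseg=128)

# frequency bin with the most energy still lies inside the call band
peak_band = f[np.argmax(S.sum(axis=1))]
```

Since the new Nyquist frequency (500 Hz) sits well above the call band, the decimated signal retains the call intact at an eighth of the data rate.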
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast Direct Detection of Accelerating Radar Targets.\n \n \n \n \n\n\n \n Sirianunpiboon, S.; Howard, S. D.; and Elton, S. D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902718,\n  author = {S. Sirianunpiboon and S. D. Howard and S. D. Elton},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Fast Direct Detection of Accelerating Radar Targets},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper describes a group theoretic method for the detection of accelerating targets in both active and passive radar applications. The method directly produces a two dimensional range-Doppler rate map by utilizing multiple time and frequency shifts of the slow time data to structure the problem as one of detection of a multi-channel unknown rank-one component in noise. Our technique provides considerable computation saving when compared to the optimal method of computing and searching the three dimensional range-Doppler-Doppler rate map.},\n  keywords = {Doppler radar;group theory;object detection;passive radar;radar detection;active radar applications;passive radar applications;optimal method;group theoretic method;accelerating radar target detection;two dimensional range-Doppler rate map;multichannel unknown rank-one component detection;three dimensional range-Doppler-Doppler rate map;Acceleration;Doppler effect;Chirp;Doppler radar;Radar detection;Time series analysis;Accelerating radar targets;range-acceleration processing;range-Doppler migration;accelerating target detection},\n  doi = {10.23919/EUSIPCO.2019.8902718},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533441.pdf},\n}\n\n
\n
\n\n\n
\n This paper describes a group theoretic method for the detection of accelerating targets in both active and passive radar applications. The method directly produces a two dimensional range-Doppler rate map by utilizing multiple time and frequency shifts of the slow time data to structure the problem as one of detection of a multi-channel unknown rank-one component in noise. Our technique provides considerable computation saving when compared to the optimal method of computing and searching the three dimensional range-Doppler-Doppler rate map.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Two-Way MIMO Decode-and-Forward Relaying Systems with Tensor Space-Time Coding.\n \n \n \n \n\n\n \n d. C. Freitas, W.; Favier, G.; de Almeida , A. L. F.; and Haardt, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902719,\n  author = {W. d. C. Freitas and G. Favier and A. L. F. {de Almeida} and M. Haardt},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Two-Way MIMO Decode-and-Forward Relaying Systems with Tensor Space-Time Coding},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we present a new closed-form semiblind receiver for a two-way decode-and-forward (DF) relaying system. The proposed receiver jointly estimates the symbol and channel matrices involved in the two-way relaying system by exploiting tensor structures of the received signals at the relay and the destination, without using training sequences. The proposed receiver exploits a cross-coding approach using a third-order tensor space-time code (TSTC) at the relay, and it does not require a channel reciprocity between uplink and downlink phases, which can be of interest in frequency division duplex relaying systems. The advantages of this DF receiver compared with the amplify-and-forward (AF) receivers of [14] are three-fold: 1) use of the DF protocol which makes it possible to attenuate the propagation errors compared to the AF protocol, at the cost of an additional decoding at the relay, 2) a cross-coding approach which allows the suppression of interference between sources and therefore greatly simplifies the receivers, and 3) the closed-form aspect of the receivers based on a least squares (LSs) Kronecker product factorization algorithm. Parameter identifiability and computational complexity are analysed, and simulation results are provided to corroborate the effectiveness of the proposed semiblind receiver and coding scheme when compared with the AF receivers of [11].},\n  keywords = {channel coding;channel estimation;computational complexity;decode and forward communication;least squares approximations;matrix algebra;MIMO communication;protocols;radio receivers;relay networks (telecommunication);space-time codes;tensors;wireless channels;tensor space-time coding;closed-form semiblind receiver;channel matrices;two-way relaying system;tensor structures;received signals;cross-coding approach;third-order tensor space-time code;channel reciprocity;downlink phases;frequency division duplex;DF receiver;DF protocol;AF protocol;additional decoding;closed-form aspect;coding scheme;AF receivers;Receivers;Relays;Tensors;MIMO communication;Channel estimation;Uplink;Downlink;Semi-blind receiver;block Tucker model;cooperative communications;tensor space-time coding.},\n  doi = {10.23919/EUSIPCO.2019.8902719},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533915.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present a new closed-form semiblind receiver for a two-way decode-and-forward (DF) relaying system. The proposed receiver jointly estimates the symbol and channel matrices involved in the two-way relaying system by exploiting tensor structures of the received signals at the relay and the destination, without using training sequences. The proposed receiver exploits a cross-coding approach using a third-order tensor space-time code (TSTC) at the relay, and it does not require a channel reciprocity between uplink and downlink phases, which can be of interest in frequency division duplex relaying systems. The advantages of this DF receiver compared with the amplify-and-forward (AF) receivers of [14] are three-fold: 1) use of the DF protocol which makes it possible to attenuate the propagation errors compared to the AF protocol, at the cost of an additional decoding at the relay, 2) a cross-coding approach which allows the suppression of interference between sources and therefore greatly simplifies the receivers, and 3) the closed-form aspect of the receivers based on a least squares (LSs) Kronecker product factorization algorithm. Parameter identifiability and computational complexity are analysed, and simulation results are provided to corroborate the effectiveness of the proposed semiblind receiver and coding scheme when compared with the AF receivers of [11].\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On Interference Pikes in Poisson Networks.\n \n \n \n \n\n\n \n Atiq, M. K.; Schilcher, U.; and Bettstetterl, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902721,\n  author = {M. K. Atiq and U. Schilcher and C. Bettstetterl},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {On Interference Pikes in Poisson Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The temporal dynamics of interference power is studied in Poisson networks with slotted random access and Rayleigh fading. Specifically, we analyze the occurrence of high interference events (pikes) and the time in between (valleys). Our main insight is that pikes arrive in bursts. This observation may help in the design of reliability techniques.},\n  keywords = {radiofrequency interference;Rayleigh channels;stochastic processes;telecommunication network reliability;interference pikes;Poisson networks;temporal dynamics;interference power;slotted random access;Rayleigh fading;high interference events;reliability techniques;Interference;Rayleigh channels;Europe;Coherence time;Correlation;Interference;fading;Poisson networks;wireless},\n  doi = {10.23919/EUSIPCO.2019.8902721},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529393.pdf},\n}\n\n
\n
\n\n\n
\n The temporal dynamics of interference power is studied in Poisson networks with slotted random access and Rayleigh fading. Specifically, we analyze the occurrence of high interference events (pikes) and the time in between (valleys). Our main insight is that pikes arrive in bursts. This observation may help in the design of reliability techniques.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimization-based Synthesis of Time-Modulated Arrays with Accurate Time-Frequency Analysis.\n \n \n \n \n\n\n \n Poli, L.; Salucci, M.; Masotti, D.; and Rocca, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-4, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902724,\n  author = {L. Poli and M. Salucci and D. Masotti and P. Rocca},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Optimization-based Synthesis of Time-Modulated Arrays with Accurate Time-Frequency Analysis},\n  year = {2019},\n  pages = {1-4},\n  abstract = {The synthesis of time-modulated arrays (TMAs) is carried out by means of a customized implementation of the Particle Swarm optimization (PSO), carefully taking into account in the analysis block the non-linear characteristics of the radio-frequency (RF) switches used in antenna beam-forming network and the mutual coupling effects between the array elements. A set of numerical results are reported and discussed in a comparative fashion with those obtained with a state-of-the-art method limited to deal with ideal TMAs.},\n  keywords = {antenna arrays;array signal processing;particle swarm optimisation;time-frequency analysis;time-modulated arrays;time-frequency analysis;Particle Swarm optimization;nonlinear characteristics;optimization-based synthesis;antenna beamforming network;Antenna arrays;Optimization;Particle swarm optimization;Radio frequency;Amplitude modulation;time-modulated linear arrays;particle swarm optimization;harmonic balance;directivity optimization},\n  doi = {10.23919/EUSIPCO.2019.8902724},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533547.pdf},\n}\n\n
\n
\n\n\n
\n The synthesis of time-modulated arrays (TMAs) is carried out by means of a customized implementation of Particle Swarm Optimization (PSO), carefully taking into account in the analysis block the non-linear characteristics of the radio-frequency (RF) switches used in the antenna beam-forming network and the mutual coupling effects between the array elements. A set of numerical results is reported and discussed in a comparative fashion with those obtained with a state-of-the-art method limited to dealing with ideal TMAs.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Detecting Higher Order Genomic Variant Interactions with Spectral Analysis.\n \n \n \n \n\n\n \n Uminsky, D.; Banuelos, M.; González-Albino, L.; Garza, R.; and Nwakanma, S. A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902725,\n  author = {D. Uminsky and M. Banuelos and L. González-Albino and R. Garza and S. A. Nwakanma},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Detecting Higher Order Genomic Variant Interactions with Spectral Analysis},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Genomic variations among a species consisting of one nucleotide change are known as single nucleotide polymorphisms (SNPs). Often these mutations result in a change in phenotype, but detecting higher order interaction of multiple SNPs remains a challenging problem. Common approaches to find groups of interacting SNPs associated with a phenotypic response, a problem under the umbrella of epistasis, often suffers from a combinatorial explosion and require Bonferroni or similar corrections. In this work, we develop and apply a novel Fourier transformation on the symmetric group to uncover higher order interactions of SNPs associated with a quantitative phenotypic response. We present results for simulated data and then apply our method to previously published data to detect, for the first time using a signal processing approach, new and statistically significant higher order SNP interaction phenotypes related to muscle mice genomic variants.},\n  keywords = {bioinformatics;cellular biophysics;genetics;genomics;molecular biophysics;polymorphism;spectral analysis;statistical analysis;higher order genomic variant interactions;spectral analysis;genomic variations;nucleotide change;single nucleotide polymorphisms;phenotype;higher order interaction;multiple SNPs;interacting SNPs;quantitative phenotypic response;statistically significant higher order SNP interaction phenotypes;muscle mice genomic variants;Fourier transforms;Signal processing;Genomics;Bioinformatics;Mice;Europe;Spectral analysis;Fourier transform;algebraic signal processing;epistasis;genomic variation},\n  doi = {10.23919/EUSIPCO.2019.8902725},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534029.pdf},\n}\n\n
\n
\n\n\n
\n Genomic variations within a species that consist of a single nucleotide change are known as single nucleotide polymorphisms (SNPs). Often these mutations result in a change in phenotype, but detecting higher order interactions of multiple SNPs remains a challenging problem. Common approaches to find groups of interacting SNPs associated with a phenotypic response, a problem under the umbrella of epistasis, often suffer from a combinatorial explosion and require Bonferroni or similar corrections. In this work, we develop and apply a novel Fourier transformation on the symmetric group to uncover higher order interactions of SNPs associated with a quantitative phenotypic response. We present results for simulated data and then apply our method to previously published data to detect, for the first time using a signal processing approach, new and statistically significant higher order SNP interaction phenotypes related to muscle mice genomic variants.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Change Prediction for Low Complexity Combined Beamforming and Acoustic Echo Cancellation.\n \n \n \n \n\n\n \n Schrammen, M.; Bohlender, A.; Kühl, S.; and Jax, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902726,\n  author = {M. Schrammen and A. Bohlender and S. Kühl and P. Jax},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Change Prediction for Low Complexity Combined Beamforming and Acoustic Echo Cancellation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Time-variant beamforming (BF) and acoustic echo cancellation (AEC) are two techniques that are frequently employed for improving the quality of hands-free speech communication. However, the combined application of both is quite challenging as it either introduces high computational complexity or insufficient tracking. We propose a new method to improve the performance of the low-complexity beamformer first (BF-first) structure, which we call change prediction(ChaP). ChaP gathers information on several BF changes to predict the effective impulse response seen by the AEC after the next BF change. To account for uncertain data and convergence states in the predictions, reliability measures are introduced to improve ChaP in realistic scenarios.},\n  keywords = {array signal processing;echo suppression;reliability;acoustic echo cancellation;time-variant beamforming;AEC;hands-free speech communication;computational complexity;ChaP;low-complexity beamformer first structure;low-complexity BF first structure;impulse response;reliability;Signal processing algorithms;Complexity theory;Reliability;Mathematical model;Echo cancellers;Discrete Fourier transforms;combined beamforming and acoustic echo cancellation;low complexity;beamformer first;pseudo inverse},\n  doi = {10.23919/EUSIPCO.2019.8902726},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531440.pdf},\n}\n\n
\n
\n\n\n
\n Time-variant beamforming (BF) and acoustic echo cancellation (AEC) are two techniques that are frequently employed for improving the quality of hands-free speech communication. However, the combined application of both is quite challenging as it introduces either high computational complexity or insufficient tracking. We propose a new method to improve the performance of the low-complexity beamformer first (BF-first) structure, which we call change prediction (ChaP). ChaP gathers information on several BF changes to predict the effective impulse response seen by the AEC after the next BF change. To account for uncertain data and convergence states in the predictions, reliability measures are introduced to improve ChaP in realistic scenarios.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Adaptive Sampling Technique for Graph Diffusion LMS Algorithm.\n \n \n \n \n\n\n \n Tiglea, D. G.; Candido, R.; and Silva, M. T. M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902727,\n  author = {D. G. Tiglea and R. Candido and M. T. M. Silva},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {An Adaptive Sampling Technique for Graph Diffusion LMS Algorithm},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Graph signal processing has attracted attention in the signal processing community, since it is an effective tool to deal with great quantities of interrelated data. Recently, a diffusion algorithm for adaptively learning from streaming graphs signals was proposed. However, it suffers from high computational cost since all nodes in the graph are sampled even in steady state. In this paper, we propose an adaptive sampling method for this solution that allows a reduction in computational cost in steady state, while maintaining convergence rate and presenting a slightly better steady-state performance. We also present an analysis to give insights about proper choices for its adaptation parameters.},\n  keywords = {graph theory;learning (artificial intelligence);least mean squares methods;signal processing;signal sampling;adaptive sampling technique;graph diffusion LMS algorithm;graph signal processing;great quantities;interrelated data;diffusion algorithm;graphs signals;high computational cost;adaptive sampling method;steady-state performance;adaptation parameters;Signal processing algorithms;Steady-state;Signal processing;Transient analysis;Convergence;Europe;Computational efficiency;Graph signal processing;sampling on graphs;diffusion strategies;graph filtering;convex combination},\n  doi = {10.23919/EUSIPCO.2019.8902727},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532607.pdf},\n}\n\n
\n
\n\n\n
\n Graph signal processing has attracted attention in the signal processing community, since it is an effective tool to deal with great quantities of interrelated data. Recently, a diffusion algorithm for adaptively learning from streaming graphs signals was proposed. However, it suffers from high computational cost since all nodes in the graph are sampled even in steady state. In this paper, we propose an adaptive sampling method for this solution that allows a reduction in computational cost in steady state, while maintaining convergence rate and presenting a slightly better steady-state performance. We also present an analysis to give insights about proper choices for its adaptation parameters.\n
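A toy sketch of the error-driven sampling idea described above (an assumption-laden simplification, not the proposed algorithm; all parameter names and thresholds are invented): each node updates an LMS estimate only while its smoothed squared error is large, so updates, and hence computation, taper off in steady state while the estimates stay converged.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_iter, mu = 20, 2000, 0.05
w_true = 1.5                        # common parameter all nodes estimate
w = np.zeros(n_nodes)               # per-node LMS estimates
err_avg = np.ones(n_nodes)          # smoothed squared error per node
sampled_per_iter = []

for _ in range(n_iter):
    x = rng.standard_normal(n_nodes)
    d = w_true * x + 0.01 * rng.standard_normal(n_nodes)  # noisy observations
    sample = err_avg > 1e-3         # adaptive sampling decision per node
    e = d - w * x
    w += mu * sample * e * x        # only sampled nodes pay the update cost
    err_avg = 0.95 * err_avg + 0.05 * e**2
    sampled_per_iter.append(int(sample.sum()))
```

Early on every node is sampled; once the squared error settles near the noise floor, most nodes stop updating, mirroring the steady-state cost reduction the abstract reports.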
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Tone Mapped HDR Images Contrast Enhancement Using Piecewise Linear Perceptual Transformation.\n \n \n \n \n\n\n \n Thai, B. C.; and Mokraoui, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902728,\n  author = {B. C. Thai and A. Mokraoui},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Tone Mapped HDR Images Contrast Enhancement Using Piecewise Linear Perceptual Transformation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper addresses the conversion problem of High Dynamic Range (HDR) images into Low Dynamic Range (LDR) images. The aim of this conversion is to ensure a good visual rendering of the displayed Tone Mapped (TM) HDR images in accordance with the observers' assessment. To do so, the proposed algorithm adjusts a piecewise linear function to the logarithm luminance distribution of the HDR image to adapt this transformation to the perceptual quantizer in accordance with the Human Visual System (HVS). The computation of the slope value of the piecewise linear transformation is simplified. Indeed, it is given by the ratio between the probability and the logarithm luminance distance in the sampling bin. This leads to histograms which tend to be flat, thus inducing a contrast enhancement of the tone mapped HDR images in both under-exposed and overexposed areas. Simulation results provide good results, both in terms of visual quality and TMQI metric, compared to existing competitive TM approaches.},\n  keywords = {brightness;image colour analysis;image enhancement;piecewise linear techniques;quantisation (signal);rendering (computer graphics);statistical analysis;HDR image;perceptual quantizer;Human Visual System;piecewise linear transformation;logarithm luminance distance;Tone Mapped HDR images contrast enhancement;piecewise linear perceptual transformation;conversion problem;High Dynamic Range images;Low Dynamic Range images;visual rendering;displayed Tone Mapped HDR images;piecewise linear function;logarithm luminance distribution;Dynamic range;Visualization;Histograms;Image edge detection;Europe;Signal processing;High Dynamic Range image;Low Dynamic Range image;Tone mapping;Histogram;Contrast enhancement.},\n  doi = {10.23919/EUSIPCO.2019.8902728},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531994.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the conversion problem of High Dynamic Range (HDR) images into Low Dynamic Range (LDR) images. The aim of this conversion is to ensure a good visual rendering of the displayed Tone Mapped (TM) HDR images in accordance with the observers' assessment. To do so, the proposed algorithm adjusts a piecewise linear function to the logarithm luminance distribution of the HDR image to adapt this transformation to the perceptual quantizer in accordance with the Human Visual System (HVS). The computation of the slope value of the piecewise linear transformation is simplified. Indeed, it is given by the ratio between the probability and the logarithm luminance distance in the sampling bin. This leads to histograms which tend to be flat, thus inducing a contrast enhancement of the tone mapped HDR images in both under-exposed and overexposed areas. Simulations show good results, both in terms of visual quality and the TMQI metric, compared to existing competitive TM approaches.\n
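The slope rule stated in this abstract (bin probability divided by log-luminance bin width) is the derivative of a histogram-equalizing curve: integrating it node by node yields a piecewise linear tone curve whose output histogram tends toward flatness. A minimal sketch under that reading, with invented data and a hypothetical helper name:

```python
import numpy as np

def piecewise_linear_tone_curve(log_lum, n_bins=32):
    # equal-width sampling bins over the log-luminance range
    edges = np.linspace(log_lum.min(), log_lum.max(), n_bins + 1)
    p, _ = np.histogram(log_lum, bins=edges)
    p = p / p.sum()                              # bin probabilities
    # slope on bin k is p[k] / (edges[k+1] - edges[k]); integrating the
    # slopes gives the cumulative distribution as the curve's node values
    cdf = np.concatenate([[0.0], np.cumsum(p)])
    return np.interp(log_lum, edges, cdf)        # piecewise linear mapping

rng = np.random.default_rng(1)
hdr = rng.lognormal(mean=0.0, sigma=2.0, size=10000)   # synthetic HDR luminance
ldr = piecewise_linear_tone_curve(np.log(hdr))         # values in [0, 1]
```

Pixels in densely populated luminance bins get a steep slope (more output range), which is the contrast-enhancement effect described in the abstract.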
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Texture Superpixel Clustering from Patch-based Nearest Neighbor Matching.\n \n \n \n \n\n\n \n Giraud, R.; and Berthoumieu, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902729,\n  author = {R. Giraud and Y. Berthoumieu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Texture Superpixel Clustering from Patch-based Nearest Neighbor Matching},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Superpixels are widely used in computer vision applications. Nevertheless, decomposition methods may still fail to efficiently cluster image pixels according to their local texture. In this paper, we propose a new Nearest Neighbor-based Superpixel Clustering (NNSC) method to generate texture-aware superpixels in a limited computational time compared to previous approaches. We introduce a new clustering framework using patch-based nearest neighbor matching, while most existing methods are based on a pixel-wise K-means clustering. Therefore, we directly group pixels in the patch space enabling to capture texture information. We demonstrate the efficiency of our method with favorable comparison in terms of segmentation performances on both standard color and texture datasets. We also show the computational efficiency of NNSC compared to recent texture-aware superpixel methods.},\n  keywords = {computer vision;image colour analysis;image resolution;image segmentation;image texture;nearest neighbour methods;pattern clustering;patch-based nearest neighbor matching;computer vision applications;decomposition methods;image pixels;standard color;texture superpixel clustering;texture-aware superpixel methods;nearest neighbor-based superpixel clustering method;Superpixels;Nearest Neighbor;Texture},\n  doi = {10.23919/EUSIPCO.2019.8902729},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533880.pdf},\n}\n\n
\n
\n\n\n
\n Superpixels are widely used in computer vision applications. Nevertheless, decomposition methods may still fail to efficiently cluster image pixels according to their local texture. In this paper, we propose a new Nearest Neighbor-based Superpixel Clustering (NNSC) method to generate texture-aware superpixels in a limited computational time compared to previous approaches. We introduce a new clustering framework using patch-based nearest neighbor matching, while most existing methods are based on a pixel-wise K-means clustering. Therefore, we directly group pixels in the patch space enabling to capture texture information. We demonstrate the efficiency of our method with favorable comparison in terms of segmentation performances on both standard color and texture datasets. We also show the computational efficiency of NNSC compared to recent texture-aware superpixel methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic Measurement of Speech Breathing Rate.\n \n \n \n \n\n\n \n K., M. I. Y. A.; and Routray, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902730,\n  author = {M. I. Y. A. K. and A. Routray},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic Measurement of Speech Breathing Rate},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The speech breathing rate has been used for the early prediction of disease and detection of emotions. Most of the breath detection equipment are contact based. Here, we try to detect the speech breathing rate from speech recordings. Cepstrogram matrix is used as the feature for classifying the speech frames as breath or non-breath. The classifier used is the support vector machine (SVM) with a radial basis function (RBF) kernel. The classifier output is post-processed to join breathing segments which are closely spaced and remove breaths of small duration. The speech breathing rate is calculated from the breath to breath interval. The algorithm has been tested on a student evaluation database. When tested, the algorithm yields an F1 Score of 89% and root mean square error (RMSE) of 4.5 breaths/min for the speech-breathing rate. The breath segments have been validated by keenly listening to speech recordings and viewing thermal videos.},\n  keywords = {diseases;mean square error methods;medical signal processing;pneumodynamics;radial basis function networks;spectral analysis;speech processing;support vector machines;breathing segments;speech breathing rate;breath segments;speech recordings;breath detection equipment;support vector machine;SVM;radial basis function;RBF kernel;root mean square error;Support vector machines;Videos;Mel frequency cepstral coefficient;Image segmentation;Indexes;Cameras;Training;Breath detection;cepstrogram;speech-breathing rate;SVM.},\n  doi = {10.23919/EUSIPCO.2019.8902730},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533534.pdf},\n}\n\n
\n
\n\n\n
\n The speech breathing rate has been used for the early prediction of disease and detection of emotions. Most of the breath detection equipment are contact based. Here, we try to detect the speech breathing rate from speech recordings. Cepstrogram matrix is used as the feature for classifying the speech frames as breath or non-breath. The classifier used is the support vector machine (SVM) with a radial basis function (RBF) kernel. The classifier output is post-processed to join breathing segments which are closely spaced and remove breaths of small duration. The speech breathing rate is calculated from the breath to breath interval. The algorithm has been tested on a student evaluation database. When tested, the algorithm yields an F1 Score of 89% and root mean square error (RMSE) of 4.5 breaths/min for the speech-breathing rate. The breath segments have been validated by keenly listening to speech recordings and viewing thermal videos.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Radio Positioning and Tracking of High-Speed Devices in 5G NR Networks: System Concept and Performance.\n \n \n \n \n\n\n \n Talvitie, J.; Koivisto, M.; Levanen, T.; Ihalainen, T.; Pajukoski, K.; and Valkama, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902731,\n  author = {J. Talvitie and M. Koivisto and T. Levanen and T. Ihalainen and K. Pajukoski and M. Valkama},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Radio Positioning and Tracking of High-Speed Devices in 5G NR Networks: System Concept and Performance},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper addresses high-efficiency radio positioning and tracking of high-speed objects in emerging fifth generation (5G) new radio (NR) networks. Methods and system concept are described, building on network side reference signal measurements and data fusion and tracking. Also realistic performance results are provided and analyzed, in the context of mmWave NR deployment for high-speed trains at 30 GHz carrier frequency. It is shown that below 3 m positioning accuracy can be achieved with 95% availability, which fits in the positioning performance requirements for traffic monitoring and control, as specified by the 3rd generation partnership project (3GPP). Moreover, the given positioning performance can be achieved without assuming clock synchronization between the baseband units (BBUs) at the network side. In fact, the proposed approach enables sub-nanosecond estimation accuracy of network clock offsets, which benefits various radio resource management (RRM) functionalities of the network for increased spectral efficiency.},\n  keywords = {3G mobile communication;5G mobile communication;cellular radio;radionavigation;sensor fusion;synchronisation;target tracking;wireless channels;fifth generation new radio networks;reference signal measurements;data fusion;mmWave NR deployment;high-speed trains;30 GHz carrier frequency;3 m positioning accuracy;positioning performance requirements;traffic monitoring;3rd generation partnership project;3GPP;given positioning performance;network clock offsets;radio resource management functionalities;high-speed devices;5G NR networks;high-efficiency radio positioning;high-speed objects;frequency 30.0 GHz;size 3.0 m;5G mobile communication;Clocks;Estimation;Delays;Synchronization;3GPP;Propagation delay;Positioning;Synchronization;Fifth generation mobile networks;5G;New Radio;NR;High-speed trains;HST},\n  doi = {10.23919/EUSIPCO.2019.8902731},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532489.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses high-efficiency radio positioning and tracking of high-speed objects in emerging fifth generation (5G) new radio (NR) networks. Methods and system concept are described, building on network side reference signal measurements and data fusion and tracking. Also realistic performance results are provided and analyzed, in the context of mmWave NR deployment for high-speed trains at 30 GHz carrier frequency. It is shown that below 3 m positioning accuracy can be achieved with 95% availability, which fits in the positioning performance requirements for traffic monitoring and control, as specified by the 3rd generation partnership project (3GPP). Moreover, the given positioning performance can be achieved without assuming clock synchronization between the baseband units (BBUs) at the network side. In fact, the proposed approach enables sub-nanosecond estimation accuracy of network clock offsets, which benefits various radio resource management (RRM) functionalities of the network for increased spectral efficiency.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The Receptive Field as a Regularizer in Deep Convolutional Neural Networks for Acoustic Scene Classification.\n \n \n \n \n\n\n \n Koutini, K.; Eghbal-zadeh, H.; Dorfer, M.; and Widmer, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902732,\n  author = {K. Koutini and H. Eghbal-zadeh and M. Dorfer and G. Widmer},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {The Receptive Field as a Regularizer in Deep Convolutional Neural Networks for Acoustic Scene Classification},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Convolutional Neural Networks (CNNs) have had great success in many machine vision as well as machine audition tasks. Many image recognition network architectures have consequently been adapted for audio processing tasks. However, despite some successes, the performance of many of these did not translate from the image to the audio domain. For example, very deep architectures such as ResNet [1] and DenseNet [2], which significantly outperform VGG [3] in image recognition, do not perform better in audio processing tasks such as Acoustic Scene Classification (ASC). In this paper, we investigate the reasons why such powerful architectures perform worse in ASC compared to simpler models (e.g., VGG). To this end, we analyse the receptive field (RF) of these CNNs and demonstrate the importance of the RF to the generalization capability of the models. Using our receptive field analysis, we adapt both ResNet and DenseNet, achieving state-of-the-art performance and eventually outperforming the VGG-based models. We introduce systematic ways of adapting the RF in CNNs, and present results on three data sets that show how changing the RF over the time and frequency dimensions affects a model's performance. Our experimental results show that very small or very large RFs can cause performance degradation, but deep models can be made to generalize well by carefully choosing an appropriate RF size within a certain range.},\n  keywords = {audio signal processing;computer vision;feature extraction;image classification;learning (artificial intelligence);neural nets;machine vision;machine audition tasks;image recognition network architectures;audio processing tasks;audio domain;deep architectures;ResNet;acoustic scene classification;ASC;powerful architectures;CNNs;receptive field analysis;VGG-based models;performance degradation;deep models;appropriate RF size;deep convolutional Neural Networks;Radio frequency;Computer architecture;Task analysis;Training;Acoustics;Neurons;Convolution;CNN;acoustic scene classification;deep learning;machine learning},\n  doi = {10.23919/EUSIPCO.2019.8902732},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529390.pdf},\n}\n\n
\n
\n\n\n
\n Convolutional Neural Networks (CNNs) have had great success in many machine vision as well as machine audition tasks. Many image recognition network architectures have consequently been adapted for audio processing tasks. However, despite some successes, the performance of many of these did not translate from the image to the audio domain. For example, very deep architectures such as ResNet [1] and DenseNet [2], which significantly outperform VGG [3] in image recognition, do not perform better in audio processing tasks such as Acoustic Scene Classification (ASC). In this paper, we investigate the reasons why such powerful architectures perform worse in ASC compared to simpler models (e.g., VGG). To this end, we analyse the receptive field (RF) of these CNNs and demonstrate the importance of the RF to the generalization capability of the models. Using our receptive field analysis, we adapt both ResNet and DenseNet, achieving state-of-the-art performance and eventually outperforming the VGG-based models. We introduce systematic ways of adapting the RF in CNNs, and present results on three data sets that show how changing the RF over the time and frequency dimensions affects a model's performance. Our experimental results show that very small or very large RFs can cause performance degradation, but deep models can be made to generalize well by carefully choosing an appropriate RF size within a certain range.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Analysis of nematodes in coffee crops at different altitudes using aerial images.\n \n \n \n \n\n\n \n Oliveira, A. J.; Assis, G. A.; Faria, E. R.; Souza, J. R.; Vivaldini, K. C. T.; Guizilini, V.; Ramos, F.; Mendes, C. T. C.; and Wolf, D. F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902734,\n  author = {A. J. Oliveira and G. A. Assis and E. R. Faria and J. R. Souza and K. C. T. Vivaldini and V. Guizilini and F. Ramos and C. T. C. Mendes and D. F. Wolf},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Analysis of nematodes in coffee crops at different altitudes using aerial images},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Precision agriculture presents several challenges, amongst them the detection of diseases and pests in agricultural environments. This paper describes a methodology capable of detecting the presence of the nematode pest in coffee crops and also analyzing the behavior of this pest in several altitudes using aerial images. An Unmanned Aerial Vehicle (UAV) is used to obtain high-resolution RGB images of a Brazilian coffee farm. The proposed methodology uses Convolutional Neural Networks (CNN) with U-Net and PSPNet architectures to classify areas into two classes: pests and non-pests. Results demonstrate the viability of the proposed methodology, with an average F-measure of 0.69 for the U-Net architecture with the image resolution 640 × 480.},\n  keywords = {autonomous aerial vehicles;convolutional neural nets;crops;image colour analysis;image resolution;pest control;plant diseases;convolutional neural networks;image resolution;coffee crops;aerial images;precision agriculture;diseases;agricultural environments;nematode pest;unmanned aerial vehicle;Brazilian coffee farm;U-Net;PSPNet architectures;Agriculture;Convolution;Training;Diseases;Soil;Image resolution;Unmanned aerial vehicles;CNN;Nematodes;Coffee Crops;UAV;Altitudes.},\n  doi = {10.23919/EUSIPCO.2019.8902734},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533961.pdf},\n}\n\n
\n
\n\n\n
\n Precision agriculture presents several challenges, amongst them the detection of diseases and pests in agricultural environments. This paper describes a methodology capable of detecting the presence of the nematode pest in coffee crops and also analyzing the behavior of this pest in several altitudes using aerial images. An Unmanned Aerial Vehicle (UAV) is used to obtain high-resolution RGB images of a Brazilian coffee farm. The proposed methodology uses Convolutional Neural Networks (CNN) with U-Net and PSPNet architectures to classify areas into two classes: pests and non-pests. Results demonstrate the viability of the proposed methodology, with an average F-measure of 0.69 for the U-Net architecture with the image resolution 640 × 480.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Large-scale Pollen Recognition with Deep Learning.\n \n \n \n \n\n\n \n d. Geus, A. R.; Barcelos, C. A. Z.; Batista, M. A.; and d. Silva, S. F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902735,\n  author = {A. R. d. Geus and C. A. Z. Barcelos and M. A. Batista and S. F. d. Silva},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Large-scale Pollen Recognition with Deep Learning},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Pollen recognition has been shown to be important for a number of areas ranging from criminal investigations to paleoclimate studies. However, these palynology studies rely on highly qualified professionals to analyze pollen grains, which have become scarce and costly. Therefore, the automation of this task using computational methods is promising. Deep learning has proven to be the ultimate technique in computer vision tasks, but is very difficult to build a pollen data set with size enough to train such networks from scratch. This study investigated the use of transfer learning from pre-trained deep neural networks for pollen classification and compared their results with training from scratch and with promising predesigned features. Additionally, we introduced the biggest data set of pollen to the date. Experimental results achieved up to 96.24% of classification accuracy, suggesting that the fine-tuned deep learning architectures can be successfully applied to pollen classification.},\n  keywords = {computer vision;feature extraction;image classification;learning (artificial intelligence);neural nets;palynology studies;pollen grains;computer vision;transfer learning;deep neural networks;pollen classification;pollen recognition;paleoclimate;deep learning architectures;Feature extraction;Training;Reactive power;Deep learning;Europe;Signal processing;Image color analysis;Pollen recognition;convolutional neural networks;deep learning;transfer learning},\n  doi = {10.23919/EUSIPCO.2019.8902735},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533839.pdf},\n}\n\n
\n
\n\n\n
\n Pollen recognition has been shown to be important for a number of areas ranging from criminal investigations to paleoclimate studies. However, these palynology studies rely on highly qualified professionals to analyze pollen grains, which have become scarce and costly. Therefore, the automation of this task using computational methods is promising. Deep learning has proven to be the ultimate technique in computer vision tasks, but is very difficult to build a pollen data set with size enough to train such networks from scratch. This study investigated the use of transfer learning from pre-trained deep neural networks for pollen classification and compared their results with training from scratch and with promising predesigned features. Additionally, we introduced the biggest data set of pollen to the date. Experimental results achieved up to 96.24% of classification accuracy, suggesting that the fine-tuned deep learning architectures can be successfully applied to pollen classification.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Open-Set Recognition Using Intra-Class Splitting.\n \n \n \n \n\n\n \n Schlachter, P.; Liao, Y.; and Yang, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902738,\n  author = {P. Schlachter and Y. Liao and B. Yang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Open-Set Recognition Using Intra-Class Splitting},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper proposes a method to use deep neural networks as end-to-end open-set classifiers. It is based on intraclass data splitting. In open-set recognition, only samples from a limited number of known classes are available for training. During inference, an open-set classifier must reject samples from unknown classes while correctly classifying samples from known classes. The proposed method splits given data into typical and atypical normal subsets by using a closed-set classifier. This enables to model the abnormal classes by atypical normal samples. Accordingly, the open-set recognition problem is reformulated into a traditional classification problem. In addition, a closed-set regularization is proposed to guarantee a high closed-set classification performance. Intensive experiments on five well-known image datasets showed the effectiveness of the proposed method which outperformed the baselines and achieved a distinct improvement over the state-of-the-art methods.},\n  keywords = {image classification;learning (artificial intelligence);neural nets;set theory;open-set recognition;recognition problem;classification problem;closed-set regularization;image datasets;deep neural networks;end-to-end open-set classifiers;intraclass data splitting;closed-set classification performance;Training;Biological neural networks;Signal processing;Generative adversarial networks;Gallium nitride;Europe;Open-Set Recognition;Intra-Class Splitting;Deep Learning},\n  doi = {10.23919/EUSIPCO.2019.8902738},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533725.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a method to use deep neural networks as end-to-end open-set classifiers. It is based on intraclass data splitting. In open-set recognition, only samples from a limited number of known classes are available for training. During inference, an open-set classifier must reject samples from unknown classes while correctly classifying samples from known classes. The proposed method splits given data into typical and atypical normal subsets by using a closed-set classifier. This enables to model the abnormal classes by atypical normal samples. Accordingly, the open-set recognition problem is reformulated into a traditional classification problem. In addition, a closed-set regularization is proposed to guarantee a high closed-set classification performance. Intensive experiments on five well-known image datasets showed the effectiveness of the proposed method which outperformed the baselines and achieved a distinct improvement over the state-of-the-art methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast Basis Function Estimators for Identification of Nonstationary Stochastic Processes.\n \n \n \n \n\n\n \n Niedźwiecki, M.; Ciołek, M.; and Gańcza, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902739,\n  author = {M. Niedźwiecki and M. Ciołek and A. Gańcza},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Fast Basis Function Estimators for Identification of Nonstationary Stochastic Processes},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The problem of identification of a linear nonstationary stochastic process is considered and solved using the approach based on functional series approximation of time-varying parameter trajectories. The proposed fast basis function estimators are computationally attractive and yield results that are better than those provided by the local least squares algorithms. It is shown that two important design parameters - the number of basis functions and the size of the local analysis interval - can be selected on-line in an adaptive way.},\n  keywords = {least squares approximations;radial basis function networks;stochastic processes;time-varying systems;functional series approximation;time-varying parameter trajectories;fast basis function estimators;linear nonstationary stochastic process identification;least squares algorithms;local analysis interval;Trajectory;Signal processing algorithms;Estimation;Manganese;Microsoft Windows;Europe;Signal processing;Identification of nonstationary processes;basis function estimators;adaptive estimation},\n  doi = {10.23919/EUSIPCO.2019.8902739},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528239.pdf},\n}\n\n
\n
\n\n\n
\n The problem of identification of a linear nonstationary stochastic process is considered and solved using the approach based on functional series approximation of time-varying parameter trajectories. The proposed fast basis function estimators are computationally attractive and yield results that are better than those provided by the local least squares algorithms. It is shown that two important design parameters - the number of basis functions and the size of the local analysis interval - can be selected on-line in an adaptive way.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectral Complexity Reduction of Music Signals for Cochlear Implant Users based on Subspace Tracking.\n \n \n \n \n\n\n \n Gauer, J.; Krymova, E.; Belomestny, D.; and Martin, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902740,\n  author = {J. Gauer and E. Krymova and D. Belomestny and R. Martin},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Spectral Complexity Reduction of Music Signals for Cochlear Implant Users based on Subspace Tracking},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Spectral complexity reduction can be used to emphasize the leading voice or melody and attenuate the competing accompaniment of music pieces. This method is known to facilitate music perception in cochlear implant (CI) users as spectrally less complex signals are perceived as being more pleasant. In this paper we investigate a method to obtain a reduced-rank approximation for the desired complexity reduction that extends the established projection approximation subspace tracking methods (PAST, CPAST) with an additional sparsity constraint. We evaluate our method with the instrumental SIR and SAR measures as well as an auditory distortion measure (ADR) on a database of 110 classical chamber music pieces. While the resulting signal quality is found to be comparable to existing methods the iterative structure and the reduced computational complexity of our method make it suitable for real-time and low-latency on-line applications.},\n  keywords = {approximation theory;cochlear implants;computational complexity;handicapped aids;iterative methods;music;signal processing;spectral complexity reduction;music signals;cochlear implant users;leading voice;music perception;spectrally less complex signals;reduced-rank approximation;established projection approximation subspace tracking methods;signal quality;computational complexity;classical chamber music pieces;Complexity theory;Multiple signal classification;Music;Covariance matrices;Indexes;Instruments;Principal component analysis;Subspace Tracking;Music Signal Processing;Cochlear Implants;Sparse Eigenvectors},\n  doi = {10.23919/EUSIPCO.2019.8902740},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532126.pdf},\n}\n\n
\n
\n\n\n
\n Spectral complexity reduction can be used to emphasize the leading voice or melody and attenuate the competing accompaniment of music pieces. This method is known to facilitate music perception in cochlear implant (CI) users as spectrally less complex signals are perceived as being more pleasant. In this paper we investigate a method to obtain a reduced-rank approximation for the desired complexity reduction that extends the established projection approximation subspace tracking methods (PAST, CPAST) with an additional sparsity constraint. We evaluate our method with the instrumental SIR and SAR measures as well as an auditory distortion measure (ADR) on a database of 110 classical chamber music pieces. While the resulting signal quality is found to be comparable to existing methods the iterative structure and the reduced computational complexity of our method make it suitable for real-time and low-latency on-line applications.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic Chord Estimation Based on a Frame-wise Convolutional Recurrent Neural Network with Non-Aligned Annotations.\n \n \n \n \n\n\n \n Wu, Y.; Carsault, T.; and Yoshii, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8902741,\n  author = {Y. Wu and T. Carsault and K. Yoshii},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic Chord Estimation Based on a Frame-wise Convolutional Recurrent Neural Network with Non-Aligned Annotations},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper describes a weakly-supervised approach to Automatic Chord Estimation (ACE) task that aims to estimate a sequence of chords from a given music audio signal at the frame level, under a realistic condition that only non-aligned chord annotations are available. In conventional studies assuming the availability of time-aligned chord annotations, Deep Neural Networks (DNNs) that learn frame-wise mappings from acoustic features to chords have attained excellent performance. The major drawback of such frame-wise models is that they cannot be trained without the time alignment information. Inspired by a common approach in automatic speech recognition based on nonaligned speech transcriptions, we propose a two-step method that trains a Hidden Markov Model (HMM) for the forced alignment between chord annotations and music signals, and then trains a powerful frame-wise DNN model for ACE. Experimental results show that although the frame-level accuracy of the forced alignment was just under 90%, the performance of the proposed method was degraded only slightly from that of the DNN model trained by using the ground-truth alignment data. Furthermore, using a sufficient amount of easily collected non-aligned data, the proposed method is able to reach or even outperform the conventional methods based on ground-truth time-aligned annotations.},\n  keywords = {audio signal processing;convolutional neural nets;hidden Markov models;learning (artificial intelligence);music;recurrent neural nets;signal classification;speech recognition;frame level;nonaligned chord annotations;time-aligned chord annotations;Deep Neural Networks;frame-wise mappings;frame-wise models;time alignment information;automatic speech recognition;nonaligned speech transcriptions;Hidden Markov Model;forced alignment;music signals;frame-level accuracy;ground-truth alignment data;ground-truth time-aligned annotations;frame-wise convolutional recurrent Neural network;weakly-supervised approach;Automatic Chord Estimation task;music audio signal;frame-wise DNN model;Hidden Markov models;Annotations;Feature extraction;Multiple signal classification;Training;Estimation;Convolution;Automatic chord estimation;forced alignment;HMM;CNN;and RNN},\n  doi = {10.23919/EUSIPCO.2019.8902741},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533035.pdf},\n}\n\n
\n
\n\n\n
\n This paper describes a weakly-supervised approach to Automatic Chord Estimation (ACE) task that aims to estimate a sequence of chords from a given music audio signal at the frame level, under a realistic condition that only non-aligned chord annotations are available. In conventional studies assuming the availability of time-aligned chord annotations, Deep Neural Networks (DNNs) that learn frame-wise mappings from acoustic features to chords have attained excellent performance. The major drawback of such frame-wise models is that they cannot be trained without the time alignment information. Inspired by a common approach in automatic speech recognition based on nonaligned speech transcriptions, we propose a two-step method that trains a Hidden Markov Model (HMM) for the forced alignment between chord annotations and music signals, and then trains a powerful frame-wise DNN model for ACE. Experimental results show that although the frame-level accuracy of the forced alignment was just under 90%, the performance of the proposed method was degraded only slightly from that of the DNN model trained by using the ground-truth alignment data. Furthermore, using a sufficient amount of easily collected non-aligned data, the proposed method is able to reach or even outperform the conventional methods based on ground-truth time-aligned annotations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Blind Calibration of Sensor Arrays for Narrowband Signals with Asymptotically Optimal Weighting.\n \n \n \n \n\n\n \n Weiss, A.; and Yeredor, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"BlindPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902742,\n  author = {A. Weiss and A. Yeredor},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Blind Calibration of Sensor Arrays for Narrowband Signals with Asymptotically Optimal Weighting},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We revisit the problem of blind calibration of uniform linear sensors arrays for narrowband signals and set the premises for the derivation of the optimal blind calibration scheme. In particular, instead of taking the direct (rather involved) Maximum Likelihood (ML) approach for joint estimation of all the unknown model parameters, we follow Paulraj and Kailath's classical approach in exploiting the special (Toeplitz) structure of the observed covariance. However, we offer a substantial improvement over Paulraj and Kailath's Least Squares (LS) estimate by using asymptotic approximations in order to obtain simple, (quasi-)linear Weighted LS (WLS) estimates of the sensors' gains and phases offsets with asymptotically optimal weighting. As we show in simulation experiments, our WLS estimates exhibit near-optimal performance, with a considerable improvement (reaching an order of magnitude and more) in the resulting mean squared errors, w.r.t. the corresponding ordinary LS estimates. We also briefly explain how the methodology derived in this work may be utilized in order to obtain (by certain modifications) the asymptotically optimal ML estimates w.r.t. 
the raw data via a (quasi)-linear WLS estimate.},\n  keywords = {array signal processing;calibration;correlation methods;covariance matrices;direction-of-arrival estimation;least squares approximations;maximum likelihood estimation;sensor arrays;special structure;Paulraj;Kailath's Least Squares estimate;asymptotic approximations;linear Weighted LS;asymptotically optimal weighting;WLS estimates;near-optimal performance;resulting mean squared errors;asymptotically optimal ML;sensor arrays;narrowband signals;uniform linear sensors arrays;optimal blind calibration scheme;direct Maximum Likelihood approach;joint estimation;unknown model parameters;Kailath's classical approach;Maximum likelihood estimation;Covariance matrices;OWL;Calibration;Sensor arrays;Phased arrays;Sensor array processing;gain estimation;phase estimation;self-calibration;weighted least squares.},\n  doi = {10.23919/EUSIPCO.2019.8902742},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533305.pdf},\n}\n\n
\n
\n\n\n
\n We revisit the problem of blind calibration of uniform linear sensor arrays for narrowband signals and set the premises for the derivation of the optimal blind calibration scheme. In particular, instead of taking the direct (rather involved) Maximum Likelihood (ML) approach for joint estimation of all the unknown model parameters, we follow Paulraj and Kailath's classical approach of exploiting the special (Toeplitz) structure of the observed covariance. However, we offer a substantial improvement over Paulraj and Kailath's Least Squares (LS) estimate by using asymptotic approximations to obtain simple, (quasi-)linear Weighted LS (WLS) estimates of the sensors' gain and phase offsets with asymptotically optimal weighting. As we show in simulation experiments, our WLS estimates exhibit near-optimal performance, with a considerable improvement (reaching an order of magnitude and more) in the resulting mean squared errors w.r.t. the corresponding ordinary LS estimates. We also briefly explain how the methodology derived in this work may be utilized to obtain (by certain modifications) the asymptotically optimal ML estimates w.r.t. the raw data via a (quasi-)linear WLS estimate.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Dual Function FH MIMO Radar System with DPSK Signal Embedding.\n \n \n \n \n\n\n \n Eedara, I. P.; and Amin, M. G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DualPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902743,\n  author = {I. P. Eedara and M. G. Amin},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Dual Function FH MIMO Radar System with DPSK Signal Embedding},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose a method for information embedding into the emission of frequency hopping (FH) multiple-input multiple-output (MIMO) radar. The differential phase shift keying (DPSK) modulated communication symbols are embedded into each pulse of the FH radar waveforms. We examine the effect of DPSK symbol embedding on the radar operation by analyzing the range sidelobes performance, power spectral density (PSD) and data rate of the system. The proposed system shows significant reduction in range sidelobes, good spectral containment and achieves high communication data rates. The latter is enabled by synthesizing a large number of orthogonal waveforms.},\n  keywords = {correlation methods;differential phase shift keying;frequency hop communication;MIMO radar;modulation coding;radar signal processing;spread spectrum radar;data rate;high communication data rates;dual function FH MIMO radar system;DPSK signal;information embedding;differential phase shift;FH radar waveforms;DPSK symbol;radar operation;range sidelobes performance;Differential phase shift keying;MIMO radar;Communication symbols;Radar antennas;Receivers},\n  doi = {10.23919/EUSIPCO.2019.8902743},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533979.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a method for embedding information into the emission of a frequency hopping (FH) multiple-input multiple-output (MIMO) radar. Differential phase shift keying (DPSK) modulated communication symbols are embedded into each pulse of the FH radar waveforms. We examine the effect of DPSK symbol embedding on the radar operation by analyzing the range sidelobe performance, power spectral density (PSD), and data rate of the system. The proposed system shows a significant reduction in range sidelobes and good spectral containment, and achieves high communication data rates. The latter is enabled by synthesizing a large number of orthogonal waveforms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n iTM-Net: Deep Inverse Tone Mapping Using Novel Loss Function Based on Tone Mapping Operator.\n \n \n \n \n\n\n \n Kinoshita, Y.; and Kiya, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"iTM-Net:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902744,\n  author = {Y. Kinoshita and H. Kiya},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {iTM-Net: Deep Inverse Tone Mapping Using Novel Loss Function Based on Tone Mapping Operator},\n  year = {2019},\n  pages = {1-5},\n  abstract = {A novel inverse tone mapping network, called “iTMNet”, is proposed in this paper. For training iTM-Net, we also propose a novel loss function considering the pixel distribution of HDR images. In inverse tone mapping with CNNs, we first point out that training CNNs with a standard loss function causes a problem, due to the distribution of HDR images. To overcome the problem, the novel loss function non-linearly tone-maps target HDR images into LDR ones, on the basis of a tone mapping operator, and then the distance between the tone-mapped image and a predicted one is calculated. The proposed loss function enables us not only to normalize HDR images but also to distribute pixel values of HDR images, like LDR ones. Experimental results show that HDR images predicted by the proposed iTM-Net have higher-quality than HDR ones predicted by conventional inverse tone mapping methods including state-of the-arts, in terms of both HDR-VDP-2.2 and PU encoding + MSSSIM. 
In addition, compared with loss functions not considering the HDR pixel distribution, the proposed loss function is shown to improve the performance of CNNs.},\n  keywords = {data compression;display devices;image processing;optimisation;deep inverse tone mapping;tone mapping operator;training iTM-Net;HDR images;standard loss function;loss function nonlinearly tone-maps;tone-mapped image;conventional inverse tone mapping methods;HDR pixel distribution;Training;Cameras;Convolution;Dynamic range;Sensors;Decoding;Europe;Inverse tone mapping;High dynamic range imaging;Loss function;Deep learning;Convolutional neural networks},\n  doi = {10.23919/EUSIPCO.2019.8902744},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533045.pdf},\n}\n\n
\n
\n\n\n
\n A novel inverse tone mapping network, called “iTM-Net”, is proposed in this paper. For training iTM-Net, we also propose a novel loss function that considers the pixel distribution of HDR images. In inverse tone mapping with CNNs, we first point out that training CNNs with a standard loss function causes a problem, due to the distribution of HDR images. To overcome the problem, the novel loss function non-linearly tone-maps target HDR images into LDR ones on the basis of a tone mapping operator, and the distance between the tone-mapped image and a predicted one is then calculated. The proposed loss function enables us not only to normalize HDR images but also to distribute pixel values of HDR images like LDR ones. Experimental results show that HDR images predicted by the proposed iTM-Net have higher quality than those predicted by conventional inverse tone mapping methods, including the state of the art, in terms of both HDR-VDP-2.2 and PU encoding + MS-SSIM. In addition, compared with loss functions that do not consider the HDR pixel distribution, the proposed loss function is shown to improve the performance of CNNs.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Feature LMS Algorithm for Bandpass System Models.\n \n \n \n \n\n\n \n Diniz, P. S. R.; Yazdanpanah, H.; and Lima, M. V. S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FeaturePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902747,\n  author = {P. S. R. Diniz and H. Yazdanpanah and M. V. S. Lima},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Feature LMS Algorithm for Bandpass System Models},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Sparse representations of model parameters have been widely studied. In the adaptive filtering literature, most studies address the cases where the sparsity is directly observed, therefore, there is a growing interest in developing strategies to exploit hidden sparsity. Recently, the feature LMS (F-LMS) algorithm was proposed to expose the sparsity of models with low- and high-frequency contents. In this paper, the F-LMS algorithm is extended to expose hidden sparsity related to models with bandpass spectrum, including the cases of narrowband and broader passband sources. Some simulation results show that the proposed approaches lead to F-LMS algorithms with fast convergence, low misadjustment after convergence, and low computational cost.},\n  keywords = {adaptive filters;band-pass filters;least mean squares methods;signal representation;model parameters;adaptive filtering literature;hidden sparsity;feature LMS algorithm;high-frequency contents;F-LMS algorithm;bandpass spectrum;bandpass system models;sparse representations;Signal processing algorithms;Passband;Transfer functions;Filtering theory;Convergence;Sparse matrices;Cutoff frequency;adaptive filtering;LMS algorithm;feature matrix;bandpass system;narrowband system},\n  doi = {10.23919/EUSIPCO.2019.8902747},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533248.pdf},\n}\n\n
\n
\n\n\n
\n Sparse representations of model parameters have been widely studied. In the adaptive filtering literature, most studies address cases where the sparsity is directly observed; therefore, there is growing interest in developing strategies to exploit hidden sparsity. Recently, the feature LMS (F-LMS) algorithm was proposed to expose the sparsity of models with low- and high-frequency contents. In this paper, the F-LMS algorithm is extended to expose hidden sparsity in models with a bandpass spectrum, including the cases of narrowband and broader passband sources. Simulation results show that the proposed approaches lead to F-LMS algorithms with fast convergence, low misadjustment after convergence, and low computational cost.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Weighted Sum Rate Maximization for Hybrid Beamforming Design in Multi-Cell Massive MIMO OFDM Systems.\n \n \n \n \n\n\n \n Thomas, C. K.; and Slock, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"WeightedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902748,\n  author = {C. K. Thomas and D. Slock},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Weighted Sum Rate Maximization for Hybrid Beamforming Design in Multi-Cell Massive MIMO OFDM Systems},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we look at hybrid beamforming (HBF) for the MIMO Interfering Broadcast Channel (IBC), i.e. the Multi-Input Multi-Output (MIMO) Multi-User (MU) Multi-Cell downlink channel, in an orthogonal frequency-division multiplexing (OFDM) system. While most of the existing works on wideband hybrid systems focus on single-user systems and a few on multi-user single-cell systems, we consider HBF design for OFDM systems in the case of multi-cell. We look at the maximization of weighted sum rate (WSR) using minorization and alternating optimization, the main advantage of which compared to the Weighted Sum Mean Squared Error (WSMSE) based methods is it's faster convergence to a local optimum and user streams selection. Through Simulation results, we show that the proposed deterministic annealing based approach for phase shifter constrained analog BF performs significantly better than state of the art Weighted Sum Mean Squared Error (WSMSE) or WSR based solutions in a wideband OFDM setting. 
We show that the optimal analog BF can be frequency flat and also provide an analysis of the minimum number of RF chains required to obtain fully digital performance.},\n  keywords = {array signal processing;broadcast channels;cellular radio;mean square error methods;MIMO communication;OFDM modulation;optimisation;Weighted Sum rate maximization;hybrid beamforming design;MultiCell massive MIMO OFDM systems;MIMO Interfering Broadcast Channel;MultiInput MultiOutput MultiUser MultiCell downlink channel;orthogonal frequency-division multiplexing system;wideband hybrid systems;single-user systems;multiuser single-cell systems;alternating optimization;Weighted Sum Mean Squared Error based methods;user streams selection;OFDM;Array signal processing;MIMO communication;Optimization;Convergence;Radio frequency;Wideband},\n  doi = {10.23919/EUSIPCO.2019.8902748},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530794.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we look at hybrid beamforming (HBF) for the MIMO Interfering Broadcast Channel (IBC), i.e. the Multi-Input Multi-Output (MIMO) Multi-User (MU) Multi-Cell downlink channel, in an orthogonal frequency-division multiplexing (OFDM) system. While most existing works on wideband hybrid systems focus on single-user systems, and a few on multi-user single-cell systems, we consider HBF design for OFDM systems in the multi-cell case. We address the maximization of the weighted sum rate (WSR) using minorization and alternating optimization, whose main advantages over Weighted Sum Mean Squared Error (WSMSE) based methods are its faster convergence to a local optimum and user stream selection. Through simulation results, we show that the proposed deterministic annealing based approach for phase-shifter-constrained analog BF performs significantly better than state-of-the-art WSMSE or WSR based solutions in a wideband OFDM setting. We show that the optimal analog BF can be frequency flat and also provide an analysis of the minimum number of RF chains required to obtain fully digital performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rate Balancing for Multiuser Multicell Downlink MIMO Systems.\n \n \n \n \n\n\n \n Ghamnia, I.; Slock, D.; and Yuan-Wu, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RatePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902749,\n  author = {I. Ghamnia and D. Slock and Y. Yuan-Wu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Rate Balancing for Multiuser Multicell Downlink MIMO Systems},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we consider rate balancing problem for the Multiple-Input Multiple-Output (MIMO) Interfering Broadcast Channel (IBC), i.e. the multiuser multicell downlink (DL). We address the MIMO DL beamformer design and power allocation for maximizing the minimum weighted user rate with sum-power constraint with the weighting reflecting user priorities. The proposed solution is based on reformulating the max-min user rate optimization problem into a weighted Mean Squared Error (MSE) balancing problem. Employing MSE duality between DL channel and its equivalent Uplink (UL) channel, we propose an iterative algorithm to jointly design the transceiver filters and the power allocation. Simulation results verify the computational efficiency of the proposed algorithm and provide appreciable performance improvements as compared to optimizing the conventional unweighted per user MSE.},\n  keywords = {array signal processing;broadcast channels;cellular radio;filtering theory;iterative methods;mean square error methods;MIMO communication;minimax techniques;multi-access systems;radio transceivers;radiofrequency interference;wireless channels;user MSE;multiuser multicell downlink MIMO systems;rate balancing problem;multiple-input multiple-output interfering broadcast channel;MIMO DL beamformer design;power allocation;minimum weighted user rate;sum-power constraint;weighting;user priorities;max-min user rate optimization problem;weighted Mean Squared Error balancing problem;MSE duality;DL channel;equivalent Uplink channel;Uplink;Downlink;Optimization;Integrated circuits;Resource management;MIMO communication;Signal to noise ratio;rate balancing;max-min fairness;MSE duality;tranceiver 
optimization;multiuser multicell MIMO systems},\n  doi = {10.23919/EUSIPCO.2019.8902749},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533528.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we consider the rate balancing problem for the Multiple-Input Multiple-Output (MIMO) Interfering Broadcast Channel (IBC), i.e. the multiuser multicell downlink (DL). We address the MIMO DL beamformer design and power allocation for maximizing the minimum weighted user rate under a sum-power constraint, with the weighting reflecting user priorities. The proposed solution is based on reformulating the max-min user rate optimization problem as a weighted Mean Squared Error (MSE) balancing problem. Employing the MSE duality between the DL channel and its equivalent Uplink (UL) channel, we propose an iterative algorithm to jointly design the transceiver filters and the power allocation. Simulation results verify the computational efficiency of the proposed algorithm and show appreciable performance improvements compared to optimizing the conventional unweighted per-user MSE.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Direct Localization by Partly Calibrated Arrays: A Relaxed Maximum Likelihood Solution.\n \n \n \n \n\n\n \n Adler, A.; and Wax, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DirectPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902750,\n  author = {A. Adler and M. Wax},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Direct Localization by Partly Calibrated Arrays: A Relaxed Maximum Likelihood Solution},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We present a novel relaxed maximum likelihood solution to the problem of direct localization of multiple narrow-band sources by partly calibrated arrays, i.e., arrays composed of fully calibrated subarrays yet lacking inter-array calibration. The proposed solution is based on eliminating analytically all the nuisance parameters in the problem, thus reducing the likelihood function to a maximization problem involving only the location of the sources. The performance of the solution is demonstrated via simulations.},\n  keywords = {array signal processing;calibration;direction-of-arrival estimation;maximum likelihood estimation;relaxed maximum likelihood solution;direct localization;narrow-band sources;partly calibrated arrays;fully calibrated subarrays;inter-array calibration;likelihood function;Matrices;Maximum likelihood estimation;Array signal processing;Principal component analysis;Narrowband;Eigenvalues and eigenfunctions;Europe;Partly calibrated arrays;direct localization;relaxed maximum likelihood;signal subspace},\n  doi = {10.23919/EUSIPCO.2019.8902750},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534150.pdf},\n}\n\n
\n
\n\n\n
\n We present a novel relaxed maximum likelihood solution to the problem of direct localization of multiple narrow-band sources by partly calibrated arrays, i.e., arrays composed of fully calibrated subarrays yet lacking inter-array calibration. The proposed solution is based on eliminating analytically all the nuisance parameters in the problem, thus reducing the likelihood function to a maximization problem involving only the location of the sources. The performance of the solution is demonstrated via simulations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On Adaptive Spectrum Estimation of Multivariate Autoregressive Locally Stationary Processes.\n \n \n \n \n\n\n \n Meller, M.; Niedzwiecki, M.; and Chojnacki, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902751,\n  author = {M. Meller and M. Niedzwiecki and D. Chojnacki},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {On Adaptive Spectrum Estimation of Multivariate Autoregressive Locally Stationary Processes},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Autoregressive modeling is a widespread parametric spectrum estimation method. It is well known that, in the case of stationary processes with unknown order, its accuracy can be improved by averaging models of different complexity using suitably chosen weights. The paper proposes an extension of this technique to the case of multivariate locally stationary processes. The proposed solution is based on local autoregressive modeling, and combines model averaging with estimation bandwidth adaptation. Results of simulations demonstrate that the application of the proposed decision rules allows one to outperform the standard approach, which does not include the bandwidth adaptation.},\n  keywords = {adaptive estimation;autoregressive processes;signal processing;adaptive spectrum estimation;multivariate autoregressive locally stationary processes;widespread parametric spectrum estimation method;multivariate locally stationary processes;local autoregressive modeling;combines model;Adaptation models;Estimation;Bandwidth;Reactive power;Spectral analysis;Computational modeling;Bayes methods;spectral estimation;multivariate autoregressive process;model averaging;final prediction error},\n  doi = {10.23919/EUSIPCO.2019.8902751},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532006.pdf},\n}\n\n
\n
\n\n\n
\n Autoregressive modeling is a widespread parametric spectrum estimation method. It is well known that, in the case of stationary processes with unknown order, its accuracy can be improved by averaging models of different complexity using suitably chosen weights. The paper proposes an extension of this technique to the case of multivariate locally stationary processes. The proposed solution is based on local autoregressive modeling, and combines model averaging with estimation bandwidth adaptation. Results of simulations demonstrate that the application of the proposed decision rules allows one to outperform the standard approach, which does not include the bandwidth adaptation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Maximum likelihood convolutional beamformer for simultaneous denoising and dereverberation.\n \n \n \n \n\n\n \n Nakatani, T.; and Kinoshita, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"MaximumPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902753,\n  author = {T. Nakatani and K. Kinoshita},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Maximum likelihood convolutional beamformer for simultaneous denoising and dereverberation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This article describes a probabilistic formulation of a Weighted Power minimization Distortionless response convolutional beamformer (WPD). The WPD unifies a weighted prediction error based dereverberation method (WPE) and a minimum power distortionless response beamformer (MPDR) into a single convolutional beamformer, and achieves simultaneous dereverberation and denoising in an optimal way. However, the optimization criterion is obtained simply by combining existing criteria without any clear theoretical justification. This article presents a generative model and a probabilistic formulation of a WPD, and derives an optimization algorithm based on a maximum likelihood estimation. We also describe a method for estimating the steering vector of the desired signal by utilizing WPE within the WPD framework to provide an effective and efficient beamformer for denoising and dereverberation.},\n  keywords = {array signal processing;convolution;maximum likelihood estimation;optimisation;signal denoising;vectors;simultaneous dereverberation;optimization criterion;probabilistic formulation;maximum likelihood estimation;WPD framework;maximum likelihood convolutional beamformer;simultaneous denoising;Weighted Power minimization Distortionless response convolutional beamformer;weighted prediction error based dereverberation method;minimum power distortionless response beamformer;single convolutional beamformer;WPE framework;steering vector;Microphones;Maximum likelihood estimation;Noise reduction;Reverberation;Convolution;Array signal processing;Denoising;dereverberation;microphone array;speech enhancement;maximum likelihood estimation},\n  doi = {10.23919/EUSIPCO.2019.8902753},\n  
issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531885.pdf},\n}\n\n
\n
\n\n\n
\n This article describes a probabilistic formulation of a Weighted Power minimization Distortionless response convolutional beamformer (WPD). The WPD unifies a weighted prediction error based dereverberation method (WPE) and a minimum power distortionless response beamformer (MPDR) into a single convolutional beamformer, and achieves simultaneous dereverberation and denoising in an optimal way. However, its optimization criterion was obtained simply by combining existing criteria, without any clear theoretical justification. This article presents a generative model and a probabilistic formulation of the WPD, and derives an optimization algorithm based on maximum likelihood estimation. We also describe a method for estimating the steering vector of the desired signal by utilizing WPE within the WPD framework, to provide an effective and efficient beamformer for denoising and dereverberation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Full Covariance Fitting DoA Estimation Using Partial Relaxation Framework.\n \n \n \n \n\n\n \n Schenck, D.; Trinh, M.; Mestre, H. X.; Viberg, M.; and Pesavento, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FullPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902758,\n  author = {D. Schenck and M. Trinh and H. X. Mestre and M. Viberg and M. Pesavento},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Full Covariance Fitting DoA Estimation Using Partial Relaxation Framework},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The so-called Partial Relaxation approach has recently been proposed to solve the Direction-of-Arrival estimation problem. In this paper, we extend the previous work by applying Covariance Fitting with a data model that includes the noise covariance. Instead of applying a single source approximation to multi-source estimation criteria, which is the case for MUSIC, the conventional beamformer, or the Capon beamformer, the Partial Relaxation approach accounts for the existence of multiple sources using a non-parametric modification of the signal model. In the Partial Relaxation framework, the structure of the desired direction is kept, whereas the sensor array manifold corresponding to the remaining signals is relaxed [1], [2]. This procedure allows computing a closed-form solution for the relaxed signal part and leads to a simple spectral search with a significantly reduced computational complexity. Unlike in the existing Partial Relaxed Covariance Fitting approach, in this paper we utilize more prior knowledge of the structure of the covariance matrix by also considering the noise covariance. Simulation results show that the proposed method outperforms the existing Partial Relaxed Covariance Fitting method, especially in difficult conditions with small sample size and low Signal-to-Noise Ratio. Its threshold performance is close to that of Deterministic Maximum Likelihood, but at significantly lower cost.},\n  keywords = {array signal processing;computational complexity;covariance matrices;direction-of-arrival estimation;partial relaxation framework;noise covariance;single source approximation;multisource estimation criteria;Capon beamformer;signal model;covariance matrix;signal-to-noise ratio;full covariance fitting DoA estimation;partial relaxed covariance fitting method;direction-of-arrival estimation problem;sensor array manifold;Covariance matrices;Fitting;Direction-of-arrival estimation;Estimation;Optimization;Eigenvalues and eigenfunctions;Manifolds},\n  doi = {10.23919/EUSIPCO.2019.8902758},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531866.pdf},\n}\n\n
\n
\n\n\n
\n The so-called Partial Relaxation approach has recently been proposed to solve the Direction-of-Arrival estimation problem. In this paper, we extend the previous work by applying Covariance Fitting with a data model that includes the noise covariance. Instead of applying a single source approximation to multi-source estimation criteria, which is the case for MUSIC, the conventional beamformer, or the Capon beamformer, the Partial Relaxation approach accounts for the existence of multiple sources using a non-parametric modification of the signal model. In the Partial Relaxation framework, the structure of the desired direction is kept, whereas the sensor array manifold corresponding to the remaining signals is relaxed [1], [2]. This procedure allows computing a closed-form solution for the relaxed signal part and leads to a simple spectral search with a significantly reduced computational complexity. Unlike in the existing Partial Relaxed Covariance Fitting approach, in this paper we utilize more prior knowledge of the structure of the covariance matrix by also considering the noise covariance. Simulation results show that the proposed method outperforms the existing Partial Relaxed Covariance Fitting method, especially in difficult conditions with small sample size and low Signal-to-Noise Ratio. Its threshold performance is close to that of Deterministic Maximum Likelihood, but at significantly lower cost.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Comparing Optimization Methods of Neural Networks for Real-time Inference.\n \n \n \n \n\n\n \n Khan, M.; Lunnikivi, H.; Huttunen, H.; and Boutellier, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ComparingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902760,\n  author = {M. Khan and H. Lunnikivi and H. Huttunen and J. Boutellier},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Comparing Optimization Methods of Neural Networks for Real-time Inference},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper compares three different optimization approaches for accelerating the inference of convolutional neural networks (CNNs). We compare the techniques of separable convolution, weight pruning, and binarization. Each method is implemented and empirically compared in three aspects: preservation of accuracy, storage requirements, and achieved speed-up. Experiments are performed both on a desktop computer and on a mobile platform using a CNN model for vehicle type classification. Our experiments show that the largest speed-up is achieved by binarization, whereas pruning achieves the largest reduction in storage requirements. Both of these approaches largely preserve the accuracy of the original network.},\n  keywords = {convolution;convolutional neural nets;image classification;inference mechanisms;mobile computing;optimisation;traffic engineering computing;optimization methods;real-time inference;optimization;convolutional neural networks;CNNs;separable convolution;weight pruning;binarization;storage requirements;desktop computer;mobile platform;CNN model;vehicle type classification;Convolution;Optimization;Computational modeling;Training;Sparse matrices;Memory management;Europe;convolutional neural networks;model optimization;image classification},\n  doi = {10.23919/EUSIPCO.2019.8902760},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533587.pdf},\n}\n\n
\n
\n\n\n
\n This paper compares three different optimization approaches for accelerating the inference of convolutional neural networks (CNNs). We compare the techniques of separable convolution, weight pruning, and binarization. Each method is implemented and empirically compared in three aspects: preservation of accuracy, storage requirements, and achieved speed-up. Experiments are performed both on a desktop computer and on a mobile platform using a CNN model for vehicle type classification. Our experiments show that the largest speed-up is achieved by binarization, whereas pruning achieves the largest reduction in storage requirements. Both of these approaches largely preserve the accuracy of the original network.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On spectral embeddings for supervised binaural source localization.\n \n \n \n \n\n\n \n Taseska, M.; and v. Waterschoot, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902761,\n  author = {M. Taseska and T. v. Waterschoot},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {On spectral embeddings for supervised binaural source localization},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Advances in data-driven signal processing have resulted in impressively accurate signal and parameter estimation algorithms in many applications. A common element in such algorithms is the replacement of hand-crafted features extracted from the signals, by data-driven representations. In this paper, we discuss low-dimensional representations obtained using spectral methods and their application to binaural sound localization. Our work builds upon recent studies on the low-dimensionality of the binaural cues manifold, which postulate that for a given acoustic environment and microphone setup, the source locations are the primary factors of variability in the measured signals. We provide a study of selected linear and non-linear spectral dimensionality reduction methods and their ability to accurately preserve neighborhoods, as defined by the source locations. The low-dimensional representations are then evaluated in a nearest-neighbor regression framework for localization using a dataset of dummy head recordings.},\n  keywords = {acoustic signal processing;feature extraction;microphones;parameter estimation;regression analysis;spectral embeddings;supervised binaural source localization;data-driven signal processing;parameter estimation algorithms;hand-crafted features;data-driven representations;low-dimensional representations;spectral methods;binaural sound localization;low-dimensionality;binaural cues manifold;microphone setup;acoustic environment;feature extraction;nearest-neighbor regression framework;Microphones;Europe;Signal processing;Manifolds;Acoustics;Position measurement;Dimensionality reduction;binaural source localization;dimensionality reduction;manifold learning},\n  doi = {10.23919/EUSIPCO.2019.8902761},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531317.pdf},\n}\n\n
\n
\n\n\n
\n Advances in data-driven signal processing have resulted in impressively accurate signal and parameter estimation algorithms in many applications. A common element in such algorithms is the replacement of hand-crafted features extracted from the signals, by data-driven representations. In this paper, we discuss low-dimensional representations obtained using spectral methods and their application to binaural sound localization. Our work builds upon recent studies on the low-dimensionality of the binaural cues manifold, which postulate that for a given acoustic environment and microphone setup, the source locations are the primary factors of variability in the measured signals. We provide a study of selected linear and non-linear spectral dimensionality reduction methods and their ability to accurately preserve neighborhoods, as defined by the source locations. The low-dimensional representations are then evaluated in a nearest-neighbor regression framework for localization using a dataset of dummy head recordings.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Hybrid FSO/RF-FSO Systems over Generalized Málaga Distributed Channels with Pointing Errors.\n \n \n \n\n\n \n Bag, B.; Das, A.; Bose, C.; and Chandra, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902762,\n  author = {B. Bag and A. Das and C. Bose and A. Chandra},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Hybrid FSO/RF-FSO Systems over Generalized Málaga Distributed Channels with Pointing Errors},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The combined effect of Málaga distributed atmospheric turbulence and pointing errors on the performance of a hybrid free-space optical (FSO)/ radio-frequency (RF)-FSO communication system is presented and analyzed in this paper. We considered three performance metrics, namely, outage probability, average bit error rate (BER) and average capacity. For each of the performance metrics, closed-form expressions are derived in terms of the Meijer's G function, analytical results are validated by Monte Carlo simulations and numerical values of the metrics are plotted for different channel conditions. Compared to a single FSO link or a single RF-FSO link, the proposed adaptive hybrid system achieves reduced outage probability, reduced BER and enhanced channel capacity at the cost of extra hardware.},\n  keywords = {atmospheric turbulence;channel capacity;error statistics;free-space optical communication;Monte Carlo methods;optical links;probability;telecommunication network reliability;wireless channels;adaptive hybrid system;outage probability;enhanced channel capacity;generalized Málaga distributed channels;pointing errors;performance metrics;average capacity;single FSO link;single RF-FSO link;Radio frequency;Signal to noise ratio;Power system reliability;Probability;Measurement;Relays;Data communication;Free-space optics;Málaga distribution;adaptive RF-FSO link;amplify-and-forward relay},\n  doi = {10.23919/EUSIPCO.2019.8902762},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The combined effect of Málaga distributed atmospheric turbulence and pointing errors on the performance of a hybrid free-space optical (FSO)/ radio-frequency (RF)-FSO communication system is presented and analyzed in this paper. We considered three performance metrics, namely, outage probability, average bit error rate (BER) and average capacity. For each of the performance metrics, closed-form expressions are derived in terms of the Meijer's G function, analytical results are validated by Monte Carlo simulations and numerical values of the metrics are plotted for different channel conditions. Compared to a single FSO link or a single RF-FSO link, the proposed adaptive hybrid system achieves reduced outage probability, reduced BER and enhanced channel capacity at the cost of extra hardware.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive Pre-whitening Based on Parametric NMF.\n \n \n \n \n\n\n \n Jaramillo, A. E.; Nielsen, J. K.; and Christensen, M. G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AdaptivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902763,\n  author = {A. E. Jaramillo and J. K. Nielsen and M. G. Christensen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive Pre-whitening Based on Parametric NMF},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Several speech processing methods assume that a clean signal is observed in white Gaussian noise (WGN). An argument against those methods is that the WGN assumption is not valid in many real acoustic scenarios. To take into account the coloured nature of the noise, a pre-whitening filter which renders the background noise closer to white can be applied. This paper introduces an adaptive pre-whitener based on a supervised non-negative matrix factorization (NMF), in which a pre-trained dictionary includes parametrized spectral information about the noise and speech sources in the form of autoregressive (AR) coefficients. Results show that the noise can get closer to white, in comparison to pre-whiteners based on conventional noise power spectral density (PSD) estimates such as minimum statistics and MMSE. A better pitch estimation accuracy can be achieved as well. Speech enhancement based on the WGN assumption shows a similar performance to the conventional enhancement which makes use of the background noise PSD estimate, which reveals that the proposed pre-whitener can preserve the signal of interest.},\n  keywords = {filtering theory;Gaussian noise;least mean squares methods;matrix decomposition;spectral analysis;speech enhancement;speech enhancement;WGN assumption;background noise PSD estimate;adaptive pre-whitening;parametric NMF;clean signal;white Gaussian noise;acoustic scenarios;pre-whitening filter;adaptive pre-whitener;nonnegative matrix factorization;pre-trained dictionary;speech sources;conventional noise power spectral density;Estimation;Noise measurement;Databases;Speech enhancement;Colored noise;Training;pre-whitening;NMF;spectral flatness;pitch estimation;speech enhancement},\n  doi = {10.23919/EUSIPCO.2019.8902763},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528043.pdf},\n}\n\n
\n
\n\n\n
\n Several speech processing methods assume that a clean signal is observed in white Gaussian noise (WGN). An argument against those methods is that the WGN assumption is not valid in many real acoustic scenarios. To take into account the coloured nature of the noise, a pre-whitening filter which renders the background noise closer to white can be applied. This paper introduces an adaptive pre-whitener based on a supervised non-negative matrix factorization (NMF), in which a pre-trained dictionary includes parametrized spectral information about the noise and speech sources in the form of autoregressive (AR) coefficients. Results show that the noise can get closer to white, in comparison to pre-whiteners based on conventional noise power spectral density (PSD) estimates such as minimum statistics and MMSE. A better pitch estimation accuracy can be achieved as well. Speech enhancement based on the WGN assumption shows a similar performance to the conventional enhancement which makes use of the background noise PSD estimate, which reveals that the proposed pre-whitener can preserve the signal of interest.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Improved Feature Extraction Method for Texture Classification with Increased Noise Robustness.\n \n \n \n \n\n\n \n Barburiceanu, S. R.; Meza, S.; Germain, C.; and Terebes, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902765,\n  author = {S. R. Barburiceanu and S. Meza and C. Germain and R. Terebes},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {An Improved Feature Extraction Method for Texture Classification with Increased Noise Robustness},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper presents an improved feature extraction method based on the use of state-of-the-art filtering techniques and Local Binary Patterns-derived feature descriptors with applications in texture classification. The method is adaptive, being capable of determining the type of noise present in the input image and applying the appropriate operator for the filtering step of the feature extraction technique. The improved approaches, labelled BM3DELBP (Block Matching and 3D filtering Extended Local Binary Pattern) and SARBM3DELBP (Synthetic Aperture Radar Block Matching and 3D filtering Extended Local Binary Pattern), bring significant improvements both in terms of robustness to Gaussian and speckle noise and in terms of classification accuracy, being invariant to different image transformations. We tested our approach both on synthetic textures from two standard Outex databases and on real polarimetric Synthetic Aperture Radar (SAR) images of pine forests. On all considered databases, the proposed approach proved to be above state-of-the-art LBP variants in terms of classification accuracy, even in the presence of high Gaussian and speckle noise levels.},\n  keywords = {feature extraction;filtering theory;image classification;image denoising;image matching;image representation;image texture;radar computing;radar imaging;speckle;synthetic aperture radar;improved feature extraction method;texture classification;increased noise robustness;state-of-the-art filtering techniques;Local Binary Patterns-derived feature descriptors;filtering step;feature extraction technique;improved approaches;labelled BM3DELBP;Extended Local Binary Pattern;SARBM3DELBP;Synthetic Aperture Radar Block Matching;classification accuracy;synthetic textures;polarimetric Synthetic Aperture Radar images;speckle noise levels;Feature extraction;Filtering;Three-dimensional displays;Histograms;Speckle;Training;Lighting;texture classification;Local Binary Patterns;Block Matching;3D filtering;noise robustness;feature extraction;SAR images;speckle noise;Gaussian noise},\n  doi = {10.23919/EUSIPCO.2019.8902765},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529541.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents an improved feature extraction method based on the use of state-of-the-art filtering techniques and Local Binary Patterns-derived feature descriptors with applications in texture classification. The method is adaptive, being capable of determining the type of noise present in the input image and applying the appropriate operator for the filtering step of the feature extraction technique. The improved approaches, labelled BM3DELBP (Block Matching and 3D filtering Extended Local Binary Pattern) and SARBM3DELBP (Synthetic Aperture Radar Block Matching and 3D filtering Extended Local Binary Pattern), bring significant improvements both in terms of robustness to Gaussian and speckle noise and in terms of classification accuracy, being invariant to different image transformations. We tested our approach both on synthetic textures from two standard Outex databases and on real polarimetric Synthetic Aperture Radar (SAR) images of pine forests. On all considered databases, the proposed approach proved to be above state-of-the-art LBP variants in terms of classification accuracy, even in the presence of high Gaussian and speckle noise levels.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modified U-Net for Automatic Brain Tumor Regions Segmentation.\n \n \n \n \n\n\n \n Kaewrak, K.; Soraghan, J.; Caterina, G. D.; and Grose, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ModifiedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902767,\n  author = {K. Kaewrak and J. Soraghan and G. D. Caterina and D. Grose},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Modified U-Net for Automatic Brain Tumor Regions Segmentation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Novel deep learning based network architectures are investigated for advanced brain tumor image classification and segmentation. Variations in brain tumor characteristics together with limited labelled datasets represent significant challenges in automatic brain tumor segmentation. In this paper, we present a novel architecture based on the U-Net that incorporates both global and local feature extraction paths to improve the segmentation accuracy. The results included in the paper show superior performance of the novel segmentation for five tumor regions on the large BRATs 2018 dataset over other approaches.},\n  keywords = {biomedical MRI;brain;feature extraction;image classification;image segmentation;learning (artificial intelligence);medical image processing;tumours;automatic brain tumor image classification;deep learning based network architectures;automatic brain tumor region segmentation;global feature extraction paths;modified U-net;segmentation accuracy;local feature extraction paths;Tumors;Image segmentation;Magnetic resonance imaging;Convolution;Feature extraction;Training;Image resolution;segmentation;tumor;u-net},\n  doi = {10.23919/EUSIPCO.2019.8902767},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533944.pdf},\n}\n\n
\n
\n\n\n
\n Novel deep learning based network architectures are investigated for advanced brain tumor image classification and segmentation. Variations in brain tumor characteristics together with limited labelled datasets represent significant challenges in automatic brain tumor segmentation. In this paper, we present a novel architecture based on the U-Net that incorporates both global and local feature extraction paths to improve the segmentation accuracy. The results included in the paper show superior performance of the novel segmentation for five tumor regions on the large BRATs 2018 dataset over other approaches.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Channel Hardening, Favorable Equalization and Propagation in Wideband Massive MIMO.\n \n \n \n \n\n\n \n Dardari, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ChannelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902768,\n  author = {D. Dardari},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Channel Hardening, Favorable Equalization and Propagation in Wideband Massive MIMO},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper analyzes the channel hardening and favorable propagation behavior of frequency-selective massive MIMO channels. To this purpose the concept of favorable equalization is introduced to characterize the property of the channel to become frequency flat as the number of antennas grows when proper pre-filtering is adopted. It is shown that classic OFDM-based massive MIMO and time-reversal schemes, usually considered and analyzed as different technologies, are particular cases of the same framework. Their generalization leads to the concept of massive waveforming, which allows the creation of parallel wideband AWGN-like links between the base station and the users.},\n  keywords = {antenna arrays;broadband networks;equalisers;MIMO communication;OFDM modulation;wireless channels;channel hardening;favorable equalization;wideband massive;favorable propagation behavior;frequency-selective massive MIMO channels;classic OFDM-based massive MIMO;time-reversal schemes;massive waveforming;Massive MIMO;OFDM;Wideband;Antennas;Reactive power;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2019.8902768},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528216.pdf},\n}\n\n
\n
\n\n\n
\n This paper analyzes the channel hardening and favorable propagation behavior of frequency-selective massive MIMO channels. To this purpose the concept of favorable equalization is introduced to characterize the property of the channel to become frequency flat as the number of antennas grows when proper pre-filtering is adopted. It is shown that classic OFDM-based massive MIMO and time-reversal schemes, usually considered and analyzed as different technologies, are particular cases of the same framework. Their generalization leads to the concept of massive waveforming, which allows the creation of parallel wideband AWGN-like links between the base station and the users.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Tensor-Train Modeling for Mimo-OFDM Tensor Coding-and-Forwarding Relay Systems.\n \n \n \n \n\n\n \n Zniyed, Y.; Boyer, R.; de Almeida , A. L. F.; and Favier, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Tensor-TrainPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902770,\n  author = {Y. Zniyed and R. Boyer and A. L. F. {de Almeida} and G. Favier},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Tensor-Train Modeling for Mimo-OFDM Tensor Coding-and-Forwarding Relay Systems},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we consider a new one-way two-hop amplify-and-forward (AF) relaying scheme with a tensor space-time coding under frequency-selective fading channels. The signals received at the destination of the multi-input multi-output (MIMO) system define a 6-order tensor which satisfies a tensor-train decomposition (TTD). We propose a new TTD-based receiver for a joint channel and symbol estimation. The proposed receiver avoids the use of long training sequences and resorts to very few pilots to provide unique estimates of the individual channel matrices and the symbol matrix. Numerical simulations show the performance of the new proposed TTD-based semi-blind receiver.},\n  keywords = {amplify and forward communication;channel estimation;fading channels;matrix algebra;MIMO communication;OFDM modulation;radio receivers;relay networks (telecommunication);space-time codes;tensors;tensor-train modeling;tensor space-time coding;frequency-selective fading channels;multiinput multioutput system;6-order tensor;tensor-train decomposition;TTD-based receiver;long training sequences;individual channel matrices;TTD-based semiblind receiver;MIMO-OFDM tensor coding-and-forwarding relay systems;one-way two-hop amplify-and-forward relaying scheme;joint channel and symbol estimation;Tensors;Relays;Encoding;MIMO communication;Channel estimation;Estimation;Receivers;Channel estimation;MIMO relay systems;tensor coding;tensor-train decomposition;semi-blind receiver},\n  doi = {10.23919/EUSIPCO.2019.8902770},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533793.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we consider a new one-way two-hop amplify-and-forward (AF) relaying scheme with a tensor space-time coding under frequency-selective fading channels. The signals received at the destination of the multi-input multi-output (MIMO) system define a 6-order tensor which satisfies a tensor-train decomposition (TTD). We propose a new TTD-based receiver for a joint channel and symbol estimation. The proposed receiver avoids the use of long training sequences and resorts to very few pilots to provide unique estimates of the individual channel matrices and the symbol matrix. Numerical simulations show the performance of the new proposed TTD-based semi-blind receiver.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Multitarget Tracking Method for Estimating Carotid Artery Wall Motion from Ultrasound Sequences.\n \n \n \n \n\n\n \n Dorazil, J.; Repp, R.; Kropfreiter, T.; Prüller, R.; Říha, K.; and Hlawatsch, F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902772,\n  author = {J. Dorazil and R. Repp and T. Kropfreiter and R. Prüller and K. Říha and F. Hlawatsch},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Multitarget Tracking Method for Estimating Carotid Artery Wall Motion from Ultrasound Sequences},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Analyzing the motion of the wall of the common carotid artery (CCA) yields effective indicators for atherosclerosis. In this work, we explore the use of multitarget tracking techniques for estimating the time-varying CCA radius from an ultrasound video sequence. We employ the joint integrated probabilistic data association (JIPDA) filter to track a set of “feature points” (FPs) located around the CCA wall cross section. Subsequently, we estimate the time-varying CCA radius via a non-linear least-squares method and a Kalman filter. The application of the JIPDA filter is enabled by a linearized state-space model describing the quasi-periodic movement of the FPs and the measurement extraction process. Simulation results using the Field II ultrasound simulation program show that the proposed multitarget tracking method can outperform a state-of-the-art method.},\n  keywords = {biomechanics;biomedical ultrasonics;blood vessels;cardiovascular system;filtering theory;Kalman filters;medical image processing;sensor fusion;target tracking;field II ultrasound simulation program;multitarget tracking techniques;effective indicators;common carotid artery;ultrasound sequences;carotid artery wall motion;multitarget tracking method;linearized state-space model;JIPDA filter;Kalman filter;least-squares method;time-varying CCA radius;CCA wall cross section;FPs;joint integrated probabilistic data association filter;ultrasound video sequence;Radar tracking;Tracking;Speckle;Ultrasonic imaging;Carotid arteries;Time measurement;Clutter;Atherosclerosis;common carotid artery;ultrasound video processing;speckle tracking;multitarget tracking;joint integrated probabilistic data association (JIPDA) filter},\n  doi = {10.23919/EUSIPCO.2019.8902772},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533896.pdf},\n}\n\n
\n
\n\n\n
\n Analyzing the motion of the wall of the common carotid artery (CCA) yields effective indicators for atherosclerosis. In this work, we explore the use of multitarget tracking techniques for estimating the time-varying CCA radius from an ultrasound video sequence. We employ the joint integrated probabilistic data association (JIPDA) filter to track a set of “feature points” (FPs) located around the CCA wall cross section. Subsequently, we estimate the time-varying CCA radius via a non-linear least-squares method and a Kalman filter. The application of the JIPDA filter is enabled by a linearized state-space model describing the quasi-periodic movement of the FPs and the measurement extraction process. Simulation results using the Field II ultrasound simulation program show that the proposed multitarget tracking method can outperform a state-of-the-art method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On Detecting and Handling target occlusions in Correlation-filter-based 2D tracking.\n \n \n \n \n\n\n \n Karakostas, I.; Mygdalis, V.; Tefas, A.; and Pitas, I.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902773,\n  author = {I. Karakostas and V. Mygdalis and A. Tefas and I. Pitas},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {On Detecting and Handling target occlusions in Correlation-filter-based 2D tracking},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper focuses on the application of 2D visual object tracking in Unmanned Aerial Vehicles (UAV) for the coverage of live outdoor events, by filming moving targets (e.g., athletes, boats, cars etc.). In this application scenario, a 2D target tracker visually assists the UAV pilot (or cameraman) to maintain proper target framing, or it is employed for autonomous UAV operation. It should be expected that in such scenarios, the 2D tracker may fail due to target occlusions, illumination variations, fast 3D target motion, etc., thus, the 2D tracker should be able to recover from such situations. The proposed long-term 2D tracking algorithm solves exactly this problem, by detecting occlusions from the 2D tracker responses. Moreover, according to the immensity of the occlusion, the tracker may stop updating the tracker model or try to re-detect the target in a broader frame region. 
Experimental results indicate that our proposed tracking algorithm outperforms state-of-the-art correlation filter trackers in UAV-oriented visual tracking benchmarks, as well as in realistic UAV cinematography applications.},\n  keywords = {autonomous aerial vehicles;cinematography;filtering theory;object detection;object tracking;target tracking;target occlusions;correlation-filter-based 2D tracking;2D visual object tracking;unmanned aerial vehicles;autonomous UAV operation;long-term 2D;UAV cinematography applications;UAV pilot;Target tracking;Two dimensional displays;Visualization;Unmanned aerial vehicles;Signal processing algorithms;Support vector machines;2D visual object tracking;Occlusion-detection;Fast motion change.},\n  doi = {10.23919/EUSIPCO.2019.8902773},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570527762.pdf},\n}\n\n
\n
\n\n\n
\n This paper focuses on the application of 2D visual object tracking in Unmanned Aerial Vehicles (UAV) for the coverage of live outdoor events, by filming moving targets (e.g., athletes, boats, cars etc.). In this application scenario, a 2D target tracker visually assists the UAV pilot (or cameraman) to maintain proper target framing, or it is employed for autonomous UAV operation. It should be expected that in such scenarios, the 2D tracker may fail due to target occlusions, illumination variations, fast 3D target motion, etc., thus, the 2D tracker should be able to recover from such situations. The proposed long-term 2D tracking algorithm solves exactly this problem, by detecting occlusions from the 2D tracker responses. Moreover, according to the immensity of the occlusion, the tracker may stop updating the tracker model or try to re-detect the target in a broader frame region. Experimental results indicate that our proposed tracking algorithm outperforms state-of-the-art correlation filter trackers in UAV-oriented visual tracking benchmarks, as well as in realistic UAV cinematography applications.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian Estimation of Recurrent Changepoints for Signal Segmentation and Anomaly Detection.\n \n \n \n \n\n\n \n Reich, C.; Nicolaou, C.; Mansour, A.; and Laerhoven, K. V.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902774,\n  author = {C. Reich and C. Nicolaou and A. Mansour and K. V. Laerhoven},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian Estimation of Recurrent Changepoints for Signal Segmentation and Anomaly Detection},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Signal segmentation is a generic task in many time series applications. We propose approaching it via Bayesian changepoint algorithms, i.e., by assigning segments between changepoints. When successive signals show a recurrent change-point pattern, estimating changepoint recurrence is beneficial for two reasons: While recurrent changepoints yield more robust signal segment estimates, non-recurrent changepoints bear valuable information for unsupervised anomaly detection. This study introduces the changepoint recurrence distribution (CPRD) as an empirical estimate of the recurrent behavior of observed changepoints. Two generic methods for incorporating the estimated CPRD into the process of assessing recurrence of future changepoints are suggested. The knowledge of non-recurrent changepoints arising from one of these methods allows additional unsupervised anomaly detection. 
The quality of both changepoint recurrence estimation via the CPRD and changepoint-related signal segmentation with unsupervised anomaly detection is verified in a proof-of-concept study for two exemplary machine tool monitoring tasks.},\n  keywords = {Bayes methods;signal processing;time series;changepoint-related signal segmentation;unsupervised anomaly detection;Bayesian estimation;recurrent changepoints;time series applications;Bayesian changepoint algorithms;recurrent change-point pattern;changepoint recurrence distribution;recurrent behavior;estimated CPRD;changepoint recurrence estimation;time series;Hazards;Anomaly detection;Estimation;Feature extraction;Hidden Markov models;Machine tools;Bayes methods;Bayesian methods;online learning;signal segmentation;anomaly detection},\n  doi = {10.23919/EUSIPCO.2019.8902774},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533286.pdf},\n}\n\n
\n
\n\n\n
\n Signal segmentation is a generic task in many time series applications. We propose approaching it via Bayesian changepoint algorithms, i.e., by assigning segments between changepoints. When successive signals show a recurrent change-point pattern, estimating changepoint recurrence is beneficial for two reasons: While recurrent changepoints yield more robust signal segment estimates, non-recurrent changepoints bear valuable information for unsupervised anomaly detection. This study introduces the changepoint recurrence distribution (CPRD) as an empirical estimate of the recurrent behavior of observed changepoints. Two generic methods for incorporating the estimated CPRD into the process of assessing recurrence of future changepoints are suggested. The knowledge of non-recurrent changepoints arising from one of these methods allows additional unsupervised anomaly detection. The quality of both changepoint recurrence estimation via the CPRD and changepoint-related signal segmentation with unsupervised anomaly detection is verified in a proof-of-concept study for two exemplary machine tool monitoring tasks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Resource Optimization for Cognitive Satellite Systems with Incumbent Terrestrial Receivers.\n \n \n \n \n\n\n \n Louchart, A.; Ciblat, P.; and d. Kerret, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ResourcePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902775,\n  author = {A. Louchart and P. Ciblat and P. d. Kerret},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Resource Optimization for Cognitive Satellite Systems with Incumbent Terrestrial Receivers},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We address the resource optimization issue for communications from terrestrial users to a multi-beam satellite when the bandwidth is shared with incumbent primary terrestrial systems. As a consequence, the satellite system is limited by interference temperature in order not to disturb the incumbent systems. Compared to the state of the art, we propose a relevant way to manage the interference constraints on the incumbent systems. Simulations exhibit a substantial gain in data rate when the number of incumbent systems grows.},\n  keywords = {cognitive radio;radio spectrum management;radiofrequency interference;satellite communication;cognitive satellite systems;incumbent terrestrial receivers;resource optimization issue;terrestrial users;multibeam satellite;incumbent primary terrestrial systems;satellite system;incumbent systems;Satellite broadcasting;Receivers;Interference;Optimization;Resource management;Satellites;Decoding},\n  doi = {10.23919/EUSIPCO.2019.8902775},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530778.pdf},\n}\n\n
\n
\n\n\n
\n We address the resource optimization issue for communications from terrestrial users to a multi-beam satellite when the bandwidth is shared with incumbent primary terrestrial systems. As a consequence, the satellite system is limited by interference temperature in order not to disturb the incumbent systems. Compared to the state of the art, we propose a relevant way to manage the interference constraints on the incumbent systems. Simulations exhibit a substantial gain in data rate when the number of incumbent systems grows.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Quality of Recognition Case Study: Texture-based Segmentation and MRI Quality Assessment.\n \n \n \n \n\n\n \n Rodrigues, R.; and Pinheiro, A. M. G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902776,\n  author = {R. Rodrigues and A. M. G. Pinheiro},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Quality of Recognition Case Study: Texture-based Segmentation and MRI Quality Assessment},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Muscle texture may be used as a descriptive feature for the segmentation of skeletal muscle in Magnetic Resonance Images (MRI). However, MRI acquisition is not always ideal and the texture richness might become compromised. Moreover, research into texture quality metrics, particularly no-reference metrics, applicable to the specific context of MRI is still at a very early stage. In this paper, a case study is established from a texture-based segmentation approach for skeletal muscle, which was tested on a thigh Dixon MRI database. Based on the obtained performance measures, the relation between objective image quality and MRI texture richness is explored, considering a set of state-of-the-art no-reference image quality metrics. 
A discussion on the effectiveness of existing quality assessment methods in measuring MRI texture quality is carried out, based on Pearson and Spearman correlation outcomes.},\n  keywords = {biomedical MRI;image segmentation;image texture;medical image processing;muscle;state-of-the-art no-reference image quality metrics;quality assessment methods;MRI texture quality;texture MRI richness;objective image quality;thigh Dixon MRI database;texture-based segmentation approach;no-reference metrics;texture quality metrics;texture richness;MRI acquisition;magnetic resonance images;skeletal muscle;descriptive feature;muscle texture;MRI quality assessment;recognition case study;Magnetic Resonance Imaging;Objective Quality Assessment;Quality of Recognition (QoR);MRI Segmentation},\n  doi = {10.23919/EUSIPCO.2019.8902776},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534133.pdf},\n}\n\n
\n
\n\n\n
\n Muscle texture may be used as a descriptive feature for the segmentation of skeletal muscle in Magnetic Resonance Images (MRI). However, MRI acquisition is not always ideal and the texture richness might become compromised. Moreover, research into texture quality metrics, particularly no-reference metrics, applicable to the specific context of MRI is still at a very early stage. In this paper, a case study is established from a texture-based segmentation approach for skeletal muscle, which was tested on a thigh Dixon MRI database. Based on the obtained performance measures, the relation between objective image quality and MRI texture richness is explored, considering a set of state-of-the-art no-reference image quality metrics. A discussion on the effectiveness of existing quality assessment methods in measuring MRI texture quality is carried out, based on Pearson and Spearman correlation outcomes.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n End-to-End Language Identification Using a Residual Convolutional Neural Network with Attentive Temporal Pooling.\n \n \n \n \n\n\n \n Monteiro, J.; Alam, J.; Bhattacharya, G.; and Falk, T. H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"End-to-EndPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902777,\n  author = {J. Monteiro and J. Alam and G. Bhattacharya and T. H. Falk},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {End-to-End Language Identification Using a Residual Convolutional Neural Network with Attentive Temporal Pooling},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this work, we tackle the problem of end-to-end language identification from speech. To this end, we propose the use of a residual convolutional neural network aiming at exploiting the ability of such architectures to take into account large contextual segments of input data. Moreover, in order for variable input lengths to be supported by the proposed setting, a self-attention mechanism is employed on top of the final convolutional layer. This results in a learnable temporal feature pooling scheme that allows for embedding varying duration utterances into a fixed dimension space. Evaluation is performed on data containing ten oriental languages under different test conditions, namely: short-duration recordings, confusing languages trials, as well as a set of trials in which non-target unseen languages are included. End-to-end evaluation of the proposed framework is thus shown to significantly outperform well-known benchmark methods under considered evaluation conditions.},\n  keywords = {convolutional neural nets;natural language processing;speech processing;attentive temporal pooling;end-to-end language identification;residual convolutional neural network;learnable temporal feature pooling scheme;duration utterance;Training;Convolution;Convolutional neural networks;Computer architecture;Benchmark testing;Computational modeling;Language identification;Residual convolutional neural networks;Attentive features pooling},\n  doi = {10.23919/EUSIPCO.2019.8902777},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534005.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we tackle the problem of end-to-end language identification from speech. To this end, we propose the use of a residual convolutional neural network aiming at exploiting the ability of such architectures to take into account large contextual segments of input data. Moreover, in order for variable input lengths to be supported by the proposed setting, a self-attention mechanism is employed on top of the final convolutional layer. This results in a learnable temporal feature pooling scheme that allows for embedding varying duration utterances into a fixed dimension space. Evaluation is performed on data containing ten oriental languages under different test conditions, namely: short-duration recordings, confusing languages trials, as well as a set of trials in which non-target unseen languages are included. End-to-end evaluation of the proposed framework is thus shown to significantly outperform well-known benchmark methods under considered evaluation conditions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Graph Signal Processing Framework for Atrial Activity Extraction.\n \n \n \n \n\n\n \n Sun, M.; Isufi, E.; de Groot , N. M. S.; and Hendriks, R. C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902778,\n  author = {M. Sun and E. Isufi and N. M. S. {de Groot} and R. C. Hendriks},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Graph Signal Processing Framework for Atrial Activity Extraction},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Atrial fibrillation (AF) is a common cardiac arrhythmia and its mechanisms are not yet fully understood. Analyzing atrial epicardial electrograms (EGMs) is important to understand the mechanisms underlying AF. However, when measuring the atrial activity (AA), the electrogram is commonly distorted by the far-field ventricular activity (VA). During sinus rhythm, the AA and the VA are separated in time. However, the VA often overlaps with the AA in both time and frequency domain during AF, complicating proper analysis of the AA. Unlike traditional methods, this work explores graph signal processing (GSP) tools for AA extraction in EGMs. Since EGMs are time-varying and non-stationary, we put forward the joint graph and short-time Fourier transform to analyze the graph signal along both time and vertices. It is found that the temporal frequency components of the AA and the VA exhibit different levels of spatial variation over the graph in the joint domain. Subsequently, we exploit these findings to propose a novel algorithm for extracting the AA based on graph smoothness. 
Experimental results on synthetic and real data show that the smoothness analysis of the EGMs over the atrial area enables us to better extract the AA.},\n  keywords = {bioelectric phenomena;electrocardiography;Fourier transforms;graph theory;medical signal detection;medical signal processing;atrial activity extraction;graph signal processing framework;atrial area;graph smoothness;joint graph;AA extraction;graph signal processing tools;far-field ventricular activity;EGMs;atrial epicardial electrograms;common cardiac arrhythmia;atrial fibrillation;Atrial fibrillation;atrial activity extraction;graph-time signal processing;graph smoothness},\n  doi = {10.23919/EUSIPCO.2019.8902778},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533363.pdf},\n}\n\n
\n
\n\n\n
\n Atrial fibrillation (AF) is a common cardiac arrhythmia and its mechanisms are not yet fully understood. Analyzing atrial epicardial electrograms (EGMs) is important to understand the mechanisms underlying AF. However, when measuring the atrial activity (AA), the electrogram is commonly distorted by the far-field ventricular activity (VA). During sinus rhythm, the AA and the VA are separated in time. However, the VA often overlaps with the AA in both time and frequency domain during AF, complicating proper analysis of the AA. Unlike traditional methods, this work explores graph signal processing (GSP) tools for AA extraction in EGMs. Since EGMs are time-varying and non-stationary, we put forward the joint graph and short-time Fourier transform to analyze the graph signal along both time and vertices. It is found that the temporal frequency components of the AA and the VA exhibit different levels of spatial variation over the graph in the joint domain. Subsequently, we exploit these findings to propose a novel algorithm for extracting the AA based on graph smoothness. Experimental results on synthetic and real data show that the smoothness analysis of the EGMs over the atrial area enables us to better extract the AA.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Underwater Acoustic Channel Estimation and Equalization via Adaptive Filtering and Sparse Approximation.\n \n \n \n \n\n\n \n Crombez, S.; Petraglia, M. R.; and Petraglia, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"UnderwaterPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902780,\n  author = {S. Crombez and M. R. Petraglia and A. Petraglia},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Underwater Acoustic Channel Estimation and Equalization via Adaptive Filtering and Sparse Approximation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper presents a method for the identification and equalization of an underwater acoustic (UWA) channel, which is modeled as a Multi-Scale Multi-Lag (MSML) channel. The proposed approach consists of identifying the parameters of the different paths which form the UWA model using a bank of adaptive subfilters, which are applied to scaled versions of the transmitted signal and updated by considering the channel sparseness property. We first verify the accuracy of the identification procedure and then advance to a channel equalization stage using the parameters obtained during the identification process. The equalization performance is evaluated for different signal-to-noise ratios.},\n  keywords = {adaptive filters;channel estimation;equalisers;underwater acoustic communication;scaled versions;channel sparseness property;identification procedure;channel equalization stage;identification process;equalization performance;adaptive filtering;sparse approximation;underwater acoustic channel estimation;MultiScale MultiLag channel;UWA model;adaptive subfilters;Channel estimation;Receivers;OFDM;Signal processing algorithms;Signal processing;Delays;Channel models;Underwater acoustic channel modeling;wireless transmission;adaptive filtering;sparse systems},\n  doi = {10.23919/EUSIPCO.2019.8902780},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533695.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a method for the identification and equalization of an underwater acoustic (UWA) channel, which is modeled as a Multi-Scale Multi-Lag (MSML) channel. The proposed approach consists of identifying the parameters of the different paths which form the UWA model using a bank of adaptive subfilters, which are applied to scaled versions of the transmitted signal and updated by considering the channel sparseness property. We first verify the accuracy of the identification procedure and then advance to a channel equalization stage using the parameters obtained during the identification process. The equalization performance is evaluated for different signal-to-noise ratios.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Detecting the Rank of a Symmetric Tensor.\n \n \n \n \n\n\n \n Marmin, A.; Castella, M.; and Pesquet, J. -.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DetectingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902781,\n  author = {A. Marmin and M. Castella and J. -C. Pesquet},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Detecting the Rank of a Symmetric Tensor},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper deals with the problem of Canonical Polyadic (CP) decomposition of a given tensor. Standard algorithms to perform this decomposition generally require the knowledge of the rank of the sought tensor decomposition. Yet, determining the rank of a given tensor is generally hard. In this paper, we propose a method to find the rank of a symmetric tensor. We reformulate the CP decomposition problem into a truncated moment problem and we derive a sufficient condition to certify the rank of the tensor from the rank of some moment matrices associated with it. For tensors with rank not exceeding a prescribed value, this sufficient condition is also necessary. Finally, we propose to combine our rank detection procedure with existing algorithms. Experimental results confirm the validity of our method and illustrate its practical use. Our method provides the correct rank even in the presence of a moderate level of noise.},\n  keywords = {matrix algebra;tensors;CP decomposition problem;truncated moment problem;rank detection procedure;symmetric tensor;canonical polyadic decomposition;tensor decomposition;Tensors;Symmetric matrices;Matrix decomposition;Indexes;Europe;Signal processing;Tools},\n  doi = {10.23919/EUSIPCO.2019.8902781},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532597.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the problem of Canonical Polyadic (CP) decomposition of a given tensor. Standard algorithms to perform this decomposition generally require the knowledge of the rank of the sought tensor decomposition. Yet, determining the rank of a given tensor is generally hard. In this paper, we propose a method to find the rank of a symmetric tensor. We reformulate the CP decomposition problem into a truncated moment problem and we derive a sufficient condition to certify the rank of the tensor from the rank of some moment matrices associated with it. For tensors with rank not exceeding a prescribed value, this sufficient condition is also necessary. Finally, we propose to combine our rank detection procedure with existing algorithms. Experimental results confirm the validity of our method and illustrate its practical use. Our method provides the correct rank even in the presence of a moderate level of noise.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reduced-complexity downlink cell-free mmWave Massive MIMO systems with fronthaul constraints.\n \n \n \n \n\n\n \n Femenias, G.; and Riera-Palou, F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Reduced-complexityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902784,\n  author = {G. Femenias and F. Riera-Palou},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Reduced-complexity downlink cell-free mmWave Massive MIMO systems with fronthaul constraints},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Cell-free architectures have recently emerged as a promising approach with the potential to offer equal user rates throughout the coverage area. Given the spectral congestion at sub-6 GHz bands, there is a pressing interest in evaluating the cell-free performance in the mmWave regime. This paper addresses the design and performance evaluation of the downlink segment of a cell-free mmWave Massive MIMO system using hybrid precoders under the realistic assumption of capacity-constrained fronthaul links. Towards this end, a hybrid digital-analog beamforming is proposed where the high-dimensional analog part only depends on second-order large-scale information. The low-dimensional digital part can then be implemented using standard precoding techniques that rely on instantaneous CSI. 
Numerical results demonstrate that this reduced-complexity architecture, when combined with an adequate user selection (scheduling), attains excellent Max-Min performance when operating under limited-fronthaul constraints.},\n  keywords = {array signal processing;millimetre wave communication;MIMO communication;minimax techniques;precoding;radio links;radio transceivers;wireless channels;reduced-complexity downlink cell-free;cell-free architectures;equal user-rates;coverage area;spectral congestion;sub-6 GHz bands;pressing interest;cell-free performance;mmWave regime;performance evaluation;downlink segment;cell-free mmWave Massive MIMO system;hybrid precoders;capacity-constrained fronthaul links;digital-analog beamforming;high-dimensional analog part;low-dimensional digital part;reduced-complexity architecture;excellent Max-Min performance;limited-fronthaul constraints;frequency 6.0 GHz;Radio frequency;Massive MIMO;Channel estimation;Antenna arrays;Training;Baseband;Downlink},\n  doi = {10.23919/EUSIPCO.2019.8902784},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530016.pdf},\n}\n\n
\n
\n\n\n
\n Cell-free architectures have recently emerged as a promising approach with the potential to offer equal user rates throughout the coverage area. Given the spectral congestion at sub-6 GHz bands, there is a pressing interest in evaluating the cell-free performance in the mmWave regime. This paper addresses the design and performance evaluation of the downlink segment of a cell-free mmWave Massive MIMO system using hybrid precoders under the realistic assumption of capacity-constrained fronthaul links. Towards this end, a hybrid digital-analog beamforming is proposed where the high-dimensional analog part only depends on second-order large-scale information. The low-dimensional digital part can then be implemented using standard precoding techniques that rely on instantaneous CSI. Numerical results demonstrate that this reduced-complexity architecture, when combined with an adequate user selection (scheduling), attains excellent Max-Min performance when operating under limited-fronthaul constraints.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Gaze Tracking by Joint Head and Eye Pose Estimation Under Free Head Movement.\n \n \n \n \n\n\n \n Cristina, S.; and Camilleri, K. P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"GazePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902786,\n  author = {S. Cristina and K. P. Camilleri},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Gaze Tracking by Joint Head and Eye Pose Estimation Under Free Head Movement},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Recent trends in the field of eye-gaze tracking have been shifting towards the estimation of gaze direction in everyday life settings, hence calling for methods that alleviate the constraints typically associated with existing methods, which limit their applicability in less controlled conditions. In this paper, we propose a method for eye-gaze estimation as a function of both eye and head pose components, without requiring prolonged user-cooperation prior to gaze estimation. Our method exploits the trajectories of salient feature trackers spread randomly over the face region for the estimation of the head rotation angles, which are subsequently used to drive a spherical eye-in-head rotation model that compensates for the changes in eye region appearance under head rotation. We investigate the validity of the proposed method on a publicly available data set.},\n  keywords = {eye;face recognition;feature extraction;gaze tracking;pose estimation;joint head;free head movement;eye-gaze tracking;gaze direction;everyday life settings;controlled conditions;eye-gaze estimation;prolonged user-cooperation;salient feature trackers;head rotation angles;spherical eye-in-head rotation model;eye region appearance;Faces;Estimation;Shape;Tracking;Kalman filters;Feature extraction;Eye-gaze tracking;pervasive;passive},\n  doi = {10.23919/EUSIPCO.2019.8902786},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532682.pdf},\n}\n\n
\n
\n\n\n
\n Recent trends in the field of eye-gaze tracking have been shifting towards the estimation of gaze direction in everyday life settings, hence calling for methods that alleviate the constraints typically associated with existing methods, which limit their applicability in less controlled conditions. In this paper, we propose a method for eye-gaze estimation as a function of both eye and head pose components, without requiring prolonged user-cooperation prior to gaze estimation. Our method exploits the trajectories of salient feature trackers spread randomly over the face region for the estimation of the head rotation angles, which are subsequently used to drive a spherical eye-in-head rotation model that compensates for the changes in eye region appearance under head rotation. We investigate the validity of the proposed method on a publicly available data set.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Distributed Processing for Large Scale MIMO Detection.\n \n \n \n \n\n\n \n Ouameur, M. A.; and Massicotte, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902787,\n  author = {M. A. Ouameur and D. Massicotte},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Distributed Processing for Large Scale MIMO Detection},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In large scale multiple-input multiple-output (MIMO), high spectral and energy efficiencies come at the expense of high computational complexity in baseband processing. Many contributions have been proposed to reduce such complexity, for instance using matrix inversion approximation techniques. On the other hand, to reduce the constraint on the interconnects' bandwidth, a few decentralized processing techniques have emerged. Here, we propose a computationally efficient technique based on embedding a single Gauss-Seidel iteration within every ADMM-based detection iteration. The simulations are performed using an LTE-like TDD-OFDM frame structure and waveform, under perfect and non-perfect channel state information (CSI). Early results reveal that the proposed ADMM-GS algorithm can outperform the centralised GS-based processing technique in the high-SNR region and high-load regime. In addition, ADMM-GS's performance exhibits relatively less sensitivity to channel estimation error, a characteristic inherited from the centralised GS technique.},\n  keywords = {channel estimation;computational complexity;iterative methods;matrix inversion;MIMO communication;OFDM modulation;wireless channels;high load regime;addition ADMM-GS' performance;centralised GS technique;efficient distributed processing;MIMO detection;high spectral energy efficiencies;high computational complexity baseband processing;matrix inversion approximation techniques;computationally efficient technique;single Gauss-Seidel iteration;detection iteration;nonperfect channel state information;ADMM-GS algorithm;technique processing;high SNR region;decentralized processing techniques;large scale multiple-input multiple-output;Computational complexity;Antennas;Massive MIMO;Uplink;Distributed processing;Large scale multiple-input multiple-output (MIMO);zero forcing (ZF) detection;Maximum ratio combining (MRC);receiver combining;Gauss Seidel (GS);alternating direction method of multipliers (ADMM)},\n  doi = {10.23919/EUSIPCO.2019.8902787},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533752.pdf},\n}\n\n
\n
\n\n\n
\n In large scale multiple-input multiple-output (MIMO), high spectral and energy efficiencies come at the expense of high computational complexity in baseband processing. Many contributions have been proposed to reduce such complexity, for instance using matrix inversion approximation techniques. On the other hand, to reduce the constraint on the interconnects' bandwidth, a few decentralized processing techniques have emerged. Here, we propose a computationally efficient technique based on embedding a single Gauss-Seidel iteration within every ADMM-based detection iteration. The simulations are performed using an LTE-like TDD-OFDM frame structure and waveform, under perfect and non-perfect channel state information (CSI). Early results reveal that the proposed ADMM-GS algorithm can outperform the centralised GS-based processing technique in the high-SNR region and high-load regime. In addition, ADMM-GS's performance exhibits relatively less sensitivity to channel estimation error, a characteristic inherited from the centralised GS technique.\n
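The Gauss-Seidel step that this abstract embeds inside each ADMM detection iteration is, at its core, the classical stationary solver for a linear system. The sketch below is a generic illustration of that iteration for A x = b (convergence assumed, e.g. for diagonally dominant or symmetric positive-definite A); it is not the authors' distributed ADMM-GS detector.

```python
import numpy as np

def gauss_seidel(A, b, iters=50, x0=None):
    """Solve A x = b with the Gauss-Seidel iteration.

    Assumes A is square with nonzero diagonal; convergence is
    guaranteed e.g. for diagonally dominant or SPD matrices.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        for i in range(n):
            # Use the already-updated entries x[:i] within the sweep --
            # the in-place update that distinguishes Gauss-Seidel from Jacobi.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x
```

In a MIMO detection context, A would typically be a regularized Gram matrix of the channel (e.g. HᴴH + σ²I) and b the matched-filter output Hᴴy, but those specifics are not taken from this paper.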
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Self-Localization of Distributed Microphone Arrays Using Directional Statistics with DoA Estimation Reliability.\n \n \n \n \n\n\n \n Woźniak, S.; Kowalczyk, K.; and Cobos, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Self-LocalizationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902788,\n  author = {S. Woźniak and K. Kowalczyk and M. Cobos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Self-Localization of Distributed Microphone Arrays Using Directional Statistics with DoA Estimation Reliability},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper addresses the problem of self-localization of distributed microphone arrays from microphone recordings by following a two-step optimization procedure. In the first step, the relative geometry of the sources and arrays is inferred by the proposed maximum likelihood estimator. It is derived under the assumption that the acquired unit-norm vectors pointing towards the unknown source positions follow a von Mises-Fisher distribution in a D-dimensional space. In the second step, the absolute positions and synchronization offsets between the arrays are estimated from the inferred relative geometry by using the Least Squares procedure. To improve the accuracy of the method, we also propose the use of a reliability measure for the estimated Directions of Arrival based on the presented directional statistics model. The results of numerical experiments confirm the validity of the proposed approach.},\n  keywords = {array signal processing;audio signal processing;direction-of-arrival estimation;geometry;least squares approximations;maximum likelihood estimation;microphone arrays;optimisation;reliability;synchronisation;vectors;distributed microphone arrays;microphone recordings;two-step optimization procedure;maximum likelihood estimator;unit-norm vectors;unknown source positions;von Mises-Fisher distribution;inferred relative geometry;directional statistics model;DoA estimation reliability self-localization;directional statistics;D-dimensional space;synchronization offsets;least squares procedure;direction of arrival estimation;reliability measure;Direction-of-arrival estimation;Geometry;Microphone arrays;Reliability;Maximum likelihood estimation;microphone arrays;wireless acoustic sensor networks;distributed sensor networks;geometry calibration;maximum likelihood;directional statistics;circular statistics},\n  doi = {10.23919/EUSIPCO.2019.8902788},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533828.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of self-localization of distributed microphone arrays from microphone recordings by following a two-step optimization procedure. In the first step, the relative geometry of the sources and arrays is inferred by the proposed maximum likelihood estimator. It is derived under the assumption that the acquired unit-norm vectors pointing towards the unknown source positions follow a von Mises-Fisher distribution in a D-dimensional space. In the second step, the absolute positions and synchronization offsets between the arrays are estimated from the inferred relative geometry by using the Least Squares procedure. To improve the accuracy of the method, we also propose the use of a reliability measure for the estimated Directions of Arrival based on the presented directional statistics model. The results of numerical experiments confirm the validity of the proposed approach.\n
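As background to the von Mises-Fisher assumption in this abstract: the maximum-likelihood mean direction of a vMF sample is simply the normalized resultant vector, and the concentration is commonly approximated as in Banerjee et al. The sketch below fits a generic vMF model; it is not the paper's joint geometry/reliability estimator.

```python
import numpy as np

def vmf_mle(X):
    """ML estimates for a von Mises-Fisher sample.

    X : (n, d) array of unit-norm observations.
    Returns the mean direction mu_hat (exact MLE) and the widely used
    Banerjee et al. approximation of the concentration kappa_hat
    (undefined when all samples coincide, i.e. r_bar == 1).
    """
    n, d = X.shape
    resultant = X.sum(axis=0)
    r_norm = np.linalg.norm(resultant)
    mu_hat = resultant / r_norm          # normalized resultant vector
    r_bar = r_norm / n                   # mean resultant length in [0, 1)
    kappa_hat = r_bar * (d - r_bar**2) / (1.0 - r_bar**2)
    return mu_hat, kappa_hat
```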
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Heart Disease Detection Architecture for Lead I Off-the-Person ECG Monitoring Devices.\n \n \n \n \n\n\n \n Sá, P.; Aidos, H.; Roma, N.; and Tomás, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"HeartPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902791,\n  author = {P. Sá and H. Aidos and N. Roma and P. Tomás},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Heart Disease Detection Architecture for Lead I Off-the-Person ECG Monitoring Devices},\n  year = {2019},\n  pages = {1-5},\n  abstract = {With the rise of smart-watches and other wearables, off-the-person electrocardiography is gaining momentum as high-quality Lead 1 ECG signals can now be acquired from a person's hands or arms. Although several heart disease detection algorithms have been described in recent years, they are not designed considering Lead 1-only setups. This work bridges this gap with an architecture for a robust Lead 1 real-time heart disease detection system and an FPGA-based implementation. The proposed system is based on a signal processing pipeline composed of: ECG signal denoising; heartbeat detection and segmentation; extraction of dynamic morphological features; and heartbeat classification (standard and different abnormal heartbeats). Resorting to the only database from MIT's PhysioBank with Lead 1 annotated recordings, InCarTDb, the proposed pipeline resulted in a 4-class model with a classification accuracy of up to 96.5%. Moreover, when implemented on a Zynq-7 ZC702 Evaluation Board, the proposed architecture requires less than 30% of the FPGA resources and a total power consumption of 192 mW at a clock frequency of 35 MHz.},\n  keywords = {bioelectric potentials;diseases;electrocardiography;field programmable gate arrays;medical signal detection;medical signal processing;signal classification;signal denoising;wearable devices;high-quality lead 1 ECG signals;lead 1 real-time heart disease detection system;ECG signal processing pipeline;heartbeat classification;heartbeat segmentation;heartbeat detection;ECG signal denoising;FPGA-based implementation;heart disease detection algorithms;off-the-person electrocardiography;smart-watches;off-the-person ECG monitoring devices;heart disease detection architecture;power 192.0 mW;frequency 35.0 MHz;Feature extraction;Electrocardiography;Lead;Heart beat;Computer architecture;ECG analysis;cardiac pathology identification;hardware architecture;real-time processing},\n  doi = {10.23919/EUSIPCO.2019.8902791},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533866.pdf},\n}\n\n
\n
\n\n\n
\n With the rise of smart-watches and other wearables, off-the-person electrocardiography is gaining momentum as high-quality Lead 1 ECG signals can now be acquired from a person's hands or arms. Although several heart disease detection algorithms have been described in recent years, they are not designed considering Lead 1-only setups. This work bridges this gap with an architecture for a robust Lead 1 real-time heart disease detection system and an FPGA-based implementation. The proposed system is based on a signal processing pipeline composed of: ECG signal denoising; heartbeat detection and segmentation; extraction of dynamic morphological features; and heartbeat classification (standard and different abnormal heartbeats). Resorting to the only database from MIT's PhysioBank with Lead 1 annotated recordings, InCarTDb, the proposed pipeline resulted in a 4-class model with a classification accuracy of up to 96.5%. Moreover, when implemented on a Zynq-7 ZC702 Evaluation Board, the proposed architecture requires less than 30% of the FPGA resources and a total power consumption of 192 mW at a clock frequency of 35 MHz.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Graph Signal Processing Approach to Direction of Arrival Estimation.\n \n \n \n \n\n\n \n Moreira, L. A. S.; Ramos, A. L. L.; de Campos , M. L. R.; Apolinário, J. A.; and Serrenho, F. G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902792,\n  author = {L. A. S. Moreira and A. L. L. Ramos and M. L. R. {de Campos} and J. A. Apolinário and F. G. Serrenho},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Graph Signal Processing Approach to Direction of Arrival Estimation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This work presents a new approach, based on Graph Signal Processing, to estimate the direction of arrival (DoA) of an incoming narrowband signal impinging on an array of sensors. By building directed graphs related to both a uniform linear sensor array and a time series representing the signal at each sensor, we use the concepts of graph product and graph Fourier transform to form an objective function from the coefficients of the signal represented in an eigenvector basis. Simulation results have shown that the method achieves estimates with competitive precision in comparison to classical DoA estimation methods, with good results obtained even in the presence of multipath and interference. The proposed method is suitable for parallel implementations and its computational complexity tends to decrease when used repeatedly.},\n  keywords = {computational complexity;directed graphs;direction-of-arrival estimation;eigenvalues and eigenfunctions;Fourier transforms;sensor arrays;signal processing;time series;Graph Signal Processing approach;narrowband signal;uniform linear sensor array;time series;graph product;classical DoA estimation methods;directed graphs;graph Fourier transform;objective function;eigenvectors basis;parallel implementations;computational complexity;Direction-of-arrival estimation;Sensor arrays;Microphones;Fourier transforms;Estimation;Direction of arrival;array signal processing;Graph Fourier Transform;narrowband DoA estimation},\n  doi = {10.23919/EUSIPCO.2019.8902792},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529387.pdf},\n}\n\n
\n
\n\n\n
\n This work presents a new approach, based on Graph Signal Processing, to estimate the direction of arrival (DoA) of an incoming narrowband signal impinging on an array of sensors. By building directed graphs related to both a uniform linear sensor array and a time series representing the signal at each sensor, we use the concepts of graph product and graph Fourier transform to form an objective function from the coefficients of the signal represented in an eigenvector basis. Simulation results have shown that the method achieves estimates with competitive precision in comparison to classical DoA estimation methods, with good results obtained even in the presence of multipath and interference. The proposed method is suitable for parallel implementations and its computational complexity tends to decrease when used repeatedly.\n
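For reference, the graph Fourier transform named in this abstract is conventionally defined through the eigendecomposition of a graph Laplacian. The sketch below uses the standard undirected combinatorial Laplacian L = D − W, whereas the paper works with directed graph products, so treat this only as a minimal illustration of the transform itself.

```python
import numpy as np

def graph_fourier(W, signal):
    """Graph Fourier transform of a signal on an undirected graph.

    W : (n, n) symmetric adjacency matrix; signal : (n,) node values.
    Projects the signal onto the eigenvectors of L = D - W; the
    eigenvalues play the role of graph frequencies.
    """
    L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)    # ascending "frequencies"
    coeffs = eigvecs.T @ signal             # spectral coefficients
    return eigvals, coeffs
```

For a connected graph, a constant signal is "DC": all of its energy lands in the single zero-eigenvalue coefficient.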
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Asymptotic Karlin-Rubin’s Theorem with Application to Signal Detection in a Subspace Cone.\n \n \n \n\n\n \n Bourmani, S.; Socheleau, F. -X.; and Pastor, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902793,\n  author = {S. Bourmani and F. -X. Socheleau and D. Pastor},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Asymptotic Karlin-Rubin’s Theorem with Application to Signal Detection in a Subspace Cone},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We first propose an asymptotic formulation of Karlin-Rubin's theorem that relies on the weak convergence of a sequence of random vectors to design Asymptotically Uniformly Most Powerful (AUMP) tests dedicated to composite hypotheses. This general property of optimality is then applied to the problem of testing whether the energy of a signal projected onto a known subspace exceeds a specified proportion of its total energy. The signal is assumed unknown deterministic and it is observed in independent and additive white Gaussian noise. Such a problem can arise when the signal to be detected obeys the linear subspace model and when it is corrupted by unknown interference. It can also be relevant in machine learning applications where one wants to check whether an assumed linear model fits the analyzed data. For this problem, where it is shown that no Uniformly Most Powerful (UMP) and no UMP invariant tests exist, an AUMP invariant test is derived.},\n  keywords = {AWGN;convergence;signal detection;vectors;machine learning applications;UMP invariant tests;AUMP invariant test;signal detection;subspace cone;asymptotic formulation;weak convergence;random vectors;composite hypotheses;independent Gaussian noise;additive white Gaussian noise;linear subspace model;unknown interference;asymptotic Karlin-Rubin theorem;asymptotically uniformly most powerful test design;Testing;Interference;Probability density function;Convergence;Signal to noise ratio;Europe},\n  doi = {10.23919/EUSIPCO.2019.8902793},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n We first propose an asymptotic formulation of Karlin-Rubin's theorem that relies on the weak convergence of a sequence of random vectors to design Asymptotically Uniformly Most Powerful (AUMP) tests dedicated to composite hypotheses. This general property of optimality is then applied to the problem of testing whether the energy of a signal projected onto a known subspace exceeds a specified proportion of its total energy. The signal is assumed unknown deterministic and it is observed in independent and additive white Gaussian noise. Such a problem can arise when the signal to be detected obeys the linear subspace model and when it is corrupted by unknown interference. It can also be relevant in machine learning applications where one wants to check whether an assumed linear model fits the analyzed data. For this problem, where it is shown that no Uniformly Most Powerful (UMP) and no UMP invariant tests exist, an AUMP invariant test is derived.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Towards Automatic Glaucoma Assessment: An Encoder-decoder CNN for Retinal Layer Segmentation in Rodent OCT images.\n \n \n \n \n\n\n \n d. Amor, R.; Morales, S.; n. Colomer, A.; Mossi, J. M.; Woldbye, D.; Klemp, K.; Larsen, M.; and Naranjo, V.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"TowardsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902794,\n  author = {R. d. Amor and S. Morales and A. n. Colomer and J. M. Mossi and D. Woldbye and K. Klemp and M. Larsen and V. Naranjo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Towards Automatic Glaucoma Assessment: An Encoder-decoder CNN for Retinal Layer Segmentation in Rodent OCT images},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Optical coherence tomography (OCT) is an important imaging modality that is used frequently to monitor the state of retinal layers both in humans and animals. Automated OCT analysis in rodents is an important method to study the possible toxic effect of treatments before the test in humans. In this paper, an automatic method to detect the most significant retinal layers in rat OCT images is presented. This algorithm is based on an encoder-decoder fully convolutional network (FCN) architecture combined with a robust method of post-processing. After validation, it was demonstrated that the proposed method outperforms the commercial Insight image segmentation software. We obtained results (averaged absolute distance error) of 2.52 ± 0.80 μm on the test set of the training database. On a different database (used only for testing), we also achieve promising results of 4.45 ± 3.02 μm.},\n  keywords = {biomedical optical imaging;convolutional neural nets;diseases;eye;image coding;image segmentation;medical image processing;optical tomography;automatic glaucoma assessment;encoder-decoder CNN;retinal layer segmentation;rodent OCT images;optical coherence tomography;imaging modality;automated OCT analysis;automatic method;rat OCT;encoder-decoder fully convolutional network architecture;robust method;image segmentation software;toxic effect;Image segmentation;Retina;Training;Rats;Rodents;Databases;Convolution;Optical coherence tomography;rodent OCT;layer segmentation;convolutional neural network;glaucoma assessment},\n  doi = {10.23919/EUSIPCO.2019.8902794},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533863.pdf},\n}\n\n
\n
\n\n\n
\n Optical coherence tomography (OCT) is an important imaging modality that is used frequently to monitor the state of retinal layers both in humans and animals. Automated OCT analysis in rodents is an important method to study the possible toxic effect of treatments before the test in humans. In this paper, an automatic method to detect the most significant retinal layers in rat OCT images is presented. This algorithm is based on an encoder-decoder fully convolutional network (FCN) architecture combined with a robust method of post-processing. After validation, it was demonstrated that the proposed method outperforms the commercial Insight image segmentation software. We obtained results (averaged absolute distance error) of 2.52 ± 0.80 μm on the test set of the training database. On a different database (used only for testing), we also achieve promising results of 4.45 ± 3.02 μm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Random Matrix-Improved Estimation of the Wasserstein Distance between two Centered Gaussian Distributions.\n \n \n \n \n\n\n \n Tiomoko, M.; and Couillet, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RandomPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902795,\n  author = {M. Tiomoko and R. Couillet},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Random Matrix-Improved Estimation of the Wasserstein Distance between two Centered Gaussian Distributions},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This article proposes a method to consistently estimate functionals (1/p) Σ_{i=1}^{p} f(λ_i(C_1 C_2)) of the eigenvalues of the product of two covariance matrices C_1, C_2 ∈ R^{p×p}, based on the empirical estimates λ_i(Ĉ_1 Ĉ_2), where Ĉ_a = (1/n_a) Σ_{i=1}^{n_a} x_i^{(a)} x_i^{(a)⊤}, when the size p and the number n_a of the (zero-mean) samples x_i^{(a)} are similar. As a corollary, a consistent estimate of the Wasserstein distance (related to the case f(t) = √t) between centered Gaussian distributions is derived. The new estimate is shown to largely outperform the classical sample covariance-based `plug-in' estimator. Based on this finding, a practical application to covariance estimation is then devised which demonstrates potentially significant performance gains with respect to state-of-the-art alternatives.},\n  keywords = {covariance analysis;covariance matrices;eigenvalues and eigenfunctions;Gaussian distribution;random processes;covariance-based plug-in estimator;covariance matrices;centered Gaussian distributions;Wasserstein distance;random matrix-improved estimation;Covariance matrices;Eigenvalues and eigenfunctions;Estimation;Signal processing;Gaussian distribution;Europe;Sociology},\n  doi = {10.23919/EUSIPCO.2019.8902795},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533356.pdf},\n}\n\n
\n
\n\n\n
\n This article proposes a method to consistently estimate functionals (1/p) Σ_{i=1}^{p} f(λ_i(C_1 C_2)) of the eigenvalues of the product of two covariance matrices C_1, C_2 ∈ R^{p×p}, based on the empirical estimates λ_i(Ĉ_1 Ĉ_2), where Ĉ_a = (1/n_a) Σ_{i=1}^{n_a} x_i^{(a)} x_i^{(a)⊤}, when the size p and the number n_a of the (zero-mean) samples x_i^{(a)} are similar. As a corollary, a consistent estimate of the Wasserstein distance (related to the case f(t) = √t) between centered Gaussian distributions is derived. The new estimate is shown to largely outperform the classical sample covariance-based `plug-in' estimator. Based on this finding, a practical application to covariance estimation is then devised which demonstrates potentially significant performance gains with respect to state-of-the-art alternatives.\n
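The population quantity this estimator targets has a well-known closed form: for centered Gaussians, W₂²(N(0, C₁), N(0, C₂)) = tr C₁ + tr C₂ − 2 tr((C₁^{1/2} C₂ C₁^{1/2})^{1/2}). The sketch below evaluates that closed form from known covariance matrices; it is not the article's random matrix-improved estimator, which works from sample covariances in the p ∼ n_a regime.

```python
import numpy as np

def wasserstein2_gaussian(C1, C2):
    """2-Wasserstein distance between N(0, C1) and N(0, C2).

    Uses the closed form
      W2^2 = tr(C1) + tr(C2) - 2 tr((C1^{1/2} C2 C1^{1/2})^{1/2}),
    with matrix square roots via symmetric eigendecomposition.
    """
    def sqrtm_sym(C):
        # Square root of a symmetric PSD matrix: V diag(sqrt(s)) V^T.
        vals, vecs = np.linalg.eigh(C)
        return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

    s1 = sqrtm_sym(C1)
    cross = sqrtm_sym(s1 @ C2 @ s1)
    w2_sq = np.trace(C1) + np.trace(C2) - 2.0 * np.trace(cross)
    return np.sqrt(max(w2_sq, 0.0))  # clip tiny negative round-off
```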
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Unsupervised Medical Image Translation Using Cycle-MedGAN.\n \n \n \n \n\n\n \n Armanious, K.; Jiang, C.; Abdulatif, S.; Küstner, T.; Gatidis, S.; and Yang, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"UnsupervisedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902799,\n  author = {K. Armanious and C. Jiang and S. Abdulatif and T. Küstner and S. Gatidis and B. Yang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Unsupervised Medical Image Translation Using Cycle-MedGAN},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Image-to-image translation is a new field in computer vision with multiple potential applications in the medical domain. However, for supervised image translation frameworks, co-registered datasets, paired in a pixel-wise sense, are required. This is often difficult to acquire in realistic medical scenarios. On the other hand, unsupervised translation frameworks often result in blurred translated images with unrealistic details. In this work, we propose a new unsupervised translation framework which is titled Cycle-MedGAN. The proposed framework utilizes new non-adversarial cycle losses which direct the framework to minimize the textural and perceptual discrepancies in the translated images. Qualitative and quantitative comparisons against other unsupervised translation approaches demonstrate the performance of the proposed framework for PET-CT translation and MR motion correction.},\n  keywords = {biomedical MRI;computer vision;image registration;medical image processing;positron emission tomography;unsupervised medical image translation;Cycle-MedGAN;image-to-image translation;medical domain;supervised image translation frameworks;blurred translated images;nonadversarial cycle losses;computer vision;PET-CT translation;MR motion correction;Feature extraction;Biomedical imaging;Task analysis;Computed tomography;Training;Generators;Signal processing;Medical image translation;Unsupervised Learning;PET-CT;GANs;Motion Correction},\n  doi = {10.23919/EUSIPCO.2019.8902799},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533095.pdf},\n}\n\n
\n
\n\n\n
\n Image-to-image translation is a new field in computer vision with multiple potential applications in the medical domain. However, for supervised image translation frameworks, co-registered datasets, paired in a pixel-wise sense, are required. This is often difficult to acquire in realistic medical scenarios. On the other hand, unsupervised translation frameworks often result in blurred translated images with unrealistic details. In this work, we propose a new unsupervised translation framework which is titled Cycle-MedGAN. The proposed framework utilizes new non-adversarial cycle losses which direct the framework to minimize the textural and perceptual discrepancies in the translated images. Qualitative and quantitative comparisons against other unsupervised translation approaches demonstrate the performance of the proposed framework for PET-CT translation and MR motion correction.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hybrid Octree-Plane Point Cloud Geometry Coding.\n \n \n \n \n\n\n \n Dricot, A.; and Ascenso, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"HybridPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902800,\n  author = {A. Dricot and J. Ascenso},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Hybrid Octree-Plane Point Cloud Geometry Coding},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Point clouds provide an efficient way of representing 3D data for applications such as virtual or augmented reality, free-viewpoint video, and gaming. However, this rich, realistic and immersive representation demands a very high amount of data and thus efficient point cloud coding solutions are increasingly needed. A promising way to represent point clouds is through octree partitioning, which provides good compression performance with low complexity, as well as level-of-detail scalability. However, other data representations may offer additional benefits to the octree data structure. The goal of this paper is to propose a point cloud geometry coding solution that leverages the adaptive octree partitioning, and enhances it with a novel coding mode. This mode exploits a plane representation for leaf nodes at different layers of the octree. This corresponds to a hybrid solution where two point cloud representation models are combined. The proposed approach can outperform octree based solutions that are currently considered for standardization and it also provides significant compression gains against a static octree solution (average BD-rate gains of 35% are reported).},\n  keywords = {computational geometry;computer graphics;data compression;octrees;adaptive octree partitioning;coding mode;plane representation;point cloud representation models;octree based solutions;static octree solution;virtual reality;augmented reality;free-viewpoint video;realistic representation;immersive representation;compression performance;data representations;octree data structure;point cloud geometry coding solution;hybrid octree-plane point cloud geometry coding;point cloud coding solutions;level-of-detail scalability;Three-dimensional displays;Encoding;Octrees;Geometry;Two dimensional displays;Surface reconstruction;Decoding;Point Cloud Coding;Octree;Plane},\n  doi = {10.23919/EUSIPCO.2019.8902800},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528058.pdf},\n}\n\n
Hyper-Parameter Selection on Convolutional Dictionary Learning Through Local ℓ0,∞ Norm. Silva, G.; Quesada, J.; and Rodriguez, P. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902801,
  author = {G. Silva and J. Quesada and P. Rodriguez},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Hyper-Parameter Selection on Convolutional Dictionary Learning Through Local ℓ0,∞ Norm},
  year = {2019},
  pages = {1-5},
  abstract = {Convolutional dictionary learning (CDL) is a widely used technique in many applications in the signal/image processing and computer vision fields. While many algorithms have been proposed in order to improve the computational run-time performance during the training process, a thorough analysis regarding the direct relationship between the reconstruction performance and the dictionary features (hyper-parameters), such as the filter size and filter bank's cardinality, has not yet been presented. As arbitrarily configured dictionaries do not necessarily guarantee the best possible results during the test process, a correct selection of the hyper-parameters would be very favorable in the training and testing stages. In this context, this work aims to provide empirical support for the choice of hyper-parameters when learning convolutional dictionaries. We perform a careful analysis of the effect of varying the dictionary's hyper-parameters through a denoising task. Furthermore, we employ a recently proposed local ℓ0,∞ norm as a sparsity measure in order to explore possible correlations between the sparsity induced by the learned filter bank and the reconstruction quality at the test stage.},
  keywords = {channel bank filters;computer vision;convolutional neural nets;feature extraction;image denoising;image reconstruction;learning (artificial intelligence);learned filter bank;hyper-parameter selection;convolutional dictionary learning;computer vision fields;computational run-time performance;reconstruction performance;dictionary features;local ℓ0,∞ norm;denoising task;signal-image processing;Dictionaries;Training;Convolution;Convolutional codes;Noise reduction;Machine learning;Task analysis;Convolutional sparse representation;convolutional dictionary learning;hyper-parameters},
  doi = {10.23919/EUSIPCO.2019.8902801},
  issn = {2076-1465},
  month = {Sep.},
}
Collecting, Analyzing and Predicting Socially-Driven Image Interestingness. Berson, E.; Duong, N. Q. K.; and Demarty, C. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902803,
  author = {E. Berson and N. Q. K. Duong and C. Demarty},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Collecting, Analyzing and Predicting Socially-Driven Image Interestingness},
  year = {2019},
  pages = {1-5},
  abstract = {Interestingness has recently become an emerging concept for visual content assessment. However, understanding and predicting image interestingness remains challenging as its judgment is highly subjective and usually context-dependent. In addition, existing datasets are quite small for in-depth analysis. To push forward research in this topic, a large-scale interestingness dataset (images and their associated metadata) is described in this paper and released for public use. We then propose computational models based on deep learning to predict image interestingness. We show that exploiting relevant contextual information derived from social metadata could greatly improve the prediction results. Finally we discuss some key findings and potential research directions for this emerging topic.},
  keywords = {convolutional neural nets;image processing;learning (artificial intelligence);meta data;social networking (online);visual content assessment;large-scale interestingness dataset;social metadata;image interestingness;deep learning;Flickr;Feature extraction;Metadata;Computational modeling;Visualization;Semantics;Predictive models;Image interestingness;content and social interestingness;Flickr;LaFin dataset;contextual information;deep learning.},
  doi = {10.23919/EUSIPCO.2019.8902803},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533561.pdf},
}
Realtime 2-D DOA Estimation using Phase-Difference Projection (PDP). Chen, H.; Ballal, T.; Liu, X.; and Al-Naffouri, T. Y. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902804,
  author = {H. Chen and T. Ballal and X. Liu and T. Y. Al-Naffouri},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Realtime 2-D DOA Estimation using Phase-Difference Projection (PDP)},
  year = {2019},
  pages = {1-5},
  abstract = {Estimating the direction of arrival (DOA) information of a signal is important for communications, localization and navigation systems. Time-delay based methods are popular DOA algorithms that can estimate DOA with a minimal number of receivers. Time delay can be measured with subsample accuracy using phase-difference based methods. Phase-wrapping represents a major challenge for time delay estimation that occurs when inter-sensor spacing is large. Several methods exist for phase-unwrapping; the most successful are search methods, which are time-consuming and do not lend themselves to theoretical analysis. In this paper, we present a phase-difference projection (PDP) method for DOA estimation which is capable of delivering more accurate results with reduced computational complexity. The proposed method has been tested and compared with several benchmark algorithms in both simulations and experiments. The results show that, at a signal-to-noise ratio (SNR) of -18 dB, using the proposed PDP algorithm, the percentage of DOA estimates with errors smaller than 5° is 54%, and it reaches 100% at SNR = -7 dB. This performance is not matched by the benchmark methods. For the utility test, we implemented this algorithm to realize an ultrasound-based air-mouse and it achieves satisfactory user experiences when using Google Maps or playing interactive games.},
  keywords = {computational complexity;delay estimation;direction-of-arrival estimation;realtime 2-d;DOA estimation;direction of arrival;localization;navigation systems;time-delay;popular DOA algorithms;phase-difference based methods;phase-wrapping;time delay estimation;inter-sensor spacing;phase-unwrapping;search methods;phase-difference projection method;benchmark algorithms;signal-to-noise ratio;PDP algorithm;benchmark methods;noise figure -18.0 dB;noise figure 7.0 dB;Direction-of-arrival estimation;Estimation;Sensor arrays;Receivers;Signal to noise ratio;Noise measurement},
  doi = {10.23919/EUSIPCO.2019.8902804},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534097.pdf},
}
Deep Log-Likelihood Ratio Quantization. Arvinte, M.; Tewfik, A. H.; and Vishwanath, S. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902805,
  author = {M. Arvinte and A. H. Tewfik and S. Vishwanath},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Deep Log-Likelihood Ratio Quantization},
  year = {2019},
  pages = {1-5},
  abstract = {In this work, a deep learning-based method for log-likelihood ratio (LLR) lossy compression and quantization is proposed, with emphasis on a single-input single-output uncorrelated fading communication setting. A deep autoencoder network is trained to compress, quantize and reconstruct the bit log-likelihood ratios corresponding to a single transmitted symbol. Specifically, the encoder maps to a latent space with dimension equal to the number of sufficient statistics required to recover the inputs - equal to three in this case - while the decoder aims to reconstruct a noisy version of the latent representation with the purpose of modeling quantization effects in a differentiable way. Simulation results show that, when applied to a standard rate-1/2 low-density parity-check (LDPC) code, a finite precision compression factor of nearly three times is achieved when storing an entire codeword, with an incurred loss of performance lower than 0.15 dB compared to straightforward scalar quantization of the log-likelihood ratios and the method is competitive with state-of-the-art approaches.},
  keywords = {channel coding;fading channels;learning (artificial intelligence);neural nets;parity check codes;quantisation (signal);signal reconstruction;signal representation;SISO communication;telecommunication computing;finite precision compression factor;scalar quantization;deep log-likelihood ratio quantization;deep learning-based method;single-input single-output uncorrelated fading communication setting;deep autoencoder network;bit log-likelihood ratios;single transmitted symbol;low-density parity-check code;LDPC code;Quantization (signal);Training;Decoding;Signal to noise ratio;Computer architecture;Gaussian noise;Parity check codes},
  doi = {10.23919/EUSIPCO.2019.8902805},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533405.pdf},
}
Multi-resolution Reconstruction Algorithm for Phase Retrieval in X-ray Crystallography. Angarita, J.; Pinilla, S.; Garcia, H.; and Arguello, H. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902808,
  author = {J. Angarita and S. Pinilla and H. Garcia and H. Arguello},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Multi-resolution Reconstruction Algorithm for Phase Retrieval in X-ray Crystallography},
  year = {2019},
  pages = {1-5},
  abstract = {Phase Retrieval (PR) in X-ray Crystallography (XC) is an inverse problem that consists of recovering an image from phaseless data. Recently, it has been shown that an image in XC can be sparsely represented in the Fourier domain. This fact implies that the number of required measurements to retrieve the phase in XC is determined by the sparsity, which is much smaller than the size of the image. However, the computational complexity to retrieve the phase still depends on the image size, implying more time to solve this problem in XC. Therefore, this work proposes a reconstruction algorithm that exploits the sparsity of the image by grouping sets of pixels of its sparse representation, called super-pixels, in order to reduce the total number of unknowns in the inverse problem. The proposed recovery methodology leads to a reduction in time of at least 80% and improves the reconstruction quality by up to 6% in terms of the Structural Similarity Index Measure (SSIM) compared to state-of-the-art counterparts.},
  keywords = {image reconstruction;image representation;image resolution;inverse problems;X-ray crystallography;required measurements;XC;image size;sparse representation;inverse problem;reconstruction quality;multiresolution reconstruction algorithm;X-ray crystallography;phase retrieval;phaseless data;Fourier domain;super-pixels;structural similarity index measure;SSIM;Image reconstruction;Diffraction;Sparse matrices;X-ray diffraction;Apertures;Reconstruction algorithms;Extraterrestrial measurements},
  doi = {10.23919/EUSIPCO.2019.8902808},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532949.pdf},
}
Improving singing voice separation using Deep U-Net and Wave-U-Net with data augmentation. Cohen-Hadria, A.; Roebel, A.; and Peeters, G. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902810,
  author = {A. Cohen-Hadria and A. Roebel and G. Peeters},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Improving singing voice separation using Deep U-Net and Wave-U-Net with data augmentation},
  year = {2019},
  pages = {1-5},
  abstract = {State-of-the-art singing voice separation is based on deep learning making use of CNN structures with skip connections (such as the U-Net, Wave-U-Net, or MSDENSELSTM models). A key to the success of these models is the availability of a large amount of training data. In the following study, we are interested in singing voice separation for mono signals and compare the U-Net and the Wave-U-Net, which are structurally similar but work on different input representations. First, we report a few results on variations of the U-Net model. Second, we discuss the potential of state-of-the-art speech and music transformation algorithms for augmenting existing data sets and demonstrate that the effect of these augmentations depends on the signal representations used by the model. The results demonstrate a considerable improvement due to the augmentation for both models. However, pitch transposition is the most effective augmentation strategy for the U-Net model, while transposition, time stretching, and formant shifting have a much more balanced effect on the Wave-U-Net model. Finally, we compare the two models on the same dataset.},
  keywords = {acoustic signal processing;audio recording;audio signal processing;learning (artificial intelligence);music;signal representation;source separation;speech processing;deep U-Net;data augmentation;singing voice separation;deep learning making use;Wave-U-Net model;training data;speech;music transformation algorithms;augmentation strategy;U;Spectrogram;Training;Decoding;Data models;Biological system modeling;Source separation;Singing voice separation;data augmentation;convolutional neural network},
  doi = {10.23919/EUSIPCO.2019.8902810},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533804.pdf},
}
Training Variational Autoencoders with Discrete Latent Variables Using Importance Sampling. Bartler, A.; Wiewel, F.; Mauch, L.; and Yang, B. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902811,
  author = {A. Bartler and F. Wiewel and L. Mauch and B. Yang},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Training Variational Autoencoders with Discrete Latent Variables Using Importance Sampling},
  year = {2019},
  pages = {1-5},
  abstract = {The Variational Autoencoder (VAE) is a popular generative latent variable model that is often used for representation learning. Standard VAEs assume continuous-valued latent variables and are trained by maximization of the evidence lower bound (ELBO). Conventional methods obtain a differentiable estimate of the ELBO with reparametrized sampling and optimize it with Stochastic Gradient Descent (SGD). However, this does not work for VAEs with discrete-valued latent variables, since reparametrized sampling is not possible. In this paper, we propose an easy method to train VAEs with binary or categorically valued latent representations. To this end, we use a differentiable estimator for the ELBO which is based on importance sampling. In experiments, we verify the approach and train two different VAE architectures with Bernoulli and categorically distributed latent representations on two different benchmark datasets.},
  keywords = {approximation theory;Gaussian processes;gradient methods;image coding;importance sampling;learning (artificial intelligence);maximum likelihood estimation;stochastic processes;ELBO;reparametrized sampling;VAE;discrete-valued latent variables;binary valued latent representations;categorically valued latent representations;differentiable estimator;importance sampling;VAE architectures;categorically distributed latent representations;discrete latent variables;generative latent variable model;representation learning;stochastic gradient descent;variational autoencoder training;Decoding;Training;Monte Carlo methods;Signal processing;Europe;Standards;Stochastic processes;variational autoencoder;discrete latent variables;importance sampling},
  doi = {10.23919/EUSIPCO.2019.8902811},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531485.pdf},
}
Scheduling Data Embedding in Dual Function Radar Networks. Amin, M. G.; Dong, Y.; and Fabrizio, G. A. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902812,
  author = {M. G. Amin and Y. Dong and G. A. Fabrizio},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {Scheduling Data Embedding in Dual Function Radar Networks},
  year = {2019},
  pages = {1-5},
  abstract = {In radar networks, radars may share and broadcast their respective scheduling data. In this respect, communication signals emitted from a radar platform can convey radar signal and beam characteristics. In RF restricted operation environments and towards achieving a unified aperture and bandwidth, it would be desirable to embed such information in radar pulses without having to establish any communications link. In this paper, we consider dual system functionality in radar networks in which one system function enables the other. The focus is on scheduling data that can be reasonably encoded by radar pulses within one radar coherent processing interval (CPI). In order to limit changes in the radar waveform to a minimum, we use up- and down-chirps for information embedding. We consider two different signal embedding strategies in which each radar pulse represents one bit. Information deciphering at the downlink radar receivers is delineated, along with the corresponding probability of bit error assuming a Gaussian channel. The overall channel coding paradigm using radar chirps as parity bits is discussed.},
  keywords = {channel coding;error statistics;Gaussian channels;radar receivers;radar signal processing;telecommunication scheduling;dual system functionality;radar coherent processing interval;information embedding;downlink radar receivers;scheduling data embedding;dual function radar networks;beam characteristics;RF restricted operation environments;radar chirp signal waveform;CPI;communication signal embedding strategies;bit error probability;Gaussian channel;Radar;Chirp;Bandwidth;Receivers;Dictionaries;Channel coding},
  doi = {10.23919/EUSIPCO.2019.8902812},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533982.pdf},
}
State Space Models with Dynamical and Sparse Variances, and Inference by EM Message Passing. Wadehn, F.; Weber, T.; and Loeliger, H.-A. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019.
@InProceedings{8902815,
  author = {F. Wadehn and T. Weber and H. -A. Loeliger},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title = {State Space Models with Dynamical and Sparse Variances, and Inference by EM Message Passing},
  year = {2019},
  pages = {1-5},
  abstract = {Sparse Bayesian learning (SBL) is a probabilistic approach to estimation problems based on representing sparsity-promoting priors by Normals with Unknown Variances. This representation blends well with linear Gaussian state space models (SSMs). However, in classical SBL the unknown variances are a priori independent, which is not suited for modeling group sparse signals, or signals whose variances have structure. To model signals with, e.g., exponentially decaying or piecewise-constant (in particular block-sparse) variances, we propose SSMs with dynamical and sparse variances (SSM-DSV). These are two-layer SSMs, where the bottom layer models physical signals, and the top layer models dynamical variances that are subject to abrupt changes. Inference and learning in these hierarchical models is performed with a message passing version of the expectation maximization (EM) algorithm, which is a special instance of the more general class of variational message passing algorithms. We validated the proposed model and estimation algorithm with two applications, using both simulated and real data. First, we implemented a block-outlier insensitive Kalman smoother by modeling the disturbance process with an SSM-DSV. Second, we used SSM-DSV to model the oculomotor system and employed EM message passing for estimating neural controller signals from eye position data.},
  keywords = {Bayes methods;expectation-maximisation algorithm;Gaussian processes;Kalman filters;learning (artificial intelligence);message passing;state-space methods;representing sparsity-promoting priors;unknown variances;linear Gaussian state space models;classical SBL;group sparse signals;model signals;particular block-sparse;sparse variances;SSM-DSV;two-layer SSMs;bottom layer models;top layer models dynamical;inference;hierarchical models;message passing version;variational message passing algorithms;estimation algorithm;EM-message passing;neural controller signals;EM message passing;Sparse Bayesian learning;probabilistic approach;Message passing;Signal processing algorithms;Biological system modeling;Estimation;Kalman filters;Inference algorithms;Expectation maximization;factor graphs;hierarchical state space models;sparse Bayesian learning.},
  doi = {10.23919/EUSIPCO.2019.8902815},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533265.pdf},
}
\n
\n\n\n
\n        Sparse Bayesian learning (SBL) is a probabilistic approach to estimation problems based on representing sparsity-promoting priors by Normals with Unknown Variances. This representation blends well with linear Gaussian state space models (SSMs). However, in classical SBL the unknown variances are a priori independent, which is not suited for modeling group sparse signals, or signals whose variances have structure. To model signals with, e.g., exponentially decaying or piecewise-constant (in particular block-sparse) variances, we propose SSMs with dynamical and sparse variances (SSM-DSV). These are two-layer SSMs, where the bottom layer models physical signals, and the top layer models dynamical variances that are subject to abrupt changes. Inference and learning in these hierarchical models are performed with a message passing version of the expectation maximization (EM) algorithm, which is a special instance of the more general class of variational message passing algorithms. We validated the proposed model and estimation algorithm with two applications, using both simulated and real data. First, we implemented a block-outlier insensitive Kalman smoother by modeling the disturbance process with an SSM-DSV. Second, we used SSM-DSV to model the oculomotor system and employed EM-message passing for estimating neural controller signals from eye position data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n NLS Algorithm for Kronecker-Structured Linear Systems with a CPD Constrained Solution.\n \n \n \n \n\n\n \n Boussé, M.; Sidiropoulos, N.; and Lathauwer, L. D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"NLSPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902816,\n  author = {M. Boussé and N. Sidiropoulos and L. D. Lathauwer},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {NLS Algorithm for Kronecker-Structured Linear Systems with a CPD Constrained Solution},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In various applications within signal processing, system identification, pattern recognition, and scientific computing, the canonical polyadic decomposition (CPD) of a higher-order tensor is only known via general linear measurements. In this paper, we show that the computation of such a CPD can be reformulated as a sum of CPDs with linearly constrained factor matrices by assuming that the measurement matrix can be approximated by a sum of a (small) number of Kronecker products. By properly exploiting the hypothesized structure, we can derive an efficient non-linear least squares algorithm, allowing us to tackle large-scale problems.},\n  keywords = {least squares approximations;linear systems;matrix decomposition;pattern recognition;signal processing;tensors;NLS algorithm;Kronecker-structured linear systems;CPD constrained solution;signal processing;system identification;pattern recognition;scientific computing;canonical polyadic decomposition;higher-order tensor;linearly constrained factor matrices;measurement matrix;nonlinear least squares algorithm;Tensors;Signal processing algorithms;Jacobian matrices;Signal processing;Linear systems;Computational complexity;Matrix decomposition},\n  doi = {10.23919/EUSIPCO.2019.8902816},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528074.pdf},\n}\n\n
\n
\n\n\n
\n In various applications within signal processing, system identification, pattern recognition, and scientific computing, the canonical polyadic decomposition (CPD) of a higher-order tensor is only known via general linear measurements. In this paper, we show that the computation of such a CPD can be reformulated as a sum of CPDs with linearly constrained factor matrices by assuming that the measurement matrix can be approximated by a sum of a (small) number of Kronecker products. By properly exploiting the hypothesized structure, we can derive an efficient non-linear least squares algorithm, allowing us to tackle large-scale problems.\n
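The key assumption in this abstract — that the measurement matrix is well approximated by a (small) sum of Kronecker products — has a classical rank-one special case: the nearest Kronecker product via Van Loan's rearrangement, where `kron(A, B)` becomes a rank-1 matrix whose best approximation is found by SVD. The function name and shapes below are illustrative, not from the paper.

```python
import numpy as np

def nearest_kronecker(M, shape_a, shape_b):
    """Best single Kronecker term M ~ np.kron(A, B) via rearrangement + SVD."""
    p, q = shape_a
    r, s = shape_b
    # rearrange M so that np.kron(A, B) maps to the rank-1 matrix vec(A) vec(B)^T
    R = M.reshape(p, r, q, s).transpose(0, 2, 1, 3).reshape(p * q, r * s)
    U, S, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(S[0]) * U[:, 0].reshape(p, q)
    B = np.sqrt(S[0]) * Vt[0].reshape(r, s)
    return A, B

rng = np.random.default_rng(0)
A0, B0 = rng.normal(size=(3, 4)), rng.normal(size=(5, 2))
M = np.kron(A0, B0)                      # exactly Kronecker-structured input
A_hat, B_hat = nearest_kronecker(M, (3, 4), (5, 2))
```

For a sum of several Kronecker terms, the leading singular pairs of the rearranged matrix give the terms — the structure the paper's NLS algorithm then exploits.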
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-Exponential Relaxation Times Maps Reconstruction and Unsupervised Classification in Magnitude Magnetic Resonance Imaging.\n \n \n \n \n\n\n \n HAJJ, C. E.; MOUSSAOUI, S.; COLLEWET, G.; and MUSSE, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-ExponentialPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902817,\n  author = {C. E. HAJJ and S. MOUSSAOUI and G. COLLEWET and M. MUSSE},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-Exponential Relaxation Times Maps Reconstruction and Unsupervised Classification in Magnitude Magnetic Resonance Imaging},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In clinical and biological applications of T2 relaxometry, a multi-exponential decay model proved to be representative of the relaxation signal inside each voxel of the MRI images. However, estimating and exploiting the model parameters for magnitude data is a large-scale ill-posed inverse problem. This paper presents a parameter estimation method that combines a spatial regularization with a Maximum-Likelihood criterion based on the Rician distribution of the noise. In order to properly carry out the estimation on the image level, a Majorization-Minimization approach is implemented alongside an adapted non-linear least-squares algorithm. We propose a method for exploiting the reconstructed maps by clustering the parameters using a K-means classification algorithm applied to the extracted relaxation time and amplitude maps. 
The method is illustrated on real MRI data of food sample analysis.},\n  keywords = {biomedical MRI;image classification;image reconstruction;inverse problems;maximum likelihood estimation;medical image processing;parameter estimation;unsupervised classification;clinical applications;biological applications;multi-exponential decay model;MRI images;parameter estimation method;spatial regularization;maximum-likelihood criterion;Rician distribution;image level;K-means classification algorithm;MRI data;relaxation time map extraction;relaxation amplitude map extraction;Majorization-minimization approach;large-scale ill-posed inverse problem;multiexponential relaxation times maps reconstruction;magnitude magnetic resonance imaging;non-linear least-squares algorithm;food sample analysis;Signal processing algorithms;Magnetic resonance imaging;Rician channels;Signal to noise ratio;Maximum likelihood estimation;Clustering algorithms;Europe;MRI;multi-exponential model;Maximum-Likelihood;Majorization-Minimization;K-means},\n  doi = {10.23919/EUSIPCO.2019.8902817},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533668.pdf},\n}\n\n
\n
\n\n\n
\n        In clinical and biological applications of T2 relaxometry, a multi-exponential decay model proved to be representative of the relaxation signal inside each voxel of the MRI images. However, estimating and exploiting the model parameters for magnitude data is a large-scale ill-posed inverse problem. This paper presents a parameter estimation method that combines a spatial regularization with a Maximum-Likelihood criterion based on the Rician distribution of the noise. In order to properly carry out the estimation on the image level, a Majorization-Minimization approach is implemented alongside an adapted non-linear least-squares algorithm. We propose a method for exploiting the reconstructed maps by clustering the parameters using a K-means classification algorithm applied to the extracted relaxation time and amplitude maps. The method is illustrated on real MRI data of food sample analysis.\n
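The clustering step described in this abstract — K-means applied to per-voxel relaxation time and amplitude maps — can be sketched generically. This is not the authors' implementation; the two synthetic "tissue" clusters and their parameter values are invented stand-ins for extracted (T2, amplitude) features.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # farthest-point initialisation: deterministic and well spread
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each voxel's feature vector to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
        # move each center to the mean of its assigned voxels
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels, centers

# stand-in for per-voxel (relaxation time, amplitude) features of two tissues
rng = np.random.default_rng(1)
tissue_a = rng.normal([40.0, 0.3], [2.0, 0.05], size=(100, 2))
tissue_b = rng.normal([120.0, 0.8], [2.0, 0.05], size=(100, 2))
X = np.vstack([tissue_a, tissue_b])
labels, centers = kmeans(X, k=2)
```

In practice the features would be scaled before clustering, since relaxation times and amplitudes live on very different ranges.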
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reducing the Bias in DRSS-Based Localization: An Instrumental Variable Approach.\n \n \n \n \n\n\n \n Li, J.; Doğançay, K.; Nguyen, N. H.; and Law, Y. W.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ReducingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902818,\n  author = {J. Li and K. Doğançay and N. H. Nguyen and Y. W. Law},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Reducing the Bias in DRSS-Based Localization: An Instrumental Variable Approach},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper proposes a closed-form solution with reduced bias for differential received signal strength (DRSS) localization. During the linearization of DRSS measurement equations, the measurement noise is injected into the measurement data matrix, resulting in a correlation between the measurement noise and measurement data matrix. Existing closed-form solutions do not consider this correlation, which causes biased estimation results. The solution proposed here aims to eliminate the bias by introducing instrumental variables (IV), whose role is to mitigate the correlation arising from linearization. Simulation results demonstrate the improved performance of the IV-based estimator over some existing closed-form solutions, in the form of root-mean-squared errors that are close to the Cramér-Rao lower bound, and significantly reduced bias, over a wide range of noise levels.},\n  keywords = {correlation methods;least squares approximations;mean square error methods;radio receivers;instrument variables;linearization;IV-based estimator;closed-form solutions;reduced bias;noise levels;DRSS-based localization;instrumental variable approach;closed-form solution;differential received signal strength localization;DRSS measurement equations;measurement noise;measurement data matrix;Sensors;Noise measurement;Maximum likelihood estimation;Closed-form solutions;Correlation;Wireless sensor networks;Differential received signal strength;localization;instrumental variable;best linear unbiased estimator},\n  doi = {10.23919/EUSIPCO.2019.8902818},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = 
{https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528652.pdf},\n}\n\n
\n
\n\n\n
\n        This paper proposes a closed-form solution with reduced bias for differential received signal strength (DRSS) localization. During the linearization of DRSS measurement equations, the measurement noise is injected into the measurement data matrix, resulting in a correlation between the measurement noise and measurement data matrix. Existing closed-form solutions do not consider this correlation, which causes biased estimation results. The solution proposed here aims to eliminate the bias by introducing instrumental variables (IV), whose role is to mitigate the correlation arising from linearization. Simulation results demonstrate the improved performance of the IV-based estimator over some existing closed-form solutions, in the form of root-mean-squared errors that are close to the Cramér-Rao lower bound, and significantly reduced bias, over a wide range of noise levels.\n
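The bias mechanism this abstract describes — noise leaking into the data matrix, correlating it with the error term — and the instrumental-variable remedy can be shown on a toy errors-in-variables problem. This is a textbook IV illustration, not the paper's DRSS estimator; the instrument here is simply a second noisy copy of the regressors with independent noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20000, 2
x_true = np.array([1.0, -2.0])
A0 = rng.normal(size=(n, p))             # noise-free regressors
y = A0 @ x_true                          # exact measurements
E1, E2 = rng.normal(0, 0.5, (2, n, p))   # two independent regressor-noise draws
A = A0 + E1                              # noisy data matrix used by least squares
G = A0 + E2                              # instrument: correlated with A0, not with E1

# ordinary LS is biased because A is correlated with its own noise
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
# IV estimator: solve (G^T A) x = G^T y; the cross-noise terms average out
x_iv = np.linalg.solve(G.T @ A, G.T @ y)
```

Here `x_ls` is attenuated toward zero (roughly by 1/1.25 for this noise level) while `x_iv` is consistent, mirroring the bias reduction the paper reports.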
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Data Augmentation Using Generative Adversarial Network for Environmental Sound Classification.\n \n \n \n \n\n\n \n Madhu, A.; and Kumaraswamy, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DataPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902819,\n  author = {A. Madhu and S. Kumaraswamy},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Data Augmentation Using Generative Adversarial Network for Environmental Sound Classification},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Various types of deep learning architecture have been steadily gaining impetus for automatic environmental sound classification. However, the relative paucity of publicly accessible datasets hinders any further improvement in this direction. This work has two principal contributions. First, we put forward a deep learning framework employing a convolutional neural network for automatic environmental sound classification. Second, we investigate the possibility of generating synthetic data using data augmentation. We suggest a novel technique for audio data augmentation using a generative adversarial network (GAN). The proposed model along with data augmentation is assessed on the UrbanSound8K dataset. The results authenticate that the suggested method surpasses state-of-the-art methods for data augmentation.},\n  keywords = {acoustic signal processing;convolutional neural nets;environmental science computing;learning (artificial intelligence);signal classification;audio data augmentation;generative adversarial network;deep learning architecture;automatic environmental sound classification;publicly accessible dataset;convolutional neural network;synthetic data;Convolution;Generative adversarial networks;Training;Deep learning;Gallium nitride;Spectrogram;Dynamic range;data augmentation;generative adversarial network;deep learning;environmental sound classification},\n  doi = {10.23919/EUSIPCO.2019.8902819},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570520684.pdf},\n}\n\n
\n
\n\n\n
\n        Various types of deep learning architecture have been steadily gaining impetus for automatic environmental sound classification. However, the relative paucity of publicly accessible datasets hinders any further improvement in this direction. This work has two principal contributions. First, we put forward a deep learning framework employing a convolutional neural network for automatic environmental sound classification. Second, we investigate the possibility of generating synthetic data using data augmentation. We suggest a novel technique for audio data augmentation using a generative adversarial network (GAN). The proposed model along with data augmentation is assessed on the UrbanSound8K dataset. The results authenticate that the suggested method surpasses state-of-the-art methods for data augmentation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Permutation Alignment Based on MUSIC Spectrum Discrepancy for Blind Source Separation.\n \n \n \n \n\n\n \n Tachioka, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"PermutationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902820,\n  author = {Y. Tachioka},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Permutation Alignment Based on MUSIC Spectrum Discrepancy for Blind Source Separation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Conventional time-frequency-domain blind source separation (BSS) requires permutation alignment of the sound sources. Permutation alignment methods can be classified into two types: those that use direction of arrival (DOA) constraints and those that model the sound source characteristics instead of DOA constraints. Multi-channel non-negative matrix factorization (MNMF), which is based on the second type, is one of the most effective BSS methods. However, our experiments revealed that its permutation alignment sometimes fails due to the lack of a DOA constraint. We present a permutation alignment method based on the DOAs directly obtained from a spatial correlation matrix by using multiple signal classification (MUSIC) and that solves the permutation problems by minimizing the discrepancy of the MUSIC spectra, which belong to the same source, in the middle of the BSS algorithm. Our proposed method boosts the second type with the help of the DOA constraint and can be applied in a blind manner to both the mixing system approach, e.g., MNMF, and the demixing system approach, e.g., independent low-rank matrix analysis. 
Experiments showed that the proposed method is effective for both approaches.},\n  keywords = {blind source separation;direction-of-arrival estimation;matrix decomposition;signal classification;time-frequency analysis;MUSIC spectrum discrepancy;sound sources;permutation alignment method;arrival constraints;sound source characteristics;DOA constraint;multichannel nonnegative matrix factorization;BSS methods;permutation problems;time-frequency-domain blind source separation;Multiple signal classification;Direction-of-arrival estimation;Correlation;Signal processing algorithms;Microphones;Blind source separation;blind source separation;permutation alignment;direction of arrival estimation;multiple signal classification},\n  doi = {10.23919/EUSIPCO.2019.8902820},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570526974.pdf},\n}\n\n
\n
\n\n\n
\n        Conventional time-frequency-domain blind source separation (BSS) requires permutation alignment of the sound sources. Permutation alignment methods can be classified into two types: those that use direction of arrival (DOA) constraints and those that model the sound source characteristics instead of DOA constraints. Multi-channel non-negative matrix factorization (MNMF), which is based on the second type, is one of the most effective BSS methods. However, our experiments revealed that its permutation alignment sometimes fails due to the lack of a DOA constraint. We present a permutation alignment method based on the DOAs directly obtained from a spatial correlation matrix by using multiple signal classification (MUSIC) and that solves the permutation problems by minimizing the discrepancy of the MUSIC spectra, which belong to the same source, in the middle of the BSS algorithm. Our proposed method boosts the second type with the help of the DOA constraint and can be applied in a blind manner to both the mixing system approach, e.g., MNMF, and the demixing system approach, e.g., independent low-rank matrix analysis. Experiments showed that the proposed method is effective for both approaches.\n
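The MUSIC spectrum that this abstract builds its permutation criterion on is computed from the noise subspace of a spatial correlation matrix. Below is a standard single-source MUSIC sketch for a uniform linear array with half-wavelength spacing — a generic illustration, not the paper's algorithm; the array size, source angle, and SNR are invented.

```python
import numpy as np

def music_spectrum(R, n_src, angles, d=0.5):
    """MUSIC pseudospectrum for a ULA from a spatial correlation matrix R."""
    m = R.shape[0]
    w, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = V[:, : m - n_src]                # noise subspace (smallest eigenvalues)
    spec = []
    for th in angles:
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(np.radians(th)))
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

# simulate one source at 20 degrees on an 8-element ULA
rng = np.random.default_rng(0)
m, snaps, theta = 8, 500, 20.0
a = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(theta)))
s = rng.normal(size=snaps) + 1j * rng.normal(size=snaps)
noise = 0.1 * (rng.normal(size=(m, snaps)) + 1j * rng.normal(size=(m, snaps)))
X = np.outer(a, s) + noise
R = X @ X.conj().T / snaps                # sample spatial correlation matrix

angles = np.arange(-90, 90.5, 0.5)
spec = music_spectrum(R, 1, angles)
est = angles[np.argmax(spec)]
```

The pseudospectrum peaks near the true DOA; comparing such spectra across frequency bins is the discrepancy measure the paper exploits for permutation alignment.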
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Experimental Evaluation of the Reconfigurable Photodetector for Blind Interference Alignment in Visible Light Communications.\n \n \n \n \n\n\n \n Morales-Céspedes, M.; Quidan, A. A.; and Armada, A. G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ExperimentalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902821,\n  author = {M. Morales-Céspedes and A. A. Quidan and A. G. Armada},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Experimental Evaluation of the Reconfigurable Photodetector for Blind Interference Alignment in Visible Light Communications},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Visible light communications (VLC) have been proposed as an alternative to radio-frequency (RF) communications because of the huge unlicensed bandwidth available and a low-cost implementation. In this context, one of the most interesting applications of VLC is their use in vehicular communications using the headlights of trucks, cars or motorbikes. However, vehicular communications are subject to blocking from other elements of the network or weather effects such as rain or fog. To solve these issues we propose the use of multi-photodiode receivers following an angle diversity distribution. In particular, in this work we focus on a configuration that considers a single signal processing chain connected to the set of photodiodes through a selector referred to as reconfigurable photodetector. First, we obtain the experimental evaluation of the optical channel considering high-power optical transmitters and several angle diversity configurations. 
After that, an offline evaluation of the blind interference alignment (BIA) scheme is carried out in comparison with a diversity scheme such as maximum ratio combining (MRC).},\n  keywords = {blind source separation;diversity reception;free-space optical communication;interference (signal);optical communication equipment;optical information processing;photodetectors;vehicular ad hoc networks;maximum ratio combining;high-power optical transmitters;signal processing chain;angle diversity distribution;multiple photodiode receivers;vehicular communications;visible light communications;blind interference alignment;reconfigurable photodetector;Optical transmitters;Photodiodes;Nonlinear optics;Interference;Receivers;Visible light communication},\n  doi = {10.23919/EUSIPCO.2019.8902821},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533112.pdf},\n}\n\n
\n
\n\n\n
\n        Visible light communications (VLC) have been proposed as an alternative to radio-frequency (RF) communications because of the huge unlicensed bandwidth available and a low-cost implementation. In this context, one of the most interesting applications of VLC is their use in vehicular communications using the headlights of trucks, cars or motorbikes. However, vehicular communications are subject to blocking from other elements of the network or weather effects such as rain or fog. To solve these issues we propose the use of multi-photodiode receivers following an angle diversity distribution. In particular, in this work we focus on a configuration that considers a single signal processing chain connected to the set of photodiodes through a selector referred to as reconfigurable photodetector. First, we obtain the experimental evaluation of the optical channel considering high-power optical transmitters and several angle diversity configurations. After that, an offline evaluation of the blind interference alignment (BIA) scheme is carried out in comparison with a diversity scheme such as maximum ratio combining (MRC).\n
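The maximum ratio combining baseline mentioned at the end of this abstract weights each receive branch by its channel gain before summing. A minimal real-valued sketch, assuming four photodiode branches with known gains and Gaussian noise (all values invented for illustration; the paper's optical channel is measured, not simulated):

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.4, 0.6, 0.8, 1.0])            # assumed per-photodiode channel gains
N = 10000
s = rng.choice([-1.0, 1.0], size=N)           # transmitted symbols
y = h[:, None] * s + 0.5 * rng.normal(size=(len(h), N))   # branch observations

# MRC: weight each branch by its gain, then normalise by the total gain power
s_mrc = (h[:, None] * y).sum(axis=0) / (h ** 2).sum()

best = np.argmax(h)
mse_single = np.mean((y[best] / h[best] - s) ** 2)
mse_mrc = np.mean((s_mrc - s) ** 2)
```

MRC achieves a post-combining SNR equal to the sum of the branch SNRs, so it always beats the best single photodiode.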
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Ship Classification from Multi-Spectral Satellite Imaging by Convolutional Neural Networks.\n \n \n \n \n\n\n \n Grasso, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ShipPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902822,\n  author = {R. Grasso},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Ship Classification from Multi-Spectral Satellite Imaging by Convolutional Neural Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This work investigates the use of convolutional neural networks for classifying ship targets from images acquired by the Multi-Spectral Instrument sensor on board Sentinel-2 satellites. An automatic procedure, requiring a minimum amount of supervision, is applied to extract labeled target images which are used for training. The data set consists of top-of-atmosphere reflectance images in three visible channels and one near-infrared band. The performance of the classifier is evaluated by the receiver operating characteristic curve and the area under the curve statistics. The results show good classification performance with area under the curve greater than 0.95. Future work will be focused on investigating the impact of image atmospheric corrections and on comparing with other methods.},\n  keywords = {convolutional neural nets;geophysical equipment;geophysical image processing;image classification;remote sensing;ships;MultiSpectral satellite imaging;convolutional neural networks;MultiSpectral Instrument sensor;board Sentinel-2 satellites;automatic procedure;labeled target images;classifier;image atmospheric corrections;ship classification;Marine vehicles;Artificial intelligence;Training;Feature extraction;Data mining;Satellite broadcasting;Navigation;Machine Learning;Ship classification;Satellite imaging},\n  doi = {10.23919/EUSIPCO.2019.8902822},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532157.pdf},\n}\n\n
\n
\n\n\n
\n        This work investigates the use of convolutional neural networks for classifying ship targets from images acquired by the Multi-Spectral Instrument sensor on board Sentinel-2 satellites. An automatic procedure, requiring a minimum amount of supervision, is applied to extract labeled target images which are used for training. The data set consists of top-of-atmosphere reflectance images in three visible channels and one near-infrared band. The performance of the classifier is evaluated by the receiver operating characteristic curve and the area under the curve statistics. The results show good classification performance with area under the curve greater than 0.95. Future work will be focused on investigating the impact of image atmospheric corrections and on comparing with other methods.\n
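The area-under-the-ROC-curve metric used in this abstract has a compact rank-based form: AUC equals the probability that a random positive scores above a random negative (the Mann-Whitney identity). A generic sketch, unrelated to the paper's classifier; the toy labels and scores are invented:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) identity; assumes no tied scores."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # sum of positive ranks, minus its minimum possible value, over all pairs
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.2, 0.8])   # one of the four pos/neg pairs misordered
auc = roc_auc(labels, scores)
```

With one misordered pair out of four, this toy case yields AUC = 0.75; the paper's reported AUC above 0.95 corresponds to almost all positive/negative pairs being correctly ordered.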
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Learning Causal Networks Topology From Streaming Graph Signals.\n \n \n \n \n\n\n \n Moscu, M.; Nassif, R.; Hua, F.; and Richard, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"LearningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902826,\n  author = {M. Moscu and R. Nassif and F. Hua and C. Richard},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Learning Causal Networks Topology From Streaming Graph Signals},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Modern data analysis and processing tasks usually involve large sets of data structured by a graph. Typical examples include brain activity supported by neurons, data shared by users of social media, and traffic on transportation or energy networks. There are often settings where the graph is not readily available, and has to be estimated from data. This paper focuses on estimating a network structure capturing the dependencies among streaming graph signals in the form of a possibly directed, weighted adjacency matrix. Several works proposed centralized offline solutions to address this problem, without paying much attention to the distributed nature of networks. We start from a centralized setting and show how, by introducing a simple yet powerful data model, we can infer a graph structure from streaming data with a distributed online learning algorithm. 
Our algorithm is tested experimentally to illustrate its usefulness, and successfully compared to a centralized offline solution from the literature.},\n  keywords = {brain;data analysis;data structures;graph theory;learning (artificial intelligence);network structure;streaming graph signals;weighted adjacency matrix;centralized offline solution;centralized setting;graph structure;learning algorithm;causal networks topology;brain activity;social media;transportation;powerful data model;directed adjacency matrix;Topology;Signal processing;Network topology;Artificial neural networks;Signal processing algorithms;Europe;Data models;Network topology;graph signal processing;distributed learning;online learning},\n  doi = {10.23919/EUSIPCO.2019.8902826},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529919.pdf},\n}\n\n
\n
\n\n\n
\n        Modern data analysis and processing tasks usually involve large sets of data structured by a graph. Typical examples include brain activity supported by neurons, data shared by users of social media, and traffic on transportation or energy networks. There are often settings where the graph is not readily available, and has to be estimated from data. This paper focuses on estimating a network structure capturing the dependencies among streaming graph signals in the form of a possibly directed, weighted adjacency matrix. Several works proposed centralized offline solutions to address this problem, without paying much attention to the distributed nature of networks. We start from a centralized setting and show how, by introducing a simple yet powerful data model, we can infer a graph structure from streaming data with a distributed online learning algorithm. Our algorithm is tested experimentally to illustrate its usefulness, and successfully compared to a centralized offline solution from the literature.\n
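One way to make "infer a directed, weighted adjacency matrix from streaming graph signals" concrete is an online LMS update under a simple linear data model y(t) = A x(t) + noise. This sketch is a generic stand-in, not the paper's algorithm; the graph size, weights, and step size are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, mu = 5, 4000, 0.01
A_true = np.zeros((n, n))                    # sparse, directed, weighted adjacency
A_true[0, 1], A_true[1, 2], A_true[3, 0], A_true[2, 4] = 0.8, -0.5, 0.6, 0.9

A_hat = np.zeros((n, n))
for _ in range(T):
    x = rng.normal(size=n)                   # current streaming graph signal
    y = A_true @ x + 0.01 * rng.normal(size=n)   # observed response
    # one stochastic-gradient (LMS) step on every row of the adjacency at once
    A_hat += mu * np.outer(y - A_hat @ x, x)
```

Each row update only needs that node's own prediction error and the incoming signal, which is what makes the per-node, distributed implementation in the paper natural.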
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Model-based Optimization of a Low-dimensional Modulation Filter Bank for DRR and T60 Estimation.\n \n \n \n \n\n\n \n Ağcaer, S.; and Martin, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Model-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902827,\n  author = {S. Ağcaer and R. Martin},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Model-based Optimization of a Low-dimensional Modulation Filter Bank for DRR and T60 Estimation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Amplitude Modulation Spectrum (AMS) features can be implemented as a cascade of two filter banks whereas the filter bandwidths can be optimized for a particular application. In this work we train AMS-based features using a combination of a model-based optimization (MBO) approach and feature selection for full-band DRR and full-band T60 estimation. MBO replaces the computationally complex data-based cost function with a less complex surrogate model and thus reduces the time needed for training. We evaluate our approach on the publicly available ACE challenge corpus and achieve with only five features the best RMSE in the DRR estimation task using the single microphone configuration and upper mid-range performance for T60 estimation. 
The computational complexity of our algorithm is much lower than all other submitted algorithms.},\n  keywords = {amplitude modulation;channel bank filters;computational complexity;microphones;optimisation;filter bandwidths;AMS-based features;model-based optimization approach;MBO;feature selection;computational complex data-based cost function;complex surrogate model;DRR estimation task;amplitude modulation spectrum features;low-dimensional modulation filter bank;ACE challenge corpus;MBO approach;full-band T60 estimation;single microphone configuration;upper mid-range performance;Feature extraction;Estimation;Acoustics;Optimization;Bandwidth;Training;Task analysis;DRR estimation;T60 estimation;amplitude modulation spectrum;model-based optimization},\n  doi = {10.23919/EUSIPCO.2019.8902827},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533798.pdf},\n}\n\n
\n
\n\n\n
\n Amplitude Modulation Spectrum (AMS) features can be implemented as a cascade of two filter banks whose filter bandwidths can be optimized for a particular application. In this work we train AMS-based features using a combination of a model-based optimization (MBO) approach and feature selection for full-band DRR and full-band T60 estimation. MBO replaces the computationally complex data-based cost function with a less complex surrogate model and thus reduces the time needed for training. We evaluate our approach on the publicly available ACE challenge corpus and achieve, with only five features, the best RMSE in the DRR estimation task using the single-microphone configuration and upper mid-range performance for T60 estimation. The computational complexity of our algorithm is much lower than that of all other submitted algorithms.\n
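The two-stage cascade behind AMS features (an acoustic filter bank followed by a modulation filter bank on the band envelopes) can be sketched as follows. This uses idealized FFT-masking filters and a single acoustic band for illustration — not the paper's optimized low-dimensional filter bank.

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs
# Test signal: a 1 kHz tone amplitude-modulated at 8 Hz
x = (1 + 0.8 * np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 1000 * t)

def bandpass(sig, lo, hi, fs):
    """Ideal band-pass filter via FFT masking (stand-in for one filter-bank band)."""
    S = np.fft.rfft(sig)
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    S[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(S, len(sig))

def envelope(sig):
    """Magnitude of the analytic signal (FFT-based Hilbert transform)."""
    S = np.fft.fft(sig)
    h = np.zeros(len(sig))
    h[0] = 1.0
    h[1:len(sig) // 2] = 2.0
    h[len(sig) // 2] = 1.0
    return np.abs(np.fft.ifft(S * h))

# Stage 1: acoustic band around the carrier, then envelope extraction
env = envelope(bandpass(x, 800, 1200, fs))

# Stage 2: modulation-frequency band energies of the envelope
E = np.abs(np.fft.rfft(env - env.mean())) ** 2
f = np.fft.rfftfreq(len(env), 1 / fs)
e_low = E[(f >= 4) & (f <= 16)].sum()    # contains the 8 Hz modulation
e_high = E[(f >= 32) & (f <= 64)].sum()  # essentially empty
print(e_low > 100 * e_high)  # True
```

The energy lands in the 4–16 Hz modulation band, matching the 8 Hz modulator of the test signal.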
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Adaptive Video Acquisition Scheme for Object Tracking.\n \n \n \n \n\n\n \n Banerjee, S.; Serra, J. G.; Chopp, H. H.; Cossairt, O.; and Katsaggelos, A. K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902829,\n  author = {S. Banerjee and J. G. Serra and H. H. Chopp and O. Cossairt and A. K. Katsaggelos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {An Adaptive Video Acquisition Scheme for Object Tracking},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose an adaptive host-chip system for video acquisition constrained under a given bit rate to optimize object tracking performance. The chip is an imaging instrument with limited computational power consisting of a very high-resolution focal plane array (FPA) that transmits quadtree (QT)-segmented video frames to the host. The host has unlimited computational power for video analysis. We find the optimal QT decomposition to minimize a weighted rate distortion equation using the Viterbi algorithm. The weights are user-defined based on the class of objects to track. Faster R-CNN and a Kalman filter are used to detect and track the objects of interest respectively. 
We evaluate our architecture's performance based on the Multiple Object Tracking Accuracy (MOTA).},\n  keywords = {convolutional neural nets;focal planes;image filtering;image resolution;image segmentation;Kalman filters;object detection;object tracking;optimisation;quadtrees;rate distortion theory;recurrent neural nets;video signal processing;adaptive video acquisition scheme;adaptive host-chip system;imaging instrument;unlimited computational power;video analysis;optimal QT decomposition;Multiple Object Tracking Accuracy;object tracking performance optimization;high-resolution focal plane array;quadtree-segmented video frames;weighted rate distortion equation minimization;faster R-CNN;Kalman filter;object detection;MOTA;architecture performance;Viterbi algorithm;Distortion;Object tracking;Viterbi algorithm;Bandwidth;Image reconstruction;Detectors;host-chip architecture;Viterbi algorithm;optimal bit allocation;rate distortion;object tracking},\n  doi = {10.23919/EUSIPCO.2019.8902829},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534035.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose an adaptive host-chip system for video acquisition constrained under a given bit rate to optimize object tracking performance. The chip is an imaging instrument with limited computational power consisting of a very high-resolution focal plane array (FPA) that transmits quadtree (QT)-segmented video frames to the host. The host has unlimited computational power for video analysis. We find the optimal QT decomposition to minimize a weighted rate distortion equation using the Viterbi algorithm. The weights are user-defined based on the class of objects to track. Faster R-CNN and a Kalman filter are used to detect and track the objects of interest respectively. We evaluate our architecture's performance based on the Multiple Object Tracking Accuracy (MOTA).\n
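The optimal quadtree decomposition under a weighted rate–distortion cost can be illustrated with a simple bottom-up Lagrangian pruning over the tree (the paper formulates this with the Viterbi algorithm; the dynamic program below is a generic stand-in with assumed unit rates and a mean-value block code).

```python
import numpy as np

def qt_cost(img, lam):
    """Optimal quadtree segmentation of a square power-of-two image by
    bottom-up pruning: at each block keep min(leaf cost, split cost).
    Leaf cost = SSE of the mean approximation + lam * 1 bit; a split
    costs lam * 1 bit plus the optimal costs of the four children."""
    d = np.sum((img - img.mean()) ** 2)   # distortion if coded as one flat block
    leaf = d + lam
    n = img.shape[0]
    if n == 1:
        return leaf, 1
    h = n // 2
    split, leaves = lam, 0
    for sub in (img[:h, :h], img[:h, h:], img[h:, :h], img[h:, h:]):
        c, l = qt_cost(sub, lam)
        split += c
        leaves += l
    return (leaf, 1) if leaf <= split else (split, leaves)

# Piecewise-constant 8x8 test image: left half 0, right half 1
img = np.zeros((8, 8))
img[:, 4:] = 1.0
cost, leaves = qt_cost(img, lam=0.5)
print(cost, leaves)  # 2.5 4
```

Splitting once yields four constant quadrants, so the optimum stops there with four leaves.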
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Harmonic Networks with Limited Training Samples.\n \n \n \n \n\n\n \n Ulicny, M.; Krylov, V. A.; and Dahyot, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"HarmonicPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902831,\n  author = {M. Ulicny and V. A. Krylov and R. Dahyot},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Harmonic Networks with Limited Training Samples},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Convolutional neural networks (CNNs) are very popular nowadays for image processing. CNNs allow one to learn optimal filters in a (mostly) supervised machine learning context. However this typically requires abundant labelled training data to estimate the filter parameters. Alternative strategies have been deployed for reducing the number of parameters and / or filters to be learned and thus decrease overfitting. In the context of reverting to preset filters, we propose here a computationally efficient harmonic block that uses Discrete Cosine Transform (DCT) filters in CNNs. In this work we examine the performance of harmonic networks in limited training data scenario. We validate experimentally that its performance compares well against scattering networks that use wavelets as preset filters.},\n  keywords = {convolutional neural nets;discrete cosine transforms;filtering theory;image processing;supervised learning;convolutional neural networks;CNNs;image processing;optimal filters;supervised machine learning;harmonic networks;scattering networks;discrete cosine transform filters;Discrete cosine transforms;Harmonic analysis;Convolution;Signal processing algorithms;Wavelet transforms;Europe;Lapped Discrete Cosine Transform;harmonic network;convolutional filter;limited data},\n  doi = {10.23919/EUSIPCO.2019.8902831},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533913.pdf},\n}\n\n
\n
\n\n\n
\n Convolutional neural networks (CNNs) are very popular nowadays for image processing. CNNs allow one to learn optimal filters in a (mostly) supervised machine learning context. However, this typically requires abundant labelled training data to estimate the filter parameters. Alternative strategies have been deployed for reducing the number of parameters and/or filters to be learned, thereby decreasing overfitting. In the context of reverting to preset filters, we propose here a computationally efficient harmonic block that uses Discrete Cosine Transform (DCT) filters in CNNs. In this work we examine the performance of harmonic networks in a limited-training-data scenario. We validate experimentally that their performance compares well against scattering networks that use wavelets as preset filters.\n
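The preset-filter idea can be sketched as follows: build the fixed 2-D DCT-II basis filters, verify they form an orthonormal set, and mix their responses with a (learnable) 1×1 combination. The filter size, the random mixing weights, and the correlation-style sliding-window application are assumptions for the demo, not the authors' exact block.

```python
import numpy as np

def dct_filters(k=3):
    """k*k fixed 2-D DCT-II basis filters, used in place of learned kernels."""
    c = np.array([[np.cos(np.pi * (x + 0.5) * u / k) for x in range(k)]
                  for u in range(k)])
    c[0] /= np.sqrt(2)
    c *= np.sqrt(2.0 / k)                  # orthonormal 1-D DCT-II basis
    return np.stack([np.outer(c[u], c[v]) for u in range(k) for v in range(k)])

filters = dct_filters(3)                   # shape (9, 3, 3)
F = filters.reshape(9, -1)
print(np.allclose(F @ F.T, np.eye(9)))     # True: the filters are orthonormal

# A harmonic-block style forward pass: fixed DCT filtering of an image,
# then a learnable 1x1 mixing of the 9 responses (here random weights).
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
win = np.lib.stride_tricks.sliding_window_view(img, (3, 3))  # (6, 6, 3, 3)
responses = np.einsum("fij,xyij->fxy", filters, win)         # (9, 6, 6)
w = rng.standard_normal(9)
out = np.tensordot(w, responses, axes=1)
print(out.shape)  # (6, 6)
```

Only the mixing weights would be trained; the DCT filters stay fixed, which is what reduces the number of learned parameters.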
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Deep Learning Models for Denoising ECG Signals.\n \n \n \n \n\n\n \n Arsene, C. T. C.; Hankins, R.; and Yin, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902833,\n  author = {C. T. C. Arsene and R. Hankins and H. Yin},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep Learning Models for Denoising ECG Signals},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Effective and powerful methods for denoising electrocardiogram (ECG) signals are important for wearable sensors and devices. Deep Learning (DL) models have been used extensively in image processing and other domains with great successes but only very recently they have been used in processing ECG signals. This paper presents two DL models, together with a standard wavelet-based technique for denoising ECG signals. First, a Convolutional Neural Network (CNN) is depicted and applied to noisy ECG signals. It includes six convolutional layers, with subsequent pooling and a fully connected layer for regression. The second DL model is a Long Short-Term Memory (LSTM) model, consisting of two LSTM layers. A wavelet technique based on an empirical Bayesian method with a Cauchy prior is also applied for comparison with the DL models, which are trained and tested on two synthetic datasets and a dataset containing real ECG signals. 
The results demonstrate that while both DL models were capable of dealing with heavy and drifting noise, the CNN model was markedly superior to the LSTM model in terms of the Root Mean Squared (RMS) error, and the wavelet technique was suitable only for rejecting random noise.},\n  keywords = {Bayes methods;convolutional neural nets;electrocardiography;learning (artificial intelligence);medical signal processing;signal denoising;wavelet transforms;ECG signals denoising;convolutional neural network;DL model;long short-term memory model;CNN model;LSTM model;electrocardiogram signals;image processing;deep learning models;Electrocardiography;Convolution;Noise reduction;Testing;Wavelet transforms;Deep learning;Training;ECG signals;Deep Learning models;Convolutional Neural Networks;Long Short-Term Memory;Filtering;Denoising;Wavelets},\n  doi = {10.23919/EUSIPCO.2019.8902833},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570527102.pdf},\n}\n\n
\n
\n\n\n
\n Effective and powerful methods for denoising electrocardiogram (ECG) signals are important for wearable sensors and devices. Deep Learning (DL) models have been used extensively in image processing and other domains with great success, but only very recently have they been used in processing ECG signals. This paper presents two DL models, together with a standard wavelet-based technique, for denoising ECG signals. First, a Convolutional Neural Network (CNN) is described and applied to noisy ECG signals. It includes six convolutional layers, with subsequent pooling and a fully connected layer for regression. The second DL model is a Long Short-Term Memory (LSTM) model, consisting of two LSTM layers. A wavelet technique based on an empirical Bayesian method with a Cauchy prior is also applied for comparison with the DL models, which are trained and tested on two synthetic datasets and a dataset containing real ECG signals. The results demonstrate that while both DL models were capable of dealing with heavy and drifting noise, the CNN model was markedly superior to the LSTM model in terms of the Root Mean Square (RMS) error, and the wavelet technique was suitable only for rejecting random noise.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Time Modulated Array – A Database Approach.\n \n \n \n\n\n \n Euzière, J.; Guinvarc’h, R.; Sáenz, I. H.; Gillard, R.; and Uguen, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-4, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902834,\n  author = {J. Euzière and R. Guinvarc’h and I. H. Sáenz and R. Gillard and B. Uguen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Time Modulated Array – A Database Approach},\n  year = {2019},\n  pages = {1-4},\n  abstract = {Time Modulated Array offers a simple way to synthesize radiation patterns with low sidelobe levels, in time-average, using an optimized strategy of switching ON and OFF the elements of the array. In this work we show that the complexity of the optimization strategy can be reduced by first shrinking the search space to a subset, or database, of useful solutions over which the optimization will be launched. This reduction is obtained by applying the constraints in a progressive and hierarchical order. In this way, this problem of exponential growth is reduced to a polynomial growth, with respect to the size of the antenna array. We present the case of designing the time modulated array for radar applications where constant directivity and ability of intereference rejection is needed.},\n  keywords = {antenna arrays;antenna radiation patterns;optimisation;polynomials;radar antennas;radar computing;radar interference;time modulated array;database approach;low sidelobe levels;time-average;optimized strategy;optimization strategy;antenna array;Optimization;Databases;Antenna radiation patterns;Linear antenna arrays;Radar antennas;Switches;TMA;radar;rejection;search space;database},\n  doi = {10.23919/EUSIPCO.2019.8902834},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Time Modulated Arrays offer a simple way to synthesize radiation patterns with low sidelobe levels, in time-average, using an optimized strategy of switching the elements of the array ON and OFF. In this work we show that the complexity of the optimization strategy can be reduced by first shrinking the search space to a subset, or database, of useful solutions over which the optimization is then launched. This reduction is obtained by applying the constraints in a progressive and hierarchical order. In this way, a problem of exponential growth is reduced to one of polynomial growth with respect to the size of the antenna array. We present the case of designing a time modulated array for radar applications, where constant directivity and the ability to reject interference are needed.\n
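The time-average mechanism behind a TMA can be sketched numerically: the time-averaged pattern at the carrier equals that of a static array whose weight on each element is its ON-time fraction, so tapered ON-times act like an amplitude taper and lower the sidelobes. The 16-element broadside array and the Hamming-shaped duty cycles below are illustrative assumptions, not the paper's optimized switching sequences.

```python
import numpy as np

def array_factor(weights, theta, d=0.5):
    """Magnitude of the array factor of a uniform linear array
    (element spacing d in wavelengths, observation angles theta)."""
    k = np.arange(len(weights))
    psi = 2 * np.pi * d * np.sin(theta)
    return np.abs(np.sum(weights[:, None] * np.exp(1j * np.outer(k, psi)), axis=0))

def peak_sidelobe_db(af):
    """Peak sidelobe level: walk down from the main beam to its first null,
    then take the largest remaining lobe."""
    af = af / af.max()
    r = int(np.argmax(af))
    while r + 1 < len(af) and af[r + 1] < af[r]:
        r += 1
    return 20 * np.log10(af[r:].max())

n = 16
theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)

# tau_i = ON-time fraction of element i over one modulation period;
# the time-averaged pattern is array_factor(tau, theta).
psl_uniform = peak_sidelobe_db(array_factor(np.ones(n), theta))   # all always ON
psl_tapered = peak_sidelobe_db(array_factor(np.hamming(n), theta))
print(psl_tapered < psl_uniform - 10)  # True
```

Tapered ON-times buy sidelobe suppression at the usual cost of a wider main lobe, which is why an optimized switching strategy is needed rather than an arbitrary taper.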
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Subsampling of Multivariate Time-Vertex Graph Signals.\n \n \n \n \n\n\n \n Humbert, P.; Oudre, L.; and Vayatis, N.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SubsamplingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902836,\n  author = {P. Humbert and L. Oudre and N. Vayatis},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Subsampling of Multivariate Time-Vertex Graph Signals},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This article presents a new approach for processing and subsampling multivariate time-vertex graph signals. The main idea is to model the relationships within each dimension (time, space, feature space) with different graphs and to merge these structures. A new technique based on tensor formalism is provided, which aims to identify the frequency support of the graph signal in order to preserve its content after subsampling. Results are provided on real EEG data for data interpolation and reconstruction.},\n  keywords = {electroencephalography;graph theory;interpolation;medical signal processing;mesh generation;signal reconstruction;signal sampling;tensors;subsampling;graph signal;multivariate time-vertex graph signals;data interpolation;real EEG data;data reconstruction;frequency support;feature space;tensor formalism;Tensors;Signal processing;Laplace equations;Brain modeling;Electroencephalography;Fourier transforms;Europe;Graph Signal Processing (GSP);sampling over graphs;tensors},\n  doi = {10.23919/EUSIPCO.2019.8902836},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533909.pdf},\n}\n\n
\n
\n\n\n
\n This article presents a new approach for processing and subsampling multivariate time-vertex graph signals. The main idea is to model the relationships within each dimension (time, space, feature space) with different graphs and to merge these structures. A new technique based on tensor formalism is provided, which aims to identify the frequency support of the graph signal in order to preserve its content after subsampling. Results are provided on real EEG data for data interpolation and reconstruction.\n
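The notion of frequency support for a time-vertex signal — the quantity a subsampling scheme would try to preserve — can be sketched with a joint transform: a graph Fourier transform (Laplacian eigenbasis) over vertices combined with a DFT over time. The small path graph and the single-mode test signal are assumptions for the demo.

```python
import numpy as np

# Small path graph: combinatorial Laplacian and its eigenbasis (the GFT)
n = 6
A = np.zeros((n, n))
for v in range(n - 1):
    A[v, v + 1] = A[v + 1, v] = 1
L = np.diag(A.sum(1)) - A
lam, U = np.linalg.eigh(L)                 # columns of U = graph Fourier basis

# Time-vertex signal: one graph mode modulated by one temporal frequency
T = 32
t = np.arange(T)
X = np.outer(U[:, 2], np.cos(2 * np.pi * 4 * t / T))  # shape: vertices x time

# Joint Fourier transform: GFT over the vertex dimension, DFT over time
Xhat = U.T @ X                             # graph frequency domain
Xhat = np.fft.fft(Xhat, axis=1) / T        # temporal frequency domain

# Energy concentrates on graph mode 2 and temporal bins 4 and T-4:
# a sparse joint support that subsampling should preserve.
mag = np.abs(Xhat)
gi, ti = np.unravel_index(np.argmax(mag), mag.shape)
print(gi, ti in (4, T - 4))
```

A signal with such a sparse joint support can in principle be reconstructed from far fewer (vertex, time) samples than the full grid.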
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Shuffling for understanding multifractality, application to asset price time series.\n \n \n \n \n\n\n \n Abry, P.; Malevergne, Y.; Wendt, H.; Senneret, M.; Jaffrès, L.; and Liaustrat, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ShufflingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902837,\n  author = {P. Abry and Y. Malevergne and H. Wendt and M. Senneret and L. Jaffrès and B. Liaustrat},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Shuffling for understanding multifractality, application to asset price time series},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Multifractal analysis has become a standard signal processing tool successfully used to model scale-free temporal dynamics in many applications, very different in nature. This is notably the case in financial engineering where, after Mandelbrot's seminal contributions, multifractal models have been used since the late 90ies to describe temporal fluctuations in asset prices. However, what exact features of temporal dynamics are actually encoded in multifractal properties remains generally only partially understood. In finance, notably, multifractality is associated to the burstiness of the returns, yet its relation to trends (signs of the returns) or volatility (modulus of the returns) remains unclear. Comparing the estimated multifractal properties of well-controlled synthetic multifractal processes to those of surrogate data, obtained by applying random permutations (shuffling) either to signs, or to modulus, or to both, of increments of original data, permits to better understand what aspects of temporal dynamics are captured by multifractality. 
The same procedure applied to a large dataset of asset prices entering the composition of the Eurostoxx600 index permits to evidence a simple and solid relation between multifractality and volatility as well as a weaker and complicated relation to returns.},\n  keywords = {estimation theory;finance;fluctuations;fractals;pricing;signal processing;time series;asset price time series;standard signal processing tool;scale-free temporal dynamics;Mandelbrot's seminal contributions;temporal fluctuations;estimated multifractal properties;synthetic multifractal processes;random permutations;surrogate data;Fractals;Time series analysis;Brownian motion;Dynamics;Estimation;Signal processing;Discrete wavelet transforms;Multifractality;scale-free temporal dynamics;asset prices;finance;shuffling;surrogate},\n  doi = {10.23919/EUSIPCO.2019.8902837},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528786.pdf},\n}\n\n
\n
\n\n\n
\n Multifractal analysis has become a standard signal processing tool successfully used to model scale-free temporal dynamics in many applications, very different in nature. This is notably the case in financial engineering where, after Mandelbrot's seminal contributions, multifractal models have been used since the late 1990s to describe temporal fluctuations in asset prices. However, what exact features of temporal dynamics are actually encoded in multifractal properties remains generally only partially understood. In finance, notably, multifractality is associated with the burstiness of the returns, yet its relation to trends (signs of the returns) or volatility (modulus of the returns) remains unclear. Comparing the estimated multifractal properties of well-controlled synthetic multifractal processes to those of surrogate data, obtained by applying random permutations (shuffling) either to the signs, to the moduli, or to both, of the increments of the original data, makes it possible to better understand which aspects of temporal dynamics are captured by multifractality. The same procedure applied to a large dataset of asset prices entering the composition of the Eurostoxx600 index reveals a simple and solid relation between multifractality and volatility, as well as a weaker and more complicated relation to returns.\n
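The surrogate construction described here — factor the increments into signs and moduli, shuffle one factor, keep the other — can be sketched in a few lines. The heavy-tailed toy "returns" below are an assumption for the demo; the point is only the sign/modulus bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(increments, shuffle="signs", rng=rng):
    """Surrogate increments that keep one factor (signs or moduli)
    intact and randomly permute the other."""
    s = np.sign(increments)
    m = np.abs(increments)
    if shuffle in ("signs", "both"):
        s = rng.permutation(s)
    if shuffle in ("modulus", "both"):
        m = rng.permutation(m)
    return s * m

# Toy heavy-tailed increments standing in for asset returns
r = rng.standard_t(df=3, size=10000)

r_sign = surrogate(r, "signs")     # destroys trend structure, keeps volatility pattern's marginal
r_mod = surrogate(r, "modulus")    # keeps signs, destroys volatility clustering

# Shuffling signs leaves the marginal distribution of |r| untouched
print(np.allclose(np.sort(np.abs(r_sign)), np.sort(np.abs(r))))  # True
```

Comparing multifractal estimates on `r`, `r_sign`, and `r_mod` then reveals whether the multifractality comes from the signs, the moduli, or their dependence structure.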
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Compression of High-Dimensional Multispectral Image Time Series Using Tensor Decomposition Learning.\n \n \n \n \n\n\n \n Aidini, A.; Tsagkatakis, G.; and Tsakalides, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CompressionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902838,\n  author = {A. Aidini and G. Tsagkatakis and P. Tsakalides},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Compression of High-Dimensional Multispectral Image Time Series Using Tensor Decomposition Learning},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Multispectral imaging is widely used in many fields, such as in medicine and earth observation, as it provides valuable spatial, spectral and temporal information about the scene. It is of paramount importance that the large amount of images collected over time, and organized in multidimensional arrays known as tensors, be efficiently compressed in order to be stored or transmitted. In this paper, we present a compression algorithm which involves a training process and employs a symbol encoding dictionary. During training, we derive specially structured tensors from a given image time sequence using the CANDECOMP/PARAFAC (CP) decomposition. During runtime, every new image time sequence is quantized and encoded into a vector of coefficients corresponding to the learned CP decomposition. 
Experimental results on sequences of real satellite images demonstrate that we can efficiently handle higher-order tensors and obtain the decompressed data by composing the learned tensors by means of the received vector of coefficients, thus achieving a high compression ratio.},\n  keywords = {data compression;decomposition;geophysical image processing;image coding;image sequences;learning (artificial intelligence);tensors;time series;vectors;earth observation;valuable spatial information;spectral information;temporal information;multidimensional arrays;training process;symbol encoding dictionary;learned CP decomposition;higher-order tensors;satellite image sequence;tensor decomposition learning;high-dimensional multispectral image time series compression algorithm;CANDECOMP-PARAFAC decomposition;CP decomposition;image time sequence quantisation;image encoding;Tensors;Image coding;Time series analysis;Training;Matrix decomposition;Compression algorithms;Dictionaries;Compression;multispectral image time series;high-order tensors;CP decomposition;learning},\n  doi = {10.23919/EUSIPCO.2019.8902838},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533104.pdf},\n}\n\n
\n
\n\n\n
\n Multispectral imaging is widely used in many fields, such as medicine and earth observation, as it provides valuable spatial, spectral and temporal information about the scene. It is of paramount importance that the large number of images collected over time, and organized in multidimensional arrays known as tensors, be efficiently compressed in order to be stored or transmitted. In this paper, we present a compression algorithm which involves a training process and employs a symbol encoding dictionary. During training, we derive specially structured tensors from a given image time sequence using the CANDECOMP/PARAFAC (CP) decomposition. During runtime, every new image time sequence is quantized and encoded into a vector of coefficients corresponding to the learned CP decomposition. Experimental results on sequences of real satellite images demonstrate that we can efficiently handle higher-order tensors and obtain the decompressed data by composing the learned tensors by means of the received vector of coefficients, thus achieving a high compression ratio.\n
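The CP decomposition at the core of this approach can be sketched with a minimal alternating-least-squares fit of a 3-way tensor. This is a generic CP-ALS demonstration (random low-rank data, no quantization or learned dictionary), not the paper's compression pipeline; it shows where the compression ratio comes from: storing factor matrices instead of the full tensor.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product (ordering matches the C-order unfold below)."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def cp_als(X, rank, iters=200, seed=0):
    """Minimal CP (CANDECOMP/PARAFAC) fit of a 3-way tensor by
    alternating least squares; returns the three factor matrices."""
    rng = np.random.default_rng(seed)
    F = [rng.standard_normal((s, rank)) for s in X.shape]
    for _ in range(iters):
        for m in range(3):
            a, b = [F[i] for i in range(3) if i != m]   # the two fixed factors
            gram = (a.T @ a) * (b.T @ b)                # Hadamard of Gram matrices
            F[m] = unfold(X, m) @ khatri_rao(a, b) @ np.linalg.pinv(gram)
    return F

# Build an exactly rank-2 tensor (think "pixels x bands x time") and recover it
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((s, 2)) for s in (10, 8, 6))
X = np.einsum("ir,jr,kr->ijk", A, B, C)

F = cp_als(X, rank=2)
X_rec = np.einsum("ir,jr,kr->ijk", *F)
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
ratio = X.size / sum(f.size for f in F)   # raw entries vs stored coefficients
print(round(ratio, 1))  # 10.0
```

Here 480 tensor entries are represented by 48 factor coefficients, a 10x reduction; for genuinely low-rank data the reconstruction error is near machine precision.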
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Audio-Based Epileptic Seizure Detection.\n \n \n \n \n\n\n \n Istiaq Ahsan, M. N.; Kertesz, C.; Mesaros, A.; Heittola, T.; Knight, A.; and Virtanen, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Audio-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902840,\n  author = {M. N. {Istiaq Ahsan} and C. Kertesz and A. Mesaros and T. Heittola and A. Knight and T. Virtanen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Audio-Based Epileptic Seizure Detection},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper investigates automatic epileptic seizure detection from audio recordings using convolutional neural networks. The labeling and analysis of seizure events are necessary in the medical field for patient monitoring, but the manual annotation by expert annotators is time-consuming and extremely monotonous. The proposed method treats all seizure vocalizations as a single target event class, and models the seizure detection problem in terms of detecting the target vs non-target classes. For detection, the method employs a convolutional neural network trained to detect the seizure events in short time segments, based on mel-energies as feature representation. Experiments carried out with different seizure types on 900 hours of audio recordings from 40 patients show that the proposed approach can detect seizures with over 80% accuracy, with a 13% false positive rate and a 22.8% false negative rate.},\n  keywords = {electroencephalography;feature extraction;medical disorders;medical signal detection;medical signal processing;neural nets;patient monitoring;signal classification;audio-based epileptic seizure detection;automatic epileptic seizure detection;audio recordings;convolutional neural network;seizure events;medical field;patient monitoring;manual annotation;expert annotators;seizure vocalizations;single target event class;seizure detection problem;nontarget classes;short time segments;seizure types;time 900.0 hour;Training;Feature extraction;Audio recording;Event detection;Monitoring;Video recording;Testing;Epileptic seizure detection;convolutional neural network (CNN);sound event detection;audio processing and analysis.},\n  doi = 
{10.23919/EUSIPCO.2019.8902840},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533392.pdf},\n}\n\n
\n
\n\n\n
\n This paper investigates automatic epileptic seizure detection from audio recordings using convolutional neural networks. The labeling and analysis of seizure events are necessary in the medical field for patient monitoring, but the manual annotation by expert annotators is time-consuming and extremely monotonous. The proposed method treats all seizure vocalizations as a single target event class, and models the seizure detection problem in terms of detecting the target vs non-target classes. For detection, the method employs a convolutional neural network trained to detect the seizure events in short time segments, based on mel-energies as feature representation. Experiments carried out with different seizure types on 900 hours of audio recordings from 40 patients show that the proposed approach can detect seizures with over 80% accuracy, with a 13% false positive rate and a 22.8% false negative rate.\n
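The mel-energy feature representation mentioned in the abstract can be sketched for a single frame: pool the power spectrum with triangular filters spaced uniformly on the mel scale and take the log. The band count, FFT size, and window are assumptions for the demo, not the paper's configuration.

```python
import numpy as np

def hz_to_mel(f):
    return 2595 * np.log10(1 + f / 700)

def mel_to_hz(m):
    return 700 * (10 ** (m / 2595) - 1)

def mel_energies(frame, fs, n_mels=8, n_fft=512):
    """Log mel-band energies of one frame: |FFT|^2 pooled by triangular
    filters whose corner frequencies are uniform on the mel scale."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n_fft)) ** 2
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    edges = mel_to_hz(np.linspace(0, hz_to_mel(fs / 2), n_mels + 2))
    feats = np.empty(n_mels)
    for i in range(n_mels):
        lo, c, hi = edges[i], edges[i + 1], edges[i + 2]
        up = np.clip((freqs - lo) / (c - lo), 0, 1)      # rising slope
        down = np.clip((hi - freqs) / (hi - c), 0, 1)    # falling slope
        feats[i] = np.log(np.sum(spec * np.minimum(up, down)) + 1e-12)
    return feats

fs = 8000
t = np.arange(400) / fs
frame = np.sin(2 * np.pi * 300 * t)        # pure 300 Hz tone
f = mel_energies(frame, fs)
print(int(np.argmax(f)))                   # the band whose center is nearest 300 Hz
```

Stacking these vectors over consecutive frames yields the time-frequency input a CNN detector would consume.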
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n LTE Ranging Measurement Using Uplink Opportunistic Signals and the SAGE algorithm.\n \n \n \n \n\n\n \n Pin, A.; Rinaldo, R.; Tonello, A.; Marshall, C.; Driusso, M.; Biason, A.; and Torre, A. D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"LTEPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902842,\n  author = {A. Pin and R. Rinaldo and A. Tonello and C. Marshall and M. Driusso and A. Biason and A. D. Torre},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {LTE Ranging Measurement Using Uplink Opportunistic Signals and the SAGE algorithm},\n  year = {2019},\n  pages = {1-5},\n  abstract = {With the increase of services that need accurate location of the user, new techniques that cooperate with the Global Navigation Satellite System (GNSS) are necessary. GNSS suffers poor performance in indoor and dense urban environments, due to high signal attenuation and severe multipath propagation. The current release of the 3rd Generation Partnership Project(3GPP) LTE specification, supports the Uplink Time Difference Of Arrival (UTDOA) localization technique, which uses as reference the Sounding Reference Signal (SRS). Local Measurement Units (LMUs) devices use knowledge of the SRS to perform time difference measures. This paper studies the possibility of performing radio localization using a new UTDOA technique that exploits the uplink Demodulation Reference Signal (DM-RS) in 4G Long Term Evolution (LTE) cellular networks. 
We point out the advantages of our proposal and evaluate its feasibility by measuring the distance between two antennas using real DM-RS signals generated by an LTE module.},\n  keywords = {3G mobile communication;cellular radio;channel estimation;demodulation;direction-of-arrival estimation;indoor radio;Long Term Evolution;mobility management (mobile radio);radio direction-finding;radio receivers;radiofrequency interference;satellite navigation;time-of-arrival estimation;time difference measures;radio localization;UTDOA technique;uplink Demodulation Reference Signal;4G Long Term Evolution cellular networks;DM-RS signals;LTE module;LTE ranging Measurement;Uplink opportunistic signals;SAGE algorithm;Global Navigation Satellite System;GNSS;indoor environments;dense urban environments;high signal attenuation;severe multipath propagation;current release;Uplink Time Difference;Arrival localization technique;Sounding Reference Signal;SRS;Local Measurement Units devices;Uplink;Long Term Evolution;Global navigation satellite system;Antenna measurements;Data communication;3GPP;Time difference of arrival;LTE;Time Difference of Arrival;Ranging Measure;Opportunistic Positioning;Uplink DM-RS;SAGE algorithm},\n  doi = {10.23919/EUSIPCO.2019.8902842},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528956.pdf},\n}\n\n
\n
\n\n\n
\n With the increase of services that need accurate location of the user, new techniques that cooperate with the Global Navigation Satellite System (GNSS) are necessary. GNSS suffers from poor performance in indoor and dense urban environments, due to high signal attenuation and severe multipath propagation. The current release of the 3rd Generation Partnership Project (3GPP) LTE specification supports the Uplink Time Difference Of Arrival (UTDOA) localization technique, which uses as reference the Sounding Reference Signal (SRS). Local Measurement Unit (LMU) devices use knowledge of the SRS to perform time difference measurements. This paper studies the possibility of performing radio localization using a new UTDOA technique that exploits the uplink Demodulation Reference Signal (DM-RS) in 4G Long Term Evolution (LTE) cellular networks. We point out the advantages of our proposal and evaluate its feasibility by measuring the distance between two antennas using real DM-RS signals generated by an LTE module.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Maximum-likelihood Detection of Impulsive Noise Support for Channel Parameter Estimation.\n \n \n \n \n\n\n \n Mestre, X.; Payaró, M.; and Shrestha, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Maximum-likelihoodPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902843,\n  author = {X. Mestre and M. Payaró and D. Shrestha},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Maximum-likelihood Detection of Impulsive Noise Support for Channel Parameter Estimation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we consider the problem of estimating channel parameters in the presence of impulsive noise (IN). To that end, two novel maximum-likelihood based IN support detection techniques are proposed for the cases where the IN is modeled to be a deterministic quantity or a random one. For the deterministic case, an exact closed-form expression for the distribution of the joint likelihood statistic is provided whereas, in the random case, an exact expression of its asymptotic distribution is derived. In both cases, the computed distribution of the likelihood statistic enables the joint estimation of the channel parameters and the detection of the IN support with guarantees on the false alarm probability for the samples that are estimated to be in the IN support set. The goodness of the proposed expressions is validated via numerical simulations.},\n  keywords = {approximation theory;channel estimation;maximum likelihood detection;maximum likelihood estimation;probability;channel parameters;support set;maximum-likelihood detection;impulsive noise support;channel parameter estimation;novel maximum-likelihood;support detection techniques;deterministic quantity;deterministic case;closed-form expression;joint likelihood statistic;random case;asymptotic distribution;computed distribution;joint estimation;Channel estimation;Maximum likelihood estimation;Matching pursuit algorithms;Signal processing;Maximum likelihood detection;Probability;Impulsive noise;support detection;maximumlikelihood estimation and detection.},\n  doi = {10.23919/EUSIPCO.2019.8902843},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533323.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we consider the problem of estimating channel parameters in the presence of impulsive noise (IN). To that end, two novel maximum-likelihood based IN support detection techniques are proposed for the cases where the IN is modeled to be a deterministic quantity or a random one. For the deterministic case, an exact closed-form expression for the distribution of the joint likelihood statistic is provided whereas, in the random case, an exact expression of its asymptotic distribution is derived. In both cases, the computed distribution of the likelihood statistic enables the joint estimation of the channel parameters and the detection of the IN support with guarantees on the false alarm probability for the samples that are estimated to be in the IN support set. The goodness of the proposed expressions is validated via numerical simulations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n One-Class Feature Learning Using Intra-Class Splitting.\n \n \n \n \n\n\n \n Schlachter, P.; Liao, Y.; and Yang, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"One-ClassPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902848,\n  author = {P. Schlachter and Y. Liao and B. Yang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {One-Class Feature Learning Using Intra-Class Splitting},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper proposes a novel generic one-class feature learning method based on intra-class splitting. In one-class classification, feature learning is challenging, because only samples of one class are available during training. Hence, state-of-the-art methods require reference multi-class datasets to pretrain feature extractors. In contrast, the proposed method realizes feature learning by splitting the given normal class into typical and atypical normal samples. By introducing closeness loss and dispersion loss, an intra-class joint training procedure between the two subsets after splitting enables the extraction of valuable features for one-class classification. Various experiments on three well-known image classification datasets demonstrate the effectiveness of our method, which outperformed other baseline models on average.},\n  keywords = {feature extraction;image classification;learning (artificial intelligence);intra-class splitting;novel generic one-class feature;one-class classification;feature learning;reference multiclass datasets;feature extractors;given normal class;typical samples;atypical normal samples;intra-class joint training procedure;valuable features;Training;Dispersion;Feature extraction;Image reconstruction;Measurement;Learning systems;Distributed databases},\n  doi = {10.23919/EUSIPCO.2019.8902848},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533728.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a novel generic one-class feature learning method based on intra-class splitting. In one-class classification, feature learning is challenging, because only samples of one class are available during training. Hence, state-of-the-art methods require reference multi-class datasets to pretrain feature extractors. In contrast, the proposed method realizes feature learning by splitting the given normal class into typical and atypical normal samples. By introducing closeness loss and dispersion loss, an intra-class joint training procedure between the two subsets after splitting enables the extraction of valuable features for one-class classification. Various experiments on three well-known image classification datasets demonstrate the effectiveness of our method, which outperformed other baseline models on average.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Prediction Residue Analysis in MPEG-2 Double Compressed Video Sequences.\n \n \n \n \n\n\n \n Vázquez-Padín, D.; and Pérez-González, F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"PredictionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902850,\n  author = {D. Vázquez-Padín and F. Pérez-González},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Prediction Residue Analysis in MPEG-2 Double Compressed Video Sequences},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In video forensics, the study of the prediction residue across successive frames is key to verify the integrity of digital videos. Focusing on an MPEG-2 double compression scheme, we analyze how the variance of the prediction residue evolves during the second compression depending on the type of frame (either I or P) employed in the first encoding and exploring different compression strengths and deadzone widths for quantization. This analysis reveals that the width of the quantizer deadzones actually affects the performance of existing methods based on the Variation of Prediction Footprint (VPF) for double compression detection and Group Of Pictures (GOP) size estimation. The predicted behavior from the theoretical characterization of the prediction residue is confirmed through experimental results with real video sequences.},\n  keywords = {data compression;encoding;image forensics;image sequences;video coding;video forensics;successive frames;digital videos;encoding;compression strengths;deadzone widths;quantizer deadzones;Prediction Footprint;double compression detection;Prediction residue analysis;MPEG-2 double compressed video sequences;group of pictures size estimation;Encoding;Quantization (signal);Reactive power;Discrete cosine transforms;Transform coding;Streaming media;Distortion;Prediction residue analysis;double compression detection;GOP size estimation;video forensics;MPEG-2},\n  doi = {10.23919/EUSIPCO.2019.8902850},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533825.pdf},\n}\n\n
\n
\n\n\n
\n In video forensics, the study of the prediction residue across successive frames is key to verify the integrity of digital videos. Focusing on an MPEG-2 double compression scheme, we analyze how the variance of the prediction residue evolves during the second compression depending on the type of frame (either I or P) employed in the first encoding and exploring different compression strengths and deadzone widths for quantization. This analysis reveals that the width of the quantizer deadzones actually affects the performance of existing methods based on the Variation of Prediction Footprint (VPF) for double compression detection and Group Of Pictures (GOP) size estimation. The predicted behavior from the theoretical characterization of the prediction residue is confirmed through experimental results with real video sequences.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reparameterization Gradient Message Passing.\n \n \n \n \n\n\n \n Akbayrak, S.; and de Vries, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ReparameterizationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902930,\n  author = {S. Akbayrak and B. de Vries},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Reparameterization Gradient Message Passing},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper we consider efficient message passing based inference in a factor graph representation of a probabilistic model. Current message passing methods, such as belief propagation, variational message passing or expectation propagation, rely on analytically pre-computed message update rules. In practical models, it is often not feasible to analytically derive all update rules for all factors in the graph and as a result, efficient message passing-based inference cannot proceed. In related research on (non-message passing-based) inference, a “reparameterization trick” has led to a considerable extension of the class of models for which automated inference is possible. In this paper, we introduce Reparameterization Gradient Message Passing (RGMP), which is a new message passing method based on the reparameterization gradient. In most models, the large majority of messages can be analytically derived and we resort to RGMP only when necessary. We will argue that this kind of hybrid message passing leads naturally to low-variance gradients.},\n  keywords = {graph theory;inference mechanisms;message passing;probability;Reparameterization Gradient Message Passing;factor graph representation;probabilistic model;current message passing methods;variational message passing;pre-computed message update rules;efficient message passing-based inference;message passing method;hybrid message passing;Message passing;Computational modeling;Random variables;Analytical models;Europe;Signal processing;Probabilistic logic},\n  doi = {10.23919/EUSIPCO.2019.8902930},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532623.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we consider efficient message passing based inference in a factor graph representation of a probabilistic model. Current message passing methods, such as belief propagation, variational message passing or expectation propagation, rely on analytically pre-computed message update rules. In practical models, it is often not feasible to analytically derive all update rules for all factors in the graph and as a result, efficient message passing-based inference cannot proceed. In related research on (non-message passing-based) inference, a “reparameterization trick” has led to a considerable extension of the class of models for which automated inference is possible. In this paper, we introduce Reparameterization Gradient Message Passing (RGMP), which is a new message passing method based on the reparameterization gradient. In most models, the large majority of messages can be analytically derived and we resort to RGMP only when necessary. We will argue that this kind of hybrid message passing leads naturally to low-variance gradients.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n On Multivariate Non-Gaussian Scale Invariance: Fractional Lévy Processes And Wavelet Estimation.\n \n \n \n\n\n \n Boniece, B. C.; Didier, G.; Wendt, H.; and Abry, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902931,\n  author = {B. C. Boniece and G. Didier and H. Wendt and P. Abry},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {On Multivariate Non-Gaussian Scale Invariance: Fractional Lévy Processes And Wavelet Estimation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In the modern world of “Big Data,” dynamic signals are often multivariate and characterized by joint scale-free dynamics (self-similarity) and non-Gaussianity. In this paper, we examine the performance of joint wavelet eigenanalysis estimation for the Hurst parameters (scaling exponents) of non-Gaussian multivariate processes. We propose a new process called operator fractional Lévy motion (ofLm) as a Lévy-type model for non-Gaussian multivariate self-similarity. Based on large size Monte Carlo simulations of bivariate ofLm with a combination of Gaussian and non-Gaussian marginals, the estimation performance for Hurst parameters is shown to be satisfactory over finite samples.},\n  keywords = {eigenvalues and eigenfunctions;Gaussian distribution;Gaussian processes;Monte Carlo methods;wavelet transforms;multivariate nonGaussian scale invariance;fractional Lévy processes;wavelet estimation;Big Data;joint scale-free dynamics;joint wavelet;Hurst parameters;scaling exponents;nonGaussian multivariate processes;operator fractional Lévy motion;Lévy-type model;nonGaussian multivariate self-similarity;nonGaussian marginals;estimation performance;Estimation;Eigenvalues and eigenfunctions;Wavelet transforms;Signal processing;Monte Carlo methods;Covariance matrices;multivariate self-similarity;non-Gaussian process;Lévy processes;wavelets},\n  doi = {10.23919/EUSIPCO.2019.8902931},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n In the modern world of “Big Data,” dynamic signals are often multivariate and characterized by joint scale-free dynamics (self-similarity) and non-Gaussianity. In this paper, we examine the performance of joint wavelet eigenanalysis estimation for the Hurst parameters (scaling exponents) of non-Gaussian multivariate processes. We propose a new process called operator fractional Lévy motion (ofLm) as a Lévy-type model for non-Gaussian multivariate self-similarity. Based on large size Monte Carlo simulations of bivariate ofLm with a combination of Gaussian and non-Gaussian marginals, the estimation performance for Hurst parameters is shown to be satisfactory over finite samples.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Referenceless Performance Evaluation of Audio Source Separation using Deep Neural Networks.\n \n \n \n \n\n\n \n Grais, E. M.; Wierstorf, H.; Ward, D.; Mason, R.; and Plumbley, M. D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ReferencelessPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902932,\n  author = {E. M. Grais and H. Wierstorf and D. Ward and R. Mason and M. D. Plumbley},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Referenceless Performance Evaluation of Audio Source Separation using Deep Neural Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Current performance evaluation for audio source separation depends on comparing the processed or separated signals with reference signals. Therefore, common performance evaluation toolkits are not applicable to real-world situations where the ground truth audio is unavailable. In this paper, we propose a performance evaluation technique that does not require reference signals in order to assess separation quality. The proposed technique uses a deep neural network (DNN) to map the processed audio into its quality score. Our experiment results show that the DNN is capable of predicting the sources-to-artifacts ratio from the blind source separation evaluation toolkit [1] for singing-voice separation without the need for reference signals.},\n  keywords = {audio signal processing;blind source separation;neural nets;performance evaluation;referenceless performance evaluation;audio source separation;deep neural network;current performance evaluation;processed separated signals;reference signals;common performance evaluation toolkits;ground truth audio;performance evaluation technique;separation quality;processed audio;sources-to-artifacts ratio;blind source separation evaluation toolkit;singing-voice separation;Source separation;Signal processing algorithms;Prediction algorithms;Neural networks;Measurement;Training;Feature extraction;Performance evaluation;deep learning;audio source separation;BSS-Eval sources-to-artifacts ratio.},\n  doi = {10.23919/EUSIPCO.2019.8902932},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530422.pdf},\n}\n\n
\n
\n\n\n
\n Current performance evaluation for audio source separation depends on comparing the processed or separated signals with reference signals. Therefore, common performance evaluation toolkits are not applicable to real-world situations where the ground truth audio is unavailable. In this paper, we propose a performance evaluation technique that does not require reference signals in order to assess separation quality. The proposed technique uses a deep neural network (DNN) to map the processed audio into its quality score. Our experiment results show that the DNN is capable of predicting the sources-to-artifacts ratio from the blind source separation evaluation toolkit [1] for singing-voice separation without the need for reference signals.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Deep Reinforcement Learning for Autonomous Model-Free Navigation with Partial Observability.\n \n \n \n \n\n\n \n Tapia, D.; Parras, J.; and Zazo, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902933,\n  author = {D. Tapia and J. Parras and S. Zazo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep Reinforcement Learning for Autonomous Model-Free Navigation with Partial Observability},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Navigation is known to be a hard Sequential Decision-Making problem that attracts the attention of a large number of fields like Artificial Intelligence or Robotics. In this work, we approach the problem of partially observable navigation with a reactive system trained by model-free Reinforcement Learning. The advantages of this learned approach include reducing the engineering effort at the cost of more computing power during training. We designed an agent and an environment with a focus on being able to navigate independently of the map. We use well-tested general Reinforcement Learning algorithms without any hyper-parameter tuning and achieve promising results. Our results show that several general purpose Reinforcement Learning algorithms can reach the target in our navigation setup in more than 85% of the episodes. Hence, these algorithms may provide a significant step forward towards autonomous navigation systems.},\n  keywords = {decision making;learning (artificial intelligence);mobile robots;path planning;partial observability;sequential decision-making problem;artificial intelligence;reactive system;model-free reinforcement learning;hyper-parameter tuning;autonomous navigation systems;autonomous model-free navigation;general reinforcement learning algorithms;Navigation;Task analysis;Reinforcement learning;Signal processing algorithms;Robots;Computational modeling;Markov processes;Navigation;Reinforcement Learning;Robotics;Artificial Intelligence;Partially Observable Markov Decision Processes},\n  doi = {10.23919/EUSIPCO.2019.8902933},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533736.pdf},\n}\n\n
\n
\n\n\n
\n Navigation is known to be a hard Sequential Decision-Making problem that attracts the attention of a large number of fields like Artificial Intelligence or Robotics. In this work, we approach the problem of partially observable navigation with a reactive system trained by model-free Reinforcement Learning. The advantages of this learned approach include reducing the engineering effort at the cost of more computing power during training. We designed an agent and an environment with a focus on being able to navigate independently of the map. We use well-tested general Reinforcement Learning algorithms without any hyper-parameter tuning and achieve promising results. Our results show that several general purpose Reinforcement Learning algorithms can reach the target in our navigation setup in more than 85% of the episodes. Hence, these algorithms may provide a significant step forward towards autonomous navigation systems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Training Generative Adversarial Networks With Weights.\n \n \n \n \n\n\n \n Pantazis, Y.; Paul, D.; Fasoulakis, M.; and Stylianou, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"TrainingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902934,\n  author = {Y. Pantazis and D. Paul and M. Fasoulakis and Y. Stylianou},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Training Generative Adversarial Networks With Weights},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The impressive success of Generative Adversarial Networks (GANs) is often overshadowed by the difficulties in their training. Despite the continuous efforts and improvements, there are still open issues regarding their convergence properties. In this paper, we propose a simple training variation where suitable weights are defined and assist the training of the Generator. We provide theoretical arguments which indicate that the proposed algorithm is better than the baseline algorithm in the sense of creating a stronger Generator at each iteration. Performance results showed that the new algorithm is more accurate and converges faster on both synthetic and image datasets, resulting in improvements ranging between 5% and 50%.},\n  keywords = {learning (artificial intelligence);neural nets;generative adversarial networks;convergence properties;baseline algorithm;Gallium nitride;Generators;Training;Signal processing algorithms;Generative adversarial networks;Signal processing;Games;Generative adversarial networks;multiplicative weight update method;training algorithm.},\n  doi = {10.23919/EUSIPCO.2019.8902934},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533474.pdf},\n}\n\n
\n
\n\n\n
\n The impressive success of Generative Adversarial Networks (GANs) is often overshadowed by the difficulties in their training. Despite the continuous efforts and improvements, there are still open issues regarding their convergence properties. In this paper, we propose a simple training variation where suitable weights are defined and assist the training of the Generator. We provide theoretical arguments which indicate that the proposed algorithm is better than the baseline algorithm in the sense of creating a stronger Generator at each iteration. Performance results showed that the new algorithm is more accurate and converges faster on both synthetic and image datasets, resulting in improvements ranging between 5% and 50%.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Prediction of FDG uptake in Lung Tumors from CT Images Using Generative Adversarial Networks.\n \n \n \n \n\n\n \n Liebgott, A.; Hinderer, D.; Armanious, K.; Bartler, A.; Nikolaou, K.; Gatidis, S.; and Yang, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"PredictionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902935,\n  author = {A. Liebgott and D. Hinderer and K. Armanious and A. Bartler and K. Nikolaou and S. Gatidis and B. Yang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Prediction of FDG uptake in Lung Tumors from CT Images Using Generative Adversarial Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In modern medicine, combined PET-CT is a commonly-used tool in clinical diagnostics, which is especially important in oncology for staging or treatment planning. Variations in FDG uptake visible in a PET image, which indicate variances in metabolic activity, are not visually recognizable within a CT scan from the same region, making both imaging modalities necessary for diagnosis and exposing the patient to a high amount of radiation. In this study, we investigate the possibility of using generative adversarial networks (GANs) to synthesize a PET image from a CT scan to predict metabolic activity.},\n  keywords = {cancer;computerised tomography;lung;medical image processing;positron emission tomography;tumours;FDG uptake;lung tumors;CT images;generative adversarial networks;PET-CT scan;oncology;PET image;metabolic activity;Computed tomography;Tumors;Gallium nitride;Generative adversarial networks;Training;Positron emission tomography;Visualization;PET/CT;lung cancer;FDG uptake;machine learning;GANs},\n  doi = {10.23919/EUSIPCO.2019.8902935},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533496.pdf},\n}\n\n
\n
\n\n\n
\n In modern medicine, combined PET-CT is a commonly-used tool in clinical diagnostics, which is especially important in oncology for staging or treatment planning. Variations in FDG uptake visible in a PET image, which indicate variances in metabolic activity, are not visually recognizable within a CT scan from the same region, making both imaging modalities necessary for diagnosis and exposing the patient to a high amount of radiation. In this study, we investigate the possibility of using generative adversarial networks (GANs) to synthesize a PET image from a CT scan to predict metabolic activity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Novel Recurrent Neural Network Architecture for Classification of Atrial Fibrillation Using Single-lead ECG.\n \n \n \n \n\n\n \n Banerjee, R.; Ghose, A.; and Khandelwal, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902936,\n  author = {R. Banerjee and A. Ghose and S. Khandelwal},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Novel Recurrent Neural Network Architecture for Classification of Atrial Fibrillation Using Single-lead ECG},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Atrial Fibrillation (AF) is a type of abnormal heart rhythm which may lead to a stroke or cardiac arrest. In spite of numerous research works, developing an automatic mechanism for accurate detection of AF remains a popular yet unsolved problem. In this paper, we propose a deep neural network architecture for classification of AF using single-lead Electrocardiogram (ECG) signals of short duration. We define a novel Recurrent Neural Network (RNN) structure, comprising two Long-Short Term Memory (LSTM) networks for temporal analysis of RR intervals and PR intervals in an ECG recording. Output states of the two LSTMs are merged at the dense layer along with a set of hand-crafted statistical features, related to the measurement of heart rate variability (HRV). The proposed architecture is proven on the open access PhysioNet Challenge 2017 dataset, containing more than 8500 single-lead ECG recordings. Results show that our methodology yields sensitivity of 0.93, specificity of 0.98 and F1-score of 0.89 in classifying AF, which is better than the existing accuracy scores reported on the dataset.},\n  keywords = {bioelectric potentials;electrocardiography;medical disorders;medical signal detection;medical signal processing;recurrent neural nets;signal classification;recurrent neural network architecture;atrial fibrillation classification;abnormal heart rhythm;stroke;deep neural network architecture;single-lead electrocardiogram signals;recurrent neural network structure;long-short term memory networks;temporal analysis;RR intervals;PR intervals;hand-crafted statistical features;heart rate variability;open access PhysioNet Challenge 2017 dataset;single-lead ECG recordings;cardiac arrest;Electrocardiography;Heart rate variability;Time series analysis;Signal processing;Recurrent neural networks;Training;Atrial Fibrillation;Electrocardiogram;Long-Short Term Memory;Classification},\n  doi = {10.23919/EUSIPCO.2019.8902936},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529299.pdf},\n}\n\n
\n
\n\n\n
\n Atrial Fibrillation (AF) is a type of abnormal heart rhythm which may lead to a stroke or cardiac arrest. Despite numerous research works, developing an automatic mechanism for accurate detection of AF remains a popular yet unsolved problem. In this paper, we propose a deep neural network architecture for classification of AF using single-lead Electrocardiogram (ECG) signals of short duration. We define a novel Recurrent Neural Network (RNN) structure, comprising two Long Short-Term Memory (LSTM) networks for temporal analysis of RR intervals and PR intervals in an ECG recording. Output states of the two LSTMs are merged at the dense layer along with a set of hand-crafted statistical features related to the measurement of heart rate variability (HRV). The proposed architecture is validated on the open-access PhysioNet Challenge 2017 dataset, containing more than 8500 single-lead ECG recordings. Results show that our methodology yields a sensitivity of 0.93, specificity of 0.98 and F1-score of 0.89 in classifying AF, improving on the accuracy scores previously reported on the dataset.\n
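The RR intervals and HRV statistics mentioned in this abstract are standard quantities: RR intervals are the successive differences of R-peak times, and common HRV features include the mean RR, SDNN and RMSSD. The sketch below illustrates these definitions only; the peak times are hypothetical and this is not the authors' feature pipeline.

```python
import math

def rr_intervals(r_peaks_s):
    """Successive differences of R-peak times (seconds) -> RR intervals."""
    return [b - a for a, b in zip(r_peaks_s, r_peaks_s[1:])]

def hrv_features(rr):
    """A few standard HRV statistics: mean RR, SDNN, RMSSD."""
    n = len(rr)
    mean_rr = sum(rr) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / n)
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}

peaks = [0.0, 0.80, 1.62, 2.40, 3.25]  # hypothetical R-peak times in seconds
rr = rr_intervals(peaks)               # approx [0.80, 0.82, 0.78, 0.85]
print(hrv_features(rr))
```

In the paper's architecture such hand-crafted statistics are concatenated with the LSTM output states at the dense layer.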
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rank-one Tensor Approximation with Beta-divergence Cost Functions.\n \n \n \n \n\n\n \n Vandecappelle, M.; Vervliet, N.; and De Lathauwer, L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Rank-onePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902937,\n  author = {M. Vandecappelle and N. Vervliet and L. {De Lathauwer}},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Rank-one Tensor Approximation with Beta-divergence Cost Functions},\n  year = {2019},\n  pages = {1-5},\n  abstract = {β-divergence cost functions generalize three popular cost functions for low-rank tensor approximation by interpolating between them: the least-squares (LS) distance, the Kullback-Leibler (KL) divergence and the Itakura-Saito (IS) divergence. For certain types of data and specific noise distributions, beta-divergence cost functions can lead to more meaningful low-rank approximations than those obtained with the LS cost function. Unfortunately, much of the low-rank structure that is heavily exploited in existing second-order LS methods, is no longer exploitable when moving to general β-divergences. In this paper, we show that, unlike in the general rank-R case, rank-1 structure can still be exploited. We therefore propose an efficient method that uses second-order information to compute nonnegative rank-1 approximations of tensors for general β-divergence cost functions.},\n  keywords = {approximation theory;least squares approximations;tensors;rank-one tensor approximation;beta-divergence cost functions;low-rank tensor approximation;least-squares distance;Kullback-Leibler divergence;β-divergence cost functions;Tensors;Cost function;Manganese;Approximation algorithms;Jacobian matrices;Computational modeling;tensors;β-divergences;low-rank;CPD;BSS},\n  doi = {10.23919/EUSIPCO.2019.8902937},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530958.pdf},\n}\n\n
\n
\n\n\n
\n β-divergence cost functions generalize three popular cost functions for low-rank tensor approximation by interpolating between them: the least-squares (LS) distance, the Kullback-Leibler (KL) divergence and the Itakura-Saito (IS) divergence. For certain types of data and specific noise distributions, β-divergence cost functions can lead to more meaningful low-rank approximations than those obtained with the LS cost function. Unfortunately, much of the low-rank structure that is heavily exploited in existing second-order LS methods is no longer exploitable when moving to general β-divergences. In this paper, we show that, unlike in the general rank-R case, rank-1 structure can still be exploited. We therefore propose an efficient method that uses second-order information to compute nonnegative rank-1 approximations of tensors for general β-divergence cost functions.\n
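The interpolation the abstract describes is captured by the standard elementwise β-divergence, which reduces to the (half) squared LS distance at β = 2, the KL divergence at β = 1 and the IS divergence at β = 0. A minimal sketch of that definition (the formula is the standard one from the NMF/tensor literature, not code from the paper):

```python
import math

def beta_divergence(x, y, beta):
    """Elementwise beta-divergence d_beta(x || y) for x, y > 0.

    beta = 2 recovers half the squared LS distance, beta = 1 the
    Kullback-Leibler divergence, beta = 0 the Itakura-Saito divergence.
    """
    if beta == 1:                      # Kullback-Leibler
        return x * math.log(x / y) - x + y
    if beta == 0:                      # Itakura-Saito
        return x / y - math.log(x / y) - 1
    return (x ** beta + (beta - 1) * y ** beta
            - beta * x * y ** (beta - 1)) / (beta * (beta - 1))

print(beta_divergence(2.0, 1.0, 2))   # 0.5 * (2 - 1)^2 = 0.5
```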
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian Sequential Joint Signal Detection and Signal-to-Noise Ratio Estimation.\n \n \n \n \n\n\n \n Reinhard, D.; Fauß, M.; and Zoubir, A. M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902938,\n  author = {D. Reinhard and M. Fauß and A. M. Zoubir},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian Sequential Joint Signal Detection and Signal-to-Noise Ratio Estimation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Jointly detecting a signal in noise and, in case a signal is present, estimating the Signal-to-Noise Ratio (SNR) is investigated in a sequential setup. The sequential test is designed such that it achieves desired error probabilities and Mean-Squared Errors (MSEs), while the expected number of samples is minimized. This problem is first converted to an unconstrained problem, which is then reduced to an optimal stopping problem. The solution, which is obtained by means of dynamic programming, is characterized by a non-linear Bellman equation. A gradient ascent approach is then presented to select the cost coefficients of the Bellman equation such that the desired error probabilities and MSEs are achieved. A numerical example concludes the work.},\n  keywords = {Bayes methods;dynamic programming;error statistics;estimation theory;gradient methods;mean square error methods;minimisation;nonlinear equations;probability;signal detection;MSEs;unconstrained problem;optimal stopping problem;nonlinear Bellman equation;error probabilities;signal-to-noise ratio estimation;sequential setup;sequential test;Mean-Squared Errors;Bayesian sequential joint signal detection;SNR;dynamic programming;gradient ascent approach;Bellman equation;cost coefficients;Estimation;Signal to noise ratio;Bayes methods;Error probability;Random variables;Cost function;sequential analysis;joint detection and estimation;signal-to-noise ratio estimation;Monte Carlo;optimal stopping},\n  doi = {10.23919/EUSIPCO.2019.8902938},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532715.pdf},\n}\n\n
\n
\n\n\n
\n Jointly detecting a signal in noise and, in case a signal is present, estimating the Signal-to-Noise Ratio (SNR) is investigated in a sequential setup. The sequential test is designed such that it achieves desired error probabilities and Mean-Squared Errors (MSEs), while the expected number of samples is minimized. This problem is first converted to an unconstrained problem, which is then reduced to an optimal stopping problem. The solution, which is obtained by means of dynamic programming, is characterized by a non-linear Bellman equation. A gradient ascent approach is then presented to select the cost coefficients of the Bellman equation such that the desired error probabilities and MSEs are achieved. A numerical example concludes the work.\n
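The stop-as-soon-as-the-evidence-suffices principle behind sequential tests of this kind is classically illustrated by Wald's SPRT, a simpler non-Bayesian relative of the joint test designed in the paper. The sketch below tests two Gaussian means with a two-threshold stopping rule; the means, variance and thresholds are illustrative assumptions, not values from the paper.

```python
import math

def sprt_gaussian(samples, mu0=0.0, mu1=1.0, sigma=1.0,
                  log_a=math.log(99), log_b=math.log(1 / 99)):
    """Wald's SPRT for H0: mean mu0 vs H1: mean mu1 (known sigma).

    Accumulates the log-likelihood ratio and stops once it leaves
    [log_b, log_a]; returns (decision, number of samples used).
    """
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += ((mu1 - mu0) * x - 0.5 * (mu1 ** 2 - mu0 ** 2)) / sigma ** 2
        if llr >= log_a:
            return 1, n        # accept H1
        if llr <= log_b:
            return 0, n        # accept H0
    return (1 if llr > 0 else 0), len(samples)

print(sprt_gaussian([0.9, 1.1, 1.2, 0.8, 1.0, 1.3, 0.7, 1.1, 0.9, 1.2]))  # (1, 10)
```

The paper goes further: it jointly controls estimation MSE, tunes the Bellman-equation cost coefficients by gradient ascent, and minimizes the expected sample count, none of which this sketch attempts.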
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Forward Looking GPR-Based Landmine Detection Using a Robust Likelihood Ratio Test.\n \n \n \n \n\n\n \n Pambudi, A. D.; Fauß, M.; Ahmad, F.; and Zoubir, A. M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ForwardPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902939,\n  author = {A. D. Pambudi and M. Fauß and F. Ahmad and A. M. Zoubir},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Forward Looking GPR-Based Landmine Detection Using a Robust Likelihood Ratio Test},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We propose a robust likelihood ratio test to detect landmines using forward-looking ground penetrating radar. Instead of modeling the distributions of the target and clutter returns with parametric families, we use a kernel density estimator to construct a band of feasible probability densities under each hypothesis. The likelihood ratio test is then devised based on the least favorable densities within the bands. This detector is designed to maximize the worst-case performance over all the feasible density pairs and, hence, does not require strong assumptions about the clutter and noise distributions.},\n  keywords = {ground penetrating radar;landmine detection;probability;radar clutter;ground penetrating radar;clutter returns;kernel density estimator;feasible probability densities;robust likelihood ratio test;landmines;forward looking GPR-based landmine detection;Landmine detection;Clutter;Light rail systems;Tomography;Apertures;Signal processing;Plastics;Forward looking ground penetrating radar;landmine detection;robust probability ratio test;band model},\n  doi = {10.23919/EUSIPCO.2019.8902939},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533650.pdf},\n}\n\n
\n
\n\n\n
\n We propose a robust likelihood ratio test to detect landmines using forward-looking ground penetrating radar. Instead of modeling the distributions of the target and clutter returns with parametric families, we use a kernel density estimator to construct a band of feasible probability densities under each hypothesis. The likelihood ratio test is then devised based on the least favorable densities within the bands. This detector is designed to maximize the worst-case performance over all the feasible density pairs and, hence, does not require strong assumptions about the clutter and noise distributions.\n
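Tests built from least favorable densities classically take the form of a censored (clipped) likelihood ratio, as in Huber's robust hypothesis testing: the nominal ratio is bounded so that no single outlier can dominate the decision. A minimal sketch of that clipping idea (the band construction via kernel density estimates in the paper is more elaborate; the bounds here are illustrative):

```python
def clipped_lr(lr, c_low, c_high):
    """Censored likelihood ratio, the Huber-style robust test statistic.

    Under least favorable densities the nominal ratio lr = p1/p0 is
    clipped to [c_low, c_high], limiting each sample's influence.
    """
    return min(c_high, max(c_low, lr))

# An outlier with nominal ratio 50 contributes at most c_high:
print([clipped_lr(r, 0.2, 5.0) for r in (0.05, 1.3, 50.0)])  # [0.2, 1.3, 5.0]
```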
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Discrete-Valued Vector Reconstruction by Optimization with Sum of Sparse Regularizers.\n \n \n \n \n\n\n \n Hayakawa, R.; and Hayashi, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Discrete-ValuedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902940,\n  author = {R. Hayakawa and K. Hayashi},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Discrete-Valued Vector Reconstruction by Optimization with Sum of Sparse Regularizers},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose a possibly nonconvex optimization problem to reconstruct a discrete-valued vector from its underdetermined linear measurements. The proposed sum of sparse regularizers (SSR) optimization uses the sum of sparse regularizers as a regularizer for the discrete-valued vector. We also propose two proximal splitting algorithms for the SSR optimization problem on the basis of alternating direction method of multipliers (ADMM) and primal-dual splitting (PDS). The ADMM based algorithm can achieve faster convergence, whereas the PDS based algorithm does not require the computation of any inverse matrix. Moreover, we extend the ADMM based approach for the reconstruction of complex discrete-valued vectors. Note that the proposed approach can use any sparse regularizer as long as its proximity operator can be efficiently computed. 
Simulation results show that the proposed algorithms with nonconvex regularizers can achieve good reconstruction performance.},\n  keywords = {concave programming;image reconstruction;iterative methods;nonconvex regularizers;good reconstruction performance;discrete-valued vector reconstruction;sparse regularizer;possibly nonconvex optimization problem;underdetermined linear measurements;sparse regularizers optimization;proximal splitting algorithms;SSR optimization problem;primal-dual splitting;ADMM based approach;Optimization;Convex functions;Signal processing algorithms;Handheld computers;Computational complexity;Convergence;Image reconstruction;Discrete-valued vector reconstruction;nonconvex optimization;alternating direction method of multipliers;primal-dual splitting},\n  doi = {10.23919/EUSIPCO.2019.8902940},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530861.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a possibly nonconvex optimization problem to reconstruct a discrete-valued vector from its underdetermined linear measurements. The proposed sum of sparse regularizers (SSR) optimization uses the sum of sparse regularizers as a regularizer for the discrete-valued vector. We also propose two proximal splitting algorithms for the SSR optimization problem on the basis of the alternating direction method of multipliers (ADMM) and primal-dual splitting (PDS). The ADMM-based algorithm can achieve faster convergence, whereas the PDS-based algorithm does not require the computation of any inverse matrix. Moreover, we extend the ADMM-based approach to the reconstruction of complex discrete-valued vectors. Note that the proposed approach can use any sparse regularizer as long as its proximity operator can be efficiently computed. Simulation results show that the proposed algorithms with nonconvex regularizers can achieve good reconstruction performance.\n
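The building block of such proximal splitting methods is the proximity operator of each regularizer. For an ℓ1 term centered at a discrete candidate value c, the prox is the familiar soft-thresholding shifted to c, which snaps estimates toward the constellation point. A sketch of that single ingredient (the constellation values and threshold below are illustrative, and this is not the paper's full ADMM/PDS iteration):

```python
def soft_threshold(v, t):
    """Prox of t*|x|: shrink v toward 0 by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def prox_shifted_l1(v, c, t):
    """Prox of t*|x - c|: shrink v toward the discrete candidate c."""
    return c + soft_threshold(v - c, t)

# Values near a constellation point are snapped onto it:
print(prox_shifted_l1(0.95, 1.0, 0.1))   # 1.0
print(prox_shifted_l1(-0.7, -1.0, 0.1))  # approx -0.8
```

In the SSR formulation one such term appears per candidate value; ADMM or PDS then alternates these proximal steps with a data-fidelity update.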
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sounding Industry: Challenges and Datasets for Industrial Sound Analysis.\n \n \n \n \n\n\n \n Grollmisch, S.; Abeβer, J.; Liebetrau, J.; and Lukashevich, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SoundingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902941,\n  author = {S. Grollmisch and J. Abeβer and J. Liebetrau and H. Lukashevich},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Sounding Industry: Challenges and Datasets for Industrial Sound Analysis},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The ongoing process of automation in production lines increases the requirements for robust and reliable quality control. Acoustic quality control can play a major part in advanced quality control systems since several types of faults such as changes in machine conditions can be heard by experienced machine operators but can hardly be detected otherwise. To this day, acoustic detection systems using airborne sounds struggle due to the highly complex noise scenarios inside factories. Machine learning systems are theoretically able to cope with these conditions. However, recent advancements in the field of Industrial Sound Analysis (ISA) are sparse compared to related research fields like Music Information Retrieval (MIR) or Acoustic Event Detection (AED). One main reason is the lack of freely available datasets since most of the data is very sensitive for companies. Therefore, three novel datasets for ISA with different application fields were recorded and published along with this paper: detection of the operational state of an electric engine, detection of the surface of rolling metal balls, and detection of different bulk materials. For each dataset, neural network based baseline systems were evaluated. The results show that such systems obtain high classification accuracies over all datasets in many of the subtasks which demonstrates the feasibility of audio-based analysis of industrial analysis scenarios. However, the baseline systems remain highly sensitive to changes in the recording setup, which leaves a lot of room for improvement. 
The main goal of this paper is to stimulate further research in the field of ISA.},\n  keywords = {acoustic signal detection;acoustic signal processing;audio signal processing;factory automation;learning (artificial intelligence);neural nets;production engineering computing;quality control;signal classification;ISA;freely available datasets;neural network based baseline systems;audio-based analysis;sounding industry;Industrial Sound Analysis;production lines;robust quality control;Acoustic quality control;advanced quality control systems;machine conditions;experienced machine operators;acoustic detection systems;airborne sounds struggle;highly complex noise scenarios;machine learning systems;Engines;Metals;Microphones;Electron tubes;Noise measurement;Acoustics;Task analysis;industrial sound analysis;machine learning;audio;signal processing;deep learning;neural networks;datasets},\n  doi = {10.23919/EUSIPCO.2019.8902941},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570526697.pdf},\n}\n\n
\n
\n\n\n
\n The ongoing process of automation in production lines increases the requirements for robust and reliable quality control. Acoustic quality control can play a major part in advanced quality control systems since several types of faults, such as changes in machine conditions, can be heard by experienced machine operators but can hardly be detected otherwise. To this day, acoustic detection systems using airborne sound struggle due to the highly complex noise scenarios inside factories. Machine learning systems are theoretically able to cope with these conditions. However, recent advancements in the field of Industrial Sound Analysis (ISA) are sparse compared to related research fields like Music Information Retrieval (MIR) or Acoustic Event Detection (AED). One main reason is the lack of freely available datasets, since most of the data is very sensitive for companies. Therefore, three novel datasets for ISA with different application fields were recorded and published along with this paper: detection of the operational state of an electric engine, detection of the surface of rolling metal balls, and detection of different bulk materials. For each dataset, neural network based baseline systems were evaluated. The results show that such systems obtain high classification accuracies over all datasets in many of the subtasks, which demonstrates the feasibility of audio-based analysis of industrial scenarios. However, the baseline systems remain highly sensitive to changes in the recording setup, which leaves a lot of room for improvement. The main goal of this paper is to stimulate further research in the field of ISA.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reduced Complexity Maximum Likelihood Detector for DFT-s-SEFDM Systems.\n \n \n \n \n\n\n \n Wu, T.; and Grammenos, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ReducedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902942,\n  author = {T. Wu and R. Grammenos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Reduced Complexity Maximum Likelihood Detector for DFT-s-SEFDM Systems},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we report on the design of a Complexity-Reduced Maximum Likelihood (CRML) detector for DFT-spread Spectrally Efficient Frequency Division Multiplexing (DFT-s-SEFDM) systems. DFT-s-SEFDM systems are similar to DFT-spread Orthogonal Frequency Division Multiplexing (DFT-s-OFDM) systems, yet offer improved spectral efficiency. Simulation results demonstrate that the CRML detector can achieve the same bit error rate (BER) performance as the ML detector in DFT-s-SEFDM systems at reduced computational complexity. Specifically, compared to a conventional ML detector, it is shown that CRML can decrease the search region by up to 2M times where M denotes the constellation cardinality. Depending on parameter configuration, CRML can offer up to two orders of magnitude improvement in execution runtime performance. 
CRML is best-suited to applications with small system sizes, for example, in narrowband Internet of Things (NB-IoT) networks.},\n  keywords = {communication complexity;discrete Fourier transforms;error statistics;frequency division multiplexing;maximum likelihood detection;DFT-s-SEFDM systems;DFT-s-OFDM systems;CRML detector;reduced computational complexity;ML detector;DFT-spread orthogonal frequency division multiplexing systems;DFT-spread spectrally efficient frequency division multiplexing systems;complexity-reduced maximum likelihood detector;bit error rate performance;BER performance;narrowband Internet of Things networks;NB-IoT networks;Detectors;Peak to average power ratio;Discrete Fourier transforms;Complexity theory;Bit error rate;Modulation;SEFDM;DFT-s-SEFDM;BER;maximum-likelihood (ML);reduced complexity detector;NB-IoT},\n  doi = {10.23919/EUSIPCO.2019.8902942},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570524225.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we report on the design of a Complexity-Reduced Maximum Likelihood (CRML) detector for DFT-spread Spectrally Efficient Frequency Division Multiplexing (DFT-s-SEFDM) systems. DFT-s-SEFDM systems are similar to DFT-spread Orthogonal Frequency Division Multiplexing (DFT-s-OFDM) systems, yet offer improved spectral efficiency. Simulation results demonstrate that the CRML detector can achieve the same bit error rate (BER) performance as the ML detector in DFT-s-SEFDM systems at reduced computational complexity. Specifically, compared to a conventional ML detector, it is shown that CRML can decrease the search region by up to 2M times where M denotes the constellation cardinality. Depending on parameter configuration, CRML can offer up to two orders of magnitude improvement in execution runtime performance. CRML is best-suited to applications with small system sizes, for example, in narrowband Internet of Things (NB-IoT) networks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Advancing Speech Recognition With No Speech Or With Noisy Speech.\n \n \n \n \n\n\n \n Krishna, G.; Tran, C.; Carnahan, M.; and Tewfik, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AdvancingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902943,\n  author = {G. Krishna and C. Tran and M. Carnahan and A. Tewfik},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Advancing Speech Recognition With No Speech Or With Noisy Speech},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper we demonstrate end to end continuous speech recognition (CSR) using electroencephalography (EEG) signals with no speech signal as input. An attention model based automatic speech recognition (ASR) and connectionist temporal classification (CTC) based ASR systems were implemented for performing recognition. We further demonstrate CSR for noisy speech by fusing with EEG features.},\n  keywords = {electroencephalography;medical signal processing;signal classification;speech recognition;speech signal;connectionist temporal classification;CSR;noisy speech;continuous speech recognition;electroencephalography signals;automatic speech recognition;ASR systems;EEG features;Electroencephalography;Brain modeling;Speech recognition;Feature extraction;Decoding;Acoustics;Training;electroencephalograpgy (EEG);speech recognition;deep learning;CTC;attention;technology accessibility},\n  doi = {10.23919/EUSIPCO.2019.8902943},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532784.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we demonstrate end to end continuous speech recognition (CSR) using electroencephalography (EEG) signals with no speech signal as input. An attention model based automatic speech recognition (ASR) and connectionist temporal classification (CTC) based ASR systems were implemented for performing recognition. We further demonstrate CSR for noisy speech by fusing with EEG features.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Non-Centralized Navigation for Source Localization by Cooperative UAVs.\n \n \n \n \n\n\n \n Guerra, A.; Dardari, D.; and Djurić, P. M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Non-CentralizedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902944,\n  author = {A. Guerra and D. Dardari and P. M. Djurić},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Non-Centralized Navigation for Source Localization by Cooperative UAVs},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose a distributed solution to the navigation of a population of unmanned aerial vehicles (UAVs) to best localize a static source. The network is considered heterogeneous with UAVs equipped with received signal strength (RSS) sensors from which it is possible to estimate the distance from the source and/or the direction of arrival through adhoc rotations. This diversity in gathering and processing RSS measurements mitigates the loss of localization accuracy due to the adoption of low-complexity sensors. The UAVs plan their trajectories on-the-fly and in a distributed fashion. The collected data are disseminated through the network via multi-hops, therefore being subject to latency. Since not all the paths are equal in terms of information gathering rewards, the motion planning is formulated as a minimization of the uncertainty of the source position under UAV kinematic and anti-collision constraints and performed by 3D non-linear programming. 
The proposed analysis takes into account non-line-of-sight (NLOS) channel conditions as well as measurement age caused by the latency constraints in communication.},\n  keywords = {autonomous aerial vehicles;nonlinear programming;path planning;RSSI;source position;UAV kinematic;3D nonlinear programming;noncentralized navigation;source localization;UAV;unmanned aerial vehicles;static source;received signal strength sensors;direction of arrival;nonline-of-sight channel conditions;information gathering rewards;low-complexity sensors;localization accuracy;gathering processing RSS measurements;Navigation;Sensors;Distance measurement;Trajectory;Drones;Europe;Unmanned aerial vehicles;RSS localization;UAV navigation;Information gathering},\n  doi = {10.23919/EUSIPCO.2019.8902944},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533708.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a distributed solution to the navigation of a population of unmanned aerial vehicles (UAVs) to best localize a static source. The network is considered heterogeneous, with UAVs equipped with received signal strength (RSS) sensors from which it is possible to estimate the distance from the source and/or the direction of arrival through ad-hoc rotations. This diversity in gathering and processing RSS measurements mitigates the loss of localization accuracy due to the adoption of low-complexity sensors. The UAVs plan their trajectories on-the-fly and in a distributed fashion. The collected data are disseminated through the network via multi-hops, therefore being subject to latency. Since not all the paths are equal in terms of information gathering rewards, the motion planning is formulated as a minimization of the uncertainty of the source position under UAV kinematic and anti-collision constraints and performed by 3D non-linear programming. The proposed analysis takes into account non-line-of-sight (NLOS) channel conditions as well as measurement age caused by the latency constraints in communication.\n
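Ranging from RSS measurements, as used by these UAVs, is commonly based on inverting the log-distance path-loss model rss = p0 − 10·n·log10(d/d0). A minimal sketch of that inversion; the reference power p0, reference distance d0 and path-loss exponent n below are illustrative defaults, not parameters from the paper:

```python
def distance_from_rss(rss_dbm, p0_dbm=-40.0, d0_m=1.0, n=2.0):
    """Estimate range d (meters) by inverting the log-distance
    path-loss model: rss = p0 - 10*n*log10(d / d0).

    p0_dbm is the RSS at reference distance d0_m and n is the
    path-loss exponent (2 in free space, larger in NLOS clutter).
    """
    return d0_m * 10 ** ((p0_dbm - rss_dbm) / (10 * n))

print(distance_from_rss(-60.0))  # 10.0 m: 20 dB below p0 with n = 2
```

NLOS conditions, as considered in the paper, effectively bias p0 and n, which is one reason the authors plan trajectories to reduce the resulting position uncertainty.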
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n DNN Speaker Embeddings Using Autoencoder Pre-Training.\n \n \n \n \n\n\n \n Khan, U.; and Hernando, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DNNPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902945,\n  author = {U. Khan and J. Hernando},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {DNN Speaker Embeddings Using Autoencoder Pre-Training},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Over the last years, i-vectors have been the state-of-the-art approach in speaker recognition. Recent improvements in deep learning have increased the discriminative quality of i-vectors. However, deep learning architectures require a large amount of labeled background data which is difficult in practice. The aim of this paper is to propose an alternative scheme in order to reduce the need of labeled data. We propose the use of autoencoder pre-training in a speaker verification task. First, we train an autoencoder in an unsupervised way, using a large amount of unlabeled background data. Then, we train a Deep Neural Network (DNN) initialized with the parameters of the pre-trained autoencoder. The DNN training is carried out in a supervised way using relatively small labeled background data. In the testing phase, we extract speaker embeddings as the output of an intermediate layer of the DNN. The training and evaluation were performed on VoxCeleb-2 and VoxCeleb1 databases, respectively. 
The experimental results have shown that by initializing DNN with the parameters of the pre-trained autoencoder, we have achieved a relative improvement of 21%, in terms of Equal Error Rate (EER), over the baseline i-vector/PLDA system.},\n  keywords = {learning (artificial intelligence);natural language processing;neural nets;speaker recognition;DNN speaker embeddings;autoencoder pre-training;speaker recognition;deep learning architectures;labeled background data;speaker verification task;unlabeled background data;deep neural network;VoxCeleb-2 databases;VoxCeleb1 databases;equal error rate;EER;baseline i-vector system;PLDA system;Training;Databases;Deep learning;Speaker recognition;Neurons;Europe;Signal processing;deep learning;autoencoders;i-vectors;speaker verification},\n  doi = {10.23919/EUSIPCO.2019.8902945},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533750.pdf},\n}\n\n
\n
\n\n\n
\n In recent years, i-vectors have been the state-of-the-art approach in speaker recognition. Recent improvements in deep learning have increased the discriminative quality of i-vectors. However, deep learning architectures require a large amount of labeled background data, which is difficult to obtain in practice. The aim of this paper is to propose an alternative scheme in order to reduce the need for labeled data. We propose the use of autoencoder pre-training in a speaker verification task. First, we train an autoencoder in an unsupervised way, using a large amount of unlabeled background data. Then, we train a Deep Neural Network (DNN) initialized with the parameters of the pre-trained autoencoder. The DNN training is carried out in a supervised way using relatively small labeled background data. In the testing phase, we extract speaker embeddings as the output of an intermediate layer of the DNN. The training and evaluation were performed on the VoxCeleb2 and VoxCeleb1 databases, respectively. The experimental results have shown that by initializing the DNN with the parameters of the pre-trained autoencoder, we achieve a relative improvement of 21%, in terms of Equal Error Rate (EER), over the baseline i-vector/PLDA system.\n
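The pre-train-then-transfer pattern described here can be reduced to its essence: fit an autoencoder on unlabeled data, then copy its weights into the supervised model as the starting point. The deliberately tiny sketch below uses a one-parameter linear autoencoder trained by gradient descent; everything (data, learning rate, architecture) is illustrative, not the paper's DNN.

```python
def train_autoencoder(xs, lr=0.01, epochs=200):
    """Tiny 1-D linear autoencoder x_hat = w * x, fit on unlabeled data.

    Minimizes the reconstruction loss (w*x - x)^2 by gradient descent;
    for this model the optimum is w = 1 (perfect reconstruction).
    """
    w = 0.0
    for _ in range(epochs):
        for x in xs:
            err = w * x - x          # reconstruction error
            w -= lr * 2 * err * x    # gradient step on (w*x - x)^2
    return w

unlabeled = [0.5, -1.2, 0.8, 2.0, -0.3]   # stands in for background data
w_pre = train_autoencoder(unlabeled)

# The supervised model starts from the pre-trained weight, not random init:
encoder_weight = w_pre
print(round(encoder_weight, 3))  # 1.0
```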
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Regression to a Linear Lower Bound With Outliers: An Exponentially Modified Gaussian Noise Model.\n \n \n \n \n\n\n \n Gori, J.; and Rioul, O.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RegressionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902946,\n  author = {J. Gori and O. Rioul},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Regression to a Linear Lower Bound With Outliers: An Exponentially Modified Gaussian Noise Model},\n  year = {2019},\n  pages = {1-5},\n  abstract = {A regression method to estimate a linear bound in the presence of outliers is discussed. An exponentially-modified Gaussian (EMG) noise model is proposed, based on a maximum entropy argument. The resulting “EMG regression” method is shown to encompass the classical linear regression (with Gaussian noise) and a minimum regression (with exponential noise) as special cases. Simulations are performed to assess the consistency of the regression as well as its resilience to model mismatch. We conclude with an example taken from a real-world study of human performance in rapid aiming with application to human computer interaction.},\n  keywords = {Gaussian noise;maximum entropy methods;regression analysis;minimum regression;exponential noise;exponentially modified Gaussian noise model;maximum entropy argument;classical linear regression;EMG regression method;Electromyography;Linear regression;Standards;Maximum likelihood estimation;Gaussian noise;Europe},\n  doi = {10.23919/EUSIPCO.2019.8902946},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533641.pdf},\n}\n\n
\n
\n\n\n
\n A regression method to estimate a linear bound in the presence of outliers is discussed. An exponentially-modified Gaussian (EMG) noise model is proposed, based on a maximum entropy argument. The resulting “EMG regression” method is shown to encompass the classical linear regression (with Gaussian noise) and a minimum regression (with exponential noise) as special cases. Simulations are performed to assess the consistency of the regression as well as its resilience to model mismatch. We conclude with an example taken from a real-world study of human performance in rapid aiming, with application to human-computer interaction.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On Acoustic Modeling for Broadband Beamforming.\n \n \n \n \n\n\n \n Chhetri, A.; Mansour, M.; Kim, W.; and Pan, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902947,\n  author = {A. Chhetri and M. Mansour and W. Kim and G. Pan},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {On Acoustic Modeling for Broadband Beamforming},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this work, we describe limitations of the free-field propagation model for designing broadband beamformers for microphone arrays on a rigid surface. Towards this goal, we describe a general frame-work for quantifying the microphone array performance in a general wave-field by directly solving the acoustic wave equation. The model utilizes Finite-Element-Method (FEM) for evaluating the response of the microphone array surface to background 3D planar and spherical waves. The effectiveness of the framework is established by designing and evaluating a representative broadband beamformer under realistic acoustic conditions.},\n  keywords = {acoustic signal processing;array signal processing;finite element analysis;microphone arrays;acoustic modeling;broadband beamforming;free-field propagation model;rigid surface;microphone array performance;acoustic wave equation;Finite-Element-Method;microphone array surface;broadband beamformer;background 3D planar waves;background 3D spherical waves;Mathematical model;Microphone arrays;Acoustics;Array signal processing;Finite element analysis;Broadband communication;Beamforming;Microphone Arrays;Acoustics;FEM;Wave Equation.},\n  doi = {10.23919/EUSIPCO.2019.8902947},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529438.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we describe limitations of the free-field propagation model for designing broadband beamformers for microphone arrays on a rigid surface. Towards this goal, we describe a general framework for quantifying the microphone array performance in a general wave-field by directly solving the acoustic wave equation. The model utilizes the Finite Element Method (FEM) to evaluate the response of the microphone array surface to background 3D planar and spherical waves. The effectiveness of the framework is established by designing and evaluating a representative broadband beamformer under realistic acoustic conditions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Structure Learning via Hadamard Product of Correlation and Partial Correlation Matrices.\n \n \n \n \n\n\n \n Ashurbekova, K.; Achard, S.; and Forbes, F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"StructurePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902948,\n  author = {K. Ashurbekova and S. Achard and F. Forbes},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Structure Learning via Hadamard Product of Correlation and Partial Correlation Matrices},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Structure learning is an active topic nowadays in different application areas, i.e. genetics, neuroscience. Classical conditional independences or marginal independences may not be sufficient to express complex relationships. This paper is introducing a new structure learning procedure where an edge in the graph corresponds to a non zero value of both correlation and partial correlation. Based on this new paradigm, we define an estimator and derive its theoretical properties. The asymptotic convergence of the proposed graph estimator and its rate are derived. Illustrations on a synthetic example and application to brain connectivity are displayed.},\n  keywords = {biology computing;brain;graph theory;learning (artificial intelligence);matrix algebra;nonzero value;graph estimator;neuroscience;classical conditional independences;marginal independences;structure learning procedure;brain connectivity;Hadamard product;partial correlation matrices;Correlation;Sparse matrices;Covariance matrices;Europe;Matrix decomposition;Error correction;structure learning;conditional independence;marginal independence;Hadamard product;soft-thresholding;brain connectivity},\n  doi = {10.23919/EUSIPCO.2019.8902948},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533370.pdf},\n}\n\n
\n
\n\n\n
\n Structure learning is an active topic in various application areas, e.g., genetics and neuroscience. Classical conditional independences or marginal independences may not be sufficient to express complex relationships. This paper introduces a new structure learning procedure in which an edge in the graph corresponds to a nonzero value of both the correlation and the partial correlation. Based on this new paradigm, we define an estimator and derive its theoretical properties. The asymptotic convergence of the proposed graph estimator and its rate are derived. Illustrations on a synthetic example and an application to brain connectivity are presented.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n NLOS Classification Based on RSS and Ranging Statistics Obtained from Low-Cost UWB Devices.\n \n \n \n \n\n\n \n Barral, V.; Escudero, C. J.; and García-Naya, J. A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"NLOSPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902949,\n  author = {V. Barral and C. J. Escudero and J. A. García-Naya},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {NLOS Classification Based on RSS and Ranging Statistics Obtained from Low-Cost UWB Devices},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Ultra-wideband (UWB) devices have been largely considered for indoor location systems due to their high accuracy. However, as in other wireless systems, such accuracy is significantly degraded under non-line-of-sight (NLOS) propagation conditions. Therefore, the identification of NLOS conditions is essential to mitigate inaccuracies due to NLOS propagation. Nonetheless, most of the techniques considered to identify NLOS situations are based on the study of the channel impulse response (CIR), which is not practical and even becomes unfeasible when employing low-cost UWB hardware. This is precisely the main motivation of this work, to introduce a classification system based on the statistics of both the received signal strength (RSS) and range available from low-cost UWB devices. 
We analyze the effect of considering different statistic sets of both the RSS and range as features to feed a support vector machine (SVM) classifier, which is experimentally evaluated by means of measurements carried out in a real scenario where both line-of-sight (LOS) and NLOS conditions are present.},\n  keywords = {statistical analysis;support vector machines;telecommunication computing;ultra wideband communication;wireless channels;LOS;SVM classifier;support vector machine classifier;received signal strength;CIR;channel impulse response;RSS;ultrawideband hardware devices;UWB hardware devices;statistic setting;NLOS classification system;NLOS propagation conditions;nonline-of-sight propagation conditions;wireless systems;indoor location systems;Support vector machines;Hardware;Receivers;Estimation;Distance measurement;Machine learning algorithms;Monitoring;Ultra-wideband;NLOS Classification;RSS;ranging;SVM},\n  doi = {10.23919/EUSIPCO.2019.8902949},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532534.pdf},\n}\n\n
\n
\n\n\n
\n Ultra-wideband (UWB) devices have been widely considered for indoor location systems due to their high accuracy. However, as in other wireless systems, such accuracy is significantly degraded under non-line-of-sight (NLOS) propagation conditions. Therefore, the identification of NLOS conditions is essential to mitigate inaccuracies due to NLOS propagation. Nonetheless, most of the techniques considered to identify NLOS situations are based on the study of the channel impulse response (CIR), which is not practical and even becomes unfeasible when employing low-cost UWB hardware. This is precisely the main motivation of this work: to introduce a classification system based on the statistics of both the received signal strength (RSS) and the range available from low-cost UWB devices. We analyze the effect of considering different sets of statistics of both the RSS and the range as features to feed a support vector machine (SVM) classifier, which is experimentally evaluated by means of measurements carried out in a real scenario where both line-of-sight (LOS) and NLOS conditions are present.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Joint unmixing-deconvolution algorithms for hyperspectral images.\n \n \n \n \n\n\n \n Song, Y.; Djermoune, E. -.; Brie, D.; and Richard, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902950,\n  author = {Y. Song and E. -H. Djermoune and D. Brie and C. Richard},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Joint unmixing-deconvolution algorithms for hyperspectral images},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper combines supervised linear unmixing and deconvolution problems to increase the resolution of the abundance maps for industrial imaging systems. The joint unmixing-deconvolution (JUD) algorithm is introduced based on the Tikhonov regularization criterion for offline processing. In order to meet the needs of industrial applications, the proposed JUD algorithm is then extended for online processing by using a block Tikhonov criterion. The performance of JUD is increased by adding a non-negativity constraint which is implemented in a fast way using the quadratic penalty method and fast Fourier transform. The proposed algorithm is then assessed using both simulated and real hyperspectral images.},\n  keywords = {deconvolution;fast Fourier transforms;feature extraction;hyperspectral imaging;image reconstruction;joint unmixing-deconvolution algorithm;hyperspectral images;supervised linear unmixing;deconvolution problems;industrial imaging systems;Tikhonov regularization criterion;JUD algorithm;block Tikhonov criterion;Hyperspectral imaging;Deconvolution;Convolution;Signal processing algorithms;Two dimensional displays;Europe;Hyperspectral image unmixing;hyperspectral image deconvolution;non-negative Tikhonov regularization},\n  doi = {10.23919/EUSIPCO.2019.8902950},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533710.pdf},\n}\n\n
\n
\n\n\n
\n This paper combines supervised linear unmixing and deconvolution problems to increase the resolution of the abundance maps for industrial imaging systems. The joint unmixing-deconvolution (JUD) algorithm is introduced based on the Tikhonov regularization criterion for offline processing. In order to meet the needs of industrial applications, the proposed JUD algorithm is then extended for online processing by using a block Tikhonov criterion. The performance of JUD is increased by adding a non-negativity constraint, which is implemented efficiently using the quadratic penalty method and the fast Fourier transform. The proposed algorithm is then assessed using both simulated and real hyperspectral images.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n MWF-based speech dereverberation with a local microphone array and an external microphone.\n \n \n \n \n\n\n \n Ali, R.; van Waterschoot , T.; and Moonen, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"MWF-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902951,\n  author = {R. Ali and T. {van Waterschoot} and M. Moonen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {MWF-based speech dereverberation with a local microphone array and an external microphone},\n  year = {2019},\n  pages = {1-5},\n  abstract = {A method for estimating the relevant quantities in a multi-channel Wiener filter (MWF) for speech dereverberation is proposed for a microphone system consisting of a local microphone array (LMA) and a single external microphone (XM). Typically these MWF quantities can be estimated by considering pre-whitened correlation matrices with a dimension equal to the number of microphones in the system. By following another procedure involving a pre-whitening-transformation operation, it will be demonstrated that when a priori knowledge of the relative transfer function (RTF) vector pertaining to only the LMA is available and when the reverberant component of the signals received by the LMA is uncorrelated with that of the XM, the MWF quantities may be alternatively estimated from a 2 × 2 matrix. 
Simulations confirm that using such an estimate results in a similar performance to that obtained by using the higher-dimensional correlation matrix.},\n  keywords = {correlation methods;matrix algebra;microphone arrays;microphones;reverberation;speech processing;transfer functions;Wiener filters;MWF-based speech dereverberation;local microphone array;relevant quantities;multichannel Wiener filter;microphone system;LMA;single external microphone;XM;MWF quantities;pre-whitened correlation matrices;pre-whitening-transformation operation;relative transfer function vector;Matrix decomposition;Correlation;Europe;Signal processing;Microphone arrays;Eigenvalues and eigenfunctions;Multichannel Wiener Filter;Speech Dereverberation;Microphone Array;External Microphone},\n  doi = {10.23919/EUSIPCO.2019.8902951},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531235.pdf},\n}\n\n
\n
\n\n\n
\n A method for estimating the relevant quantities in a multi-channel Wiener filter (MWF) for speech dereverberation is proposed for a microphone system consisting of a local microphone array (LMA) and a single external microphone (XM). Typically these MWF quantities can be estimated by considering pre-whitened correlation matrices with a dimension equal to the number of microphones in the system. By following another procedure involving a pre-whitening-transformation operation, it will be demonstrated that when a priori knowledge of the relative transfer function (RTF) vector pertaining to only the LMA is available and when the reverberant component of the signals received by the LMA is uncorrelated with that of the XM, the MWF quantities may be alternatively estimated from a 2 × 2 matrix. Simulations confirm that using such an estimate results in a similar performance to that obtained by using the higher-dimensional correlation matrix.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Classification of Brainwaves Using Convolutional Neural Network.\n \n \n \n \n\n\n \n Joshi, S. R.; Headley, D. B.; Ho, K. C.; Paré, D.; and Nair, S. S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ClassificationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902952,\n  author = {S. R. Joshi and D. B. Headley and K. C. Ho and D. Paré and S. S. Nair},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Classification of Brainwaves Using Convolutional Neural Network},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Classification of brainwaves in recordings is of considerable interest to neuroscience and medical communities. Classification techniques used presently depend on the extraction of low-level features from the recordings, which in turn affects the classification performance. To alleviate this problem, this paper proposes an end-to-end approach using Convolutional Neural Network (CNN) which has been shown to detect complex patterns in a signal by exploiting its spatiotemporal nature. The present study uses time and frequency axes for the classification using synthesized Local Field Potential (LFP) data. The results are analyzed and compared with the FFT technique. In all the results, the CNN outperforms the FFT by a significant margin especially when the noise level is high. This study also sheds light on certain signal characteristics affecting network performance.},\n  keywords = {convolutional neural nets;fast Fourier transforms;medical signal processing;signal classification;medical communities;classification techniques;low-level features;classification performance;convolutional neural network;CNN;complex patterns;frequency axes;FFT technique;neuroscience;brainwave classification;synthesized local field potential data;signal characteristics;Signal to noise ratio;Feature extraction;Convolution;Filter banks;Training;Brain modeling;Convolutional Neural Network (CNN);Brainwaves Classification;Fourier Transform;FFT;Deep Learning},\n  doi = {10.23919/EUSIPCO.2019.8902952},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529424.pdf},\n}\n\n
\n
\n\n\n
\n Classification of brainwaves in recordings is of considerable interest to the neuroscience and medical communities. Current classification techniques depend on the extraction of low-level features from the recordings, which in turn affects the classification performance. To alleviate this problem, this paper proposes an end-to-end approach using a Convolutional Neural Network (CNN), which has been shown to detect complex patterns in a signal by exploiting its spatiotemporal nature. The present study uses the time and frequency axes for classification of synthesized Local Field Potential (LFP) data. The results are analyzed and compared with the FFT technique. In all the results, the CNN outperforms the FFT by a significant margin, especially when the noise level is high. This study also sheds light on certain signal characteristics affecting network performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Enhanced Diffusion Learning Over Networks.\n \n \n \n \n\n\n \n Merched, R.; Vlaski, S.; and Sayed, A. H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EnhancedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902953,\n  author = {R. Merched and S. Vlaski and A. H. Sayed},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Enhanced Diffusion Learning Over Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This work develops a variation of diffusion learning by incorporating an adaptive construction for the combination weights through local fusion steps. This leads to an implementation with enhanced convergence rate and mean-square-error performance while maintaining the same level of complexity as standard implementations. The approach is based on formulating optimal or close-to-optimal learning and fusion steps using a proximity function rationale within neighborhoods. The first version of the algorithm employs exact fusion in the least-squares sense using inverses of uncertainty matrices. The second version replaces these matrices by diagonal approximations with reduced complexity. The result is an LMS-complexity scheme with improved performance for distributed learning over networks.},\n  keywords = {convergence;learning (artificial intelligence);least mean squares methods;matrix algebra;mean square error methods;LMS-complexity scheme;distributed learning;diffusion learning;adaptive construction;local fusion steps;enhanced convergence rate;mean-square-error performance;close-to-optimal learning;proximity function;Signal processing algorithms;Uncertainty;Europe;Signal processing;Covariance matrices;Time measurement;Convergence;diffusion networks;fusion;least-squares;adaptation;combination weights},\n  doi = {10.23919/EUSIPCO.2019.8902953},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530382.pdf},\n}\n\n
\n
\n\n\n
\n This work develops a variation of diffusion learning by incorporating an adaptive construction for the combination weights through local fusion steps. This leads to an implementation with enhanced convergence rate and mean-square-error performance while maintaining the same level of complexity as standard implementations. The approach is based on formulating optimal or close-to-optimal learning and fusion steps using a proximity function rationale within neighborhoods. The first version of the algorithm employs exact fusion in the least-squares sense using inverses of uncertainty matrices. The second version replaces these matrices by diagonal approximations with reduced complexity. The result is an LMS-complexity scheme with improved performance for distributed learning over networks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Identifying Stable Components of Matrix/Tensor Factorizations via Low-Rank Approximation of Inter-Factorization Similarity.\n \n \n \n\n\n \n Eyndhoven, S. V.; Vervliet, N.; Lathauwer, L. D.; and Huffel, S. V.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902954,\n  author = {S. V. Eyndhoven and N. Vervliet and L. D. Lathauwer and S. V. Huffel},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Identifying Stable Components of Matrix /Tensor Factorizations via Low-Rank Approximation of Inter-Factorization Similarity},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Many interesting matrix decompositions/factorizations, and especially many tensor decompositions, have to be solved by non-convex optimization-based algorithms, that may converge to local optima. Hence, when interpretability of the components is a requirement, practitioners have to compute the decomposition (e.g. CPD) many times, with different initializations, to verify whether the components are reproducible over repetitions of the optimization. However, it is non-trivial to assess such reliability or stability when multiple local optima are encountered. We propose an efficient algorithm that clusters the different repetitions of the decomposition according to the local optimum that they belong to, offering a diagnostic tool to practitioners. Our algorithm employs a graph-based representation of the decomposition, in which every repetition corresponds to a node, and similarities between components are encoded as edges. Clustering is then performed by exploiting a property known as cycle consistency, leading to a low-rank approximation of the graph. 
We demonstrate the applicability of our method on realistic electroencephalographic (EEG) data and synthetic data.},\n  keywords = {approximation theory;electroencephalography;graph theory;matrix decomposition;optimisation;tensors;realistic electroencephalographic data;non-convex optimization-based algorithms;graph-based representation;multiple local optima;reliability;tensor decompositions;inter-factorization similarity;identifying stable components;low-rank approximation;Matrix decomposition;Tensors;Optimization;Signal processing algorithms;Silicon;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2019.8902954},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Many interesting matrix decompositions/factorizations, and especially many tensor decompositions, have to be solved by non-convex optimization-based algorithms that may converge to local optima. Hence, when interpretability of the components is a requirement, practitioners have to compute the decomposition (e.g., the CPD) many times, with different initializations, to verify whether the components are reproducible over repetitions of the optimization. However, it is non-trivial to assess such reliability or stability when multiple local optima are encountered. We propose an efficient algorithm that clusters the different repetitions of the decomposition according to the local optimum that they belong to, offering a diagnostic tool to practitioners. Our algorithm employs a graph-based representation of the decomposition, in which every repetition corresponds to a node, and similarities between components are encoded as edges. Clustering is then performed by exploiting a property known as cycle consistency, leading to a low-rank approximation of the graph. We demonstrate the applicability of our method on realistic electroencephalographic (EEG) data and synthetic data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimum Trajectory Planning for Robotic Data Ferries in Delay Tolerant Wireless Sensor Networks.\n \n \n \n \n\n\n \n Nurellari, E.; Licea, D. B.; and Ghogho, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OptimumPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902955,\n  author = {E. Nurellari and D. B. Licea and M. Ghogho},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Optimum Trajectory Planning for Robotic Data Ferries in Delay Tolerant Wireless Sensor Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We consider the issue of energy efficient data collection in the context of a mobile robot-aided delay tolerant wireless sensor network (DTSN). The latter is composed of static nodes (SNs), a fusion center (FC) and a mobile robot (MR), which acts as a data ferry in order to reduce energy consumption at the SNs, thereby increasing their lifetime. The considered wireless channel model accounts for both path loss and shadowing. We propose a method to optimise the trajectory of the MR so as to minimise the overall energy consumption of the DTSN, while controlling the latency of the end-to-end transmission and maintaining the number of bits in the SNs' buffers bounded. Simulation results show the effectiveness of the proposed solution in reducing the energy consumption of the DTSN.},\n  keywords = {energy conservation;mobile robots;path planning;telecommunication control;telecommunication power management;trajectory control;wireless channels;wireless sensor networks;energy efficient data collection;DTSN;static nodes;fusion center;energy consumption;wireless channel model;path loss;end-to-end transmission;optimum trajectory planning;robotic data ferries;delay tolerant wireless sensor networks;SN buffers;mobile robot-aided delay tolerant wireless sensor network;shadowing;Wireless sensor networks;Robot sensing systems;Energy consumption;Shadow mapping;Optimization;Buffer storage;Wireless sensor network;energy efficiency;mobile robot;delay tolerant networks},\n  doi = {10.23919/EUSIPCO.2019.8902955},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533942.pdf},\n}\n\n
\n
\n\n\n
\n We consider the issue of energy-efficient data collection in the context of a mobile robot-aided delay tolerant wireless sensor network (DTSN). The latter is composed of static nodes (SNs), a fusion center (FC) and a mobile robot (MR), which acts as a data ferry in order to reduce energy consumption at the SNs, thereby increasing their lifetime. The considered wireless channel model accounts for both path loss and shadowing. We propose a method to optimise the trajectory of the MR so as to minimise the overall energy consumption of the DTSN, while controlling the latency of the end-to-end transmission and keeping the number of bits in the SNs' buffers bounded. Simulation results show the effectiveness of the proposed solution in reducing the energy consumption of the DTSN.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Convolutional LSTM-based Long-Term Spectrum Prediction for Dynamic Spectrum Access.\n \n \n \n \n\n\n \n Shawel, B. S.; Woldegebreal, D. H.; and Pollin, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ConvolutionalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902956,\n  author = {B. S. Shawel and D. H. Woldegebreal and S. Pollin},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Convolutional LSTM-based Long-Term Spectrum Prediction for Dynamic Spectrum Access},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The concept of Dynamic Spectrum Access (DSA) with Cognitive Radio (CR) as a key enabler is considered as a promising solution to alleviate the inefficient use of the radio spectrum. Relying on the presumed knowledge of the spectrum occupancy from sensing, geo-location databases or prediction, DSA allows opportunistic users to share spectrum bands in a non-interfering manner when the bands are not in use by their respective incumbent owners. Several literatures have presented prediction algorithms in order to get meaningful data about future spectrum usage; however, most of them only exploit the spectrum data in time, space and/or frequency dimension(s) to provide a short term, i.e., single next step, prediction. In this work, we propose a novel approach with Convolutional Long Short-Term Memory (ConvLSTM) Deep Learning Neural Network for a long-term temporal prediction that is trained to learn joint spatial-spectral-temporal dependencies observed in spectrum usage. Real environment measurement data from Electrosense are used to evaluate the prediction accuracy of the proposed network for increasing future time steps and different spectrum channels. 
Prediction result for the next 180 minutes for UHF bands of 450-520 MHz is presented for a 4 km2 area in Spain indicating the prominent and stable prediction performance of ConvLSTM network.},\n  keywords = {cognitive radio;convolutional neural nets;learning (artificial intelligence);radio spectrum management;recurrent neural nets;telecommunication computing;prediction accuracy;spectrum channels;stable prediction performance;Dynamic Spectrum Access;DSA;Cognitive Radio;radio spectrum;spectrum occupancy;geo-location databases;spectrum bands;prediction algorithms;spectrum usage;spectrum data;long-term temporal prediction;convolutional long short-term memory deep learning neural network;convolutional LSTM-based long-term spectrum prediction;time 180.0 min;frequency 450.0 MHz to 520.0 MHz;Convolution;Time-frequency analysis;Predictive models;Power measurement;Weight measurement;Feature extraction;Europe;ConvLSTM Neural Network;Deep Learning Network;Long term Prediction;Spectrum Prediction;Dynamic Spectrum Access},\n  doi = {10.23919/EUSIPCO.2019.8902956},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533330.pdf},\n}\n\n
\n
\n\n\n
\n The concept of Dynamic Spectrum Access (DSA) with Cognitive Radio (CR) as a key enabler is considered a promising solution to alleviate the inefficient use of the radio spectrum. Relying on the presumed knowledge of the spectrum occupancy from sensing, geo-location databases or prediction, DSA allows opportunistic users to share spectrum bands in a non-interfering manner when the bands are not in use by their respective incumbent owners. Several studies have presented prediction algorithms in order to get meaningful data about future spectrum usage; however, most of them only exploit the spectrum data in time, space and/or frequency dimension(s) to provide a short-term, i.e., single next step, prediction. In this work, we propose a novel approach with a Convolutional Long Short-Term Memory (ConvLSTM) Deep Learning Neural Network for a long-term temporal prediction that is trained to learn joint spatial-spectral-temporal dependencies observed in spectrum usage. Real environment measurement data from Electrosense are used to evaluate the prediction accuracy of the proposed network for increasing future time steps and different spectrum channels. Prediction results for the next 180 minutes for UHF bands of 450-520 MHz are presented for a 4 km2 area in Spain, indicating the prominent and stable prediction performance of the ConvLSTM network.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Algebraic Framework for Digital Envelope Modulation.\n \n \n \n \n\n\n \n Bicaïs, S.; and Doré, J. -.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902957,\n  author = {S. Bicaïs and J. -B. Doré},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {An Algebraic Framework for Digital Envelope Modulation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Sub-THz communications based on coherent re-ceivers suffer from strong phase impairments issued by oscillators. Envelope detection, inherently robust to this impairment, is hence considered for the design of sub-THz systems. This paper proposes an algebraic framework for envelope modulation. In the first place, we introduce a Hilbert space to represent waveforms with non-negative real values. This space is defined by transport of structure of the usual signal-space L2. So, existing schemes developed for real-valued signals can be exploited upon envelope modulation. In the second place, it is shown that the proposed framework provides powerful tools to design new envelope modulation schemes. To do so, we present the transmission of an Inphase Quadrature signal upon an envelope modulation to prevent the impact of phase noise on communication performance. 
We also demonstrate that constraints on embedded analog-to-digital converters can be relaxed with the use of orthogonal non-negative waveforms.},\n  keywords = {algebra;Hilbert spaces;modulation;signal detection;envelope detection;algebraic framework;Hilbert space;nonnegative real values;real-valued signals;envelope modulation schemes;nonnegative waveforms;digital envelope modulation;coherent receivers;strong phase impairments;signal-space L2;sub-THz communications;inphase quadrature signal;phase noise;communication performance;embedded analog-to-digital converters;orthogonal nonnegative waveforms;Hilbert space;Envelope detectors;Receivers;Phase noise;Frequency modulation;Sub-THz communications;Amplitude modulation;Envelope detectors;Linear algebra;Hilbert space},\n  doi = {10.23919/EUSIPCO.2019.8902957},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530799.pdf},\n}\n\n
\n
\n\n\n
\n Sub-THz communications based on coherent receivers suffer from strong phase impairments introduced by oscillators. Envelope detection, inherently robust to this impairment, is hence considered for the design of sub-THz systems. This paper proposes an algebraic framework for envelope modulation. In the first place, we introduce a Hilbert space to represent waveforms with non-negative real values. This space is defined by transport of structure from the usual signal space L2, so existing schemes developed for real-valued signals can be exploited for envelope modulation. In the second place, it is shown that the proposed framework provides powerful tools to design new envelope modulation schemes. To do so, we present the transmission of an Inphase Quadrature signal upon an envelope modulation to prevent the impact of phase noise on communication performance. We also demonstrate that constraints on embedded analog-to-digital converters can be relaxed with the use of orthogonal non-negative waveforms.\n
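The transport-of-structure idea in this abstract can be illustrated in a few lines: pick any bijection phi from real values to positive values (here exp, purely an illustrative assumption, not the map constructed in the paper) and pull the vector-space operations of L2 back through it, so non-negative waveforms inherit a Hilbert-space structure:

```python
import numpy as np

phi = np.exp          # bijection R -> (0, inf); illustrative choice only
phi_inv = np.log

def t_add(u, v):
    """'Addition' of positive waveforms transported from L2."""
    return phi(phi_inv(u) + phi_inv(v))

def t_scale(a, u):
    """Transported scalar multiplication."""
    return phi(a * phi_inv(u))

def t_inner(u, v):
    """Transported inner product: <u,v> = <phi^-1(u), phi^-1(v)> in L2."""
    return float(np.dot(phi_inv(u), phi_inv(v)))

rng = np.random.default_rng(4)
x, y = rng.normal(size=8), rng.normal(size=8)   # ordinary real-valued signals
u, v = phi(x), phi(y)                           # positive "envelope" waveforms
```

With phi = exp, the transported addition is simply elementwise multiplication (exp(x + y) = exp(x)exp(y)), and all vector-space axioms hold by construction, which is exactly why L2-based schemes carry over.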
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Online Multiscale-Data Classification Based on Multikernel Adaptive Filtering with Application to Sentiment Analysis.\n \n \n \n \n\n\n \n Iwamoto, R.; and Yukawa, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OnlinePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902958,\n  author = {R. Iwamoto and M. Yukawa},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Online Multiscale-Data Classification Based on Multikernel Adaptive Filtering with Application to Sentiment Analysis},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We present an online method for multiscale data classification, using the multikernel adaptive filtering framework. The target application is Twitter sentiment analysis, which is a notoriously challenging task of natural language processing. This is because (i) each tweet is typically short, and (ii) domain-specific expressions tend to be used. The efficacy of the proposed multiscale online method is studied with dataset of Twitter. Simulation results show that the proposed approach achieves a higher F1 score than the other online-classification methods, and also outperforms the nonlinear support vector machine.},\n  keywords = {adaptive filters;natural language processing;pattern classification;social networking (online);support vector machines;online multiscale-data classification;sentiment analysis;multikernel adaptive filtering framework;natural language processing;domain-specific expressions;Dictionaries;Kernel;Twitter;Task analysis;Sentiment analysis;Approximation algorithms;reproducing kernel;sentiment analysis;online learning},\n  doi = {10.23919/EUSIPCO.2019.8902958},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533262.pdf},\n}\n\n
\n
\n\n\n
\n We present an online method for multiscale data classification, using the multikernel adaptive filtering framework. The target application is Twitter sentiment analysis, which is a notoriously challenging task in natural language processing. This is because (i) each tweet is typically short, and (ii) domain-specific expressions tend to be used. The efficacy of the proposed multiscale online method is studied with a Twitter dataset. Simulation results show that the proposed approach achieves a higher F1 score than other online classification methods, and also outperforms the nonlinear support vector machine.\n
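The multikernel adaptive filtering framework mentioned here can be sketched with a toy online learner: a dictionary of past inputs with one coefficient per (atom, kernel width) pair, updated by LMS-style steps. Everything below (kernel widths, step size, data) is a hypothetical illustration, not the authors' method or data:

```python
import numpy as np

def gauss(x, c, sigma):
    """Gaussian reproducing kernel."""
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

class MultikernelLMS:
    """Toy online learner: every input becomes a dictionary atom with
    one coefficient per kernel width, updated by an LMS step on the error."""
    def __init__(self, sigmas=(0.5, 2.0), step=0.3):
        self.sigmas, self.step = sigmas, step
        self.atoms, self.coefs = [], []

    def predict(self, x):
        return sum(a[k] * gauss(x, c, s)
                   for c, a in zip(self.atoms, self.coefs)
                   for k, s in enumerate(self.sigmas))

    def update(self, x, d):
        e = d - self.predict(x)                 # prediction error
        self.atoms.append(x)                    # naive dictionary growth
        self.coefs.append([self.step * e] * len(self.sigmas))
        return e

rng = np.random.default_rng(0)
model = MultikernelLMS()
errs = []
for _ in range(300):
    x = rng.normal(size=2)
    d = 1.0 if x[0] > 0 else -1.0               # nonlinear (sign) target
    errs.append(abs(model.update(x, d)))
early, late = float(np.mean(errs[:50])), float(np.mean(errs[-50:]))
```

Practical multikernel filters also sparsify the dictionary and adapt the mixing of the kernel scales; this sketch keeps only the core idea of combining several kernel widths in one online predictor.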
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Large-scale Canonical Polyadic Decomposition via Regular Tensor Sampling.\n \n \n \n \n\n\n \n Kanatsoulis, C. I.; and Sidiropoulos, N. D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Large-scalePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902959,\n  author = {C. I. Kanatsoulis and N. D. Sidiropoulos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Large-scale Canonical Polyadic Decomposition via Regular Tensor Sampling},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Tensor decomposition models have proven to be effective analysis tools in various applications, including signal processing, machine learning, and communications, to name a few. Canonical polyadic decomposition (CPD) is a very popular model, which decomposes a higher order tensor signal into a sum of rank 1 terms. However, when the tensor size gets big, computing the CPD becomes a lot more challenging. Previous works proposed using random (generalized) tensor sampling or compression to alleviate this challenge. In this work, we propose using a regular tensor sampling framework instead. We show that by appropriately selecting the sampling mechanism, we can simultaneously control memory and computational complexity, while guaranteeing identifiability at the same time. Numerical experiments with synthetic and real data showcase the effectiveness of our approach.},\n  keywords = {computational complexity;data compression;signal sampling;tensors;scale canonical polyadic decomposition;tensor decomposition models;effective analysis tools;CPD;higher order tensor signal;tensor size;compression;regular tensor sampling framework;sampling mechanism;computational complexity;Tensors;Signal processing;Signal processing algorithms;Computational modeling;Mathematical model;Europe;Slabs;tensor;big data;large-scale;canonical polyadic decomposition;PARAFAC;identifiability},\n  doi = {10.23919/EUSIPCO.2019.8902959},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533975.pdf},\n}\n\n
\n
\n\n\n
\n Tensor decomposition models have proven to be effective analysis tools in various applications, including signal processing, machine learning, and communications, to name a few. Canonical polyadic decomposition (CPD) is a very popular model, which decomposes a higher order tensor signal into a sum of rank 1 terms. However, when the tensor size gets big, computing the CPD becomes a lot more challenging. Previous works proposed using random (generalized) tensor sampling or compression to alleviate this challenge. In this work, we propose using a regular tensor sampling framework instead. We show that by appropriately selecting the sampling mechanism, we can simultaneously control memory and computational complexity, while guaranteeing identifiability at the same time. Numerical experiments with synthetic and real data showcase the effectiveness of our approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Simple Sparsity-aware Feature LMS Algorithm.\n \n \n \n \n\n\n \n Chaves, G. S.; Lima, M. V. S.; Yazdanpanah, H.; Diniz, P. S. R.; and Ferreira, T. N.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902960,\n  author = {G. S. Chaves and M. V. S. Lima and H. Yazdanpanah and P. S. R. Diniz and T. N. Ferreira},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Simple Sparsity-aware Feature LMS Algorithm},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Many real systems have inherently some type of sparsity. Recently, the feature least-mean square (F-LMS) has been proposed to exploit hidden sparsity. Unlike the existing algorithms, the F-LMS algorithm performs a linear combination of the adaptive coefficients to reveal and then exploit the hidden sparsity. However, many systems have also plain besides hidden sparsity, and the F-LMS algorithm is not able to exploit the former. In this paper, we propose a new algorithm, named simple sparsity-aware F-LMS (SSF-LMS) algorithm, that is capable of exploiting both kinds of sparsity simultaneously. The hidden sparsity is exploited just like in the F-LMS algorithm, whereas the plain sparsity is exploited by means of the discard function applied to the filter coefficients. By doing so, the proposed SSFLMS algorithm not only outperforms the F-LMS algorithm when plain sparsity is also observed, but also requires fewer arithmetic operations. 
Numerical results show that the proposed algorithm has faster speed of convergence and reaches lower steady-state mean-squared error (MSE) than the F-LMS and classical algorithms, when the system has plain and hidden sparsity.},\n  keywords = {least mean squares methods;SSF-LMS;hidden sparsity;F-LMS algorithm;plain sparsity;SSFLMS algorithm;simple sparsity-aware feature LMS algorithm;feature least-mean square;steady-state mean-squared error;Signal processing algorithms;Europe;Signal processing;Linear programming;Convergence;Cost function;Sparse matrices;adaptive filtering;LMS algorithm;feature matrix;discard function;sparsity},\n  doi = {10.23919/EUSIPCO.2019.8902960},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528569.pdf},\n}\n\n
\n
\n\n\n
\n Many real systems inherently have some type of sparsity. Recently, the feature least-mean-square (F-LMS) algorithm has been proposed to exploit hidden sparsity. Unlike the existing algorithms, the F-LMS algorithm performs a linear combination of the adaptive coefficients to reveal and then exploit the hidden sparsity. However, many systems also have plain sparsity besides hidden sparsity, and the F-LMS algorithm is not able to exploit the former. In this paper, we propose a new algorithm, named the simple sparsity-aware F-LMS (SSF-LMS) algorithm, that is capable of exploiting both kinds of sparsity simultaneously. The hidden sparsity is exploited just like in the F-LMS algorithm, whereas the plain sparsity is exploited by means of the discard function applied to the filter coefficients. By doing so, the proposed SSF-LMS algorithm not only outperforms the F-LMS algorithm when plain sparsity is also observed, but also requires fewer arithmetic operations. Numerical results show that the proposed algorithm has a faster speed of convergence and reaches a lower steady-state mean-squared error (MSE) than the F-LMS and classical algorithms, when the system has plain and hidden sparsity.\n
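The plain-sparsity part, the discard function, can be sketched as a standard LMS identification run in which near-zero coefficients are zeroed before filtering. The system, step size and threshold below are illustrative assumptions, and the feature-matrix machinery for hidden sparsity is omitted:

```python
import numpy as np

def discard(w, eps):
    """Discard function: zero every coefficient with magnitude below eps."""
    out = w.copy()
    out[np.abs(out) < eps] = 0.0
    return out

def lms_discard(x, d, order, mu=0.05, eps=1e-3):
    """LMS identification; the discarded weights are used for filtering,
    so sub-threshold coefficients cost no effective multiplications."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent first
        e = d[n] - discard(w, eps) @ u     # filter with discarded weights
        w = w + mu * e * u                 # standard LMS update
    return discard(w, eps)

rng = np.random.default_rng(2)
w_true = np.zeros(16)
w_true[[2, 9]] = [1.0, -0.5]               # a plainly sparse system
x = rng.normal(size=5000)
d = np.convolve(x, w_true)[:len(x)]        # noiseless system output
w_hat = lms_discard(x, d, order=16)
```

The saving in arithmetic comes from the filtering step: taps that the discard function zeroes contribute nothing to the inner product, so a plainly sparse estimate needs only as many multiplications as it has surviving taps.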
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effectiveness of Cross-Domain Architectures for Whisper-to-Normal Speech Conversion.\n \n \n \n \n\n\n \n Parmar, M.; Doshi, S.; Shah, N. J.; Patel, M.; and Patil, H. A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EffectivenessPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902961,\n  author = {M. Parmar and S. Doshi and N. J. Shah and M. Patel and H. A. Patil},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Effectiveness of Cross-Domain Architectures for Whisper-to-Normal Speech Conversion},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Though whisper is a typical way of natural speech communication, it is different from normal speech w.r.t. to speech production and perception perspective. Recently, authors have proposed Generative Adversarial Network (GAN)-based architecture (namely, DiscoGAN) to discover such cross-domain relationships for whisper-to-normal speech (WHSP2SPCH) conversion. In this paper, we extend this study with detailed theory and analysis. In addition, Cycle-consistent Adversarial Network (CycleGAN) is also proposed for the cross-domain WHSP2SPCH conversion. We observe that the proposed systems yield objective results that are comparable to the baseline, and are superior in terms of fundamental frequency (i.e.,F0) prediction. Moreover, we observe that the proposed cross-domain architectures have been preferred 55.75% (on average) times more compared to the traditional GAN in the subjective evaluations. 
This reveals that the proposed method yields a more natural-sounding normal speech converted from whispered speech.},\n  keywords = {speech processing;cross-domain architectures;whisper-to-normal speech conversion;natural speech communication;normal speech w.r.t;speech production;perception perspective;Generative Adversarial Network-based architecture;cross-domain relationships;Cycle-consistent Adversarial Network;cross-domain WHSP2SPCH conversion;natural-sounding normal speech;whispered speech;Gallium nitride;Generators;Linear programming;Computer architecture;Generative adversarial networks;Task analysis;Cepstral analysis;Whisper;Normal Speech;Cross-domain;GAN;DiscoGAN;CycleGAN},\n  doi = {10.23919/EUSIPCO.2019.8902961},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533531.pdf},\n}\n\n
\n
\n\n\n
\n Though whisper is a typical way of natural speech communication, it is different from normal speech w.r.t. speech production and perception. Recently, authors have proposed a Generative Adversarial Network (GAN)-based architecture (namely, DiscoGAN) to discover such cross-domain relationships for whisper-to-normal speech (WHSP2SPCH) conversion. In this paper, we extend this study with detailed theory and analysis. In addition, a Cycle-consistent Adversarial Network (CycleGAN) is also proposed for the cross-domain WHSP2SPCH conversion. We observe that the proposed systems yield objective results that are comparable to the baseline, and are superior in terms of fundamental frequency (i.e., F0) prediction. Moreover, we observe that the proposed cross-domain architectures have been preferred 55.75% (on average) more often than the traditional GAN in the subjective evaluations. This reveals that the proposed method yields more natural-sounding normal speech converted from whispered speech.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Demodulation Algorithm Based on Higher Order Synchrosqueezing.\n \n \n \n \n\n\n \n Pham, D. -.; and Meignen, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DemodulationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902962,\n  author = {D. -H. Pham and S. Meignen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Demodulation Algorithm Based on Higher Order Synchrosqueezing},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper addresses the problem of detecting and retrieving amplitude- and frequency-modulated (AM-FM) components or modes of a multicomponent signal from its time-frequency representation (TFR) corresponding to its short-time Fourier transform. For that purpose, we introduce a novel technique that combines a high order synchrosqueezing transform (FSSTN) with a demodulation procedure. Numerical results on a multicomponent signal, both in noise-free and noisy cases, show the benefits for mode reconstruction of the proposed approach over similar techniques that do not make use of demodulation.},\n  keywords = {demodulation;Fourier transforms;signal representation;time-frequency analysis;mode reconstruction;demodulation algorithm;higher order synchrosqueezing;frequency-modulated;AM-FM;multicomponent signal;time-frequency representation;TFR;high order synchrosqueezing;demodulation procedure;noise-free;noisy cases;Demodulation;Estimation;Fourier transforms;Continuous wavelet transforms;Time-frequency analysis;time-frequency analysis;AM-FM mode;multicomponent signals;synchrosqueezing techniques;demodulation},\n  doi = {10.23919/EUSIPCO.2019.8902962},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532212.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of detecting and retrieving amplitude- and frequency-modulated (AM-FM) components or modes of a multicomponent signal from its time-frequency representation (TFR) corresponding to its short-time Fourier transform. For that purpose, we introduce a novel technique that combines a high order synchrosqueezing transform (FSSTN) with a demodulation procedure. Numerical results on a multicomponent signal, both in noise-free and noisy cases, show the benefits for mode reconstruction of the proposed approach over similar techniques that do not make use of demodulation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Transient Analysis of Partitioned-Block Frequency-Domain Adaptive Filters.\n \n \n \n \n\n\n \n Yang, F.; Enzner, G.; and Yang, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"TransientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902963,\n  author = {F. Yang and G. Enzner and J. Yang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Transient Analysis of Partitioned-Block Frequency-Domain Adaptive Filters},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The frequency-domain adaptive filter (FDAF) is very useful for instance acoustic signal processing. The partitioned-block FDAF (PBFDAF) is a generalization of the FDAF and becomes more popular due to its low latency. Some efforts have been done toward the convergence analysis of PBFDAFs, but they usually use strong approximations and hence came to inaccurate results. This paper presents a unified approach to the transient analysis of both the constrained and unconstrained PBFDAFs based on the overlap-save structure. Using the independence assumption, we derive the analytical expressions for the mean and mean-square performance of PBFDAFs. Our analysis does not assume a specific model for the inputs and provides a quite general framework. Computer simulations confirm a good match between our theory and experimental results.},\n  keywords = {acoustic signal processing;adaptive filters;adaptive signal processing;echo suppression;filtering theory;frequency-domain analysis;least mean squares methods;transient analysis;transient analysis;constrained PBFDAFs;unconstrained PBFDAFs;partitioned-block frequency-domain adaptive filters;frequency-domain adaptive filter;instance acoustic signal processing;partitioned-block FDAF;PBFDAF;convergence analysis;strong approximations;Frequency-domain analysis;Transient analysis;Convergence;Signal processing algorithms;Analytical models;Covariance matrices;Europe;Adaptive filtering;frequency domain;convergence analysis;transient behavior},\n  doi = {10.23919/EUSIPCO.2019.8902963},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530181.pdf},\n}\n\n
\n
\n\n\n
\n The frequency-domain adaptive filter (FDAF) is very useful in, for instance, acoustic signal processing. The partitioned-block FDAF (PBFDAF) is a generalization of the FDAF and has become more popular due to its low latency. Some efforts have been made toward the convergence analysis of PBFDAFs, but they usually use strong approximations and hence arrive at inaccurate results. This paper presents a unified approach to the transient analysis of both the constrained and unconstrained PBFDAFs based on the overlap-save structure. Using the independence assumption, we derive the analytical expressions for the mean and mean-square performance of PBFDAFs. Our analysis does not assume a specific model for the inputs and provides a quite general framework. Computer simulations confirm a good match between our theory and experimental results.\n
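The partitioned overlap-save structure that this analysis builds on can be sketched for a fixed (non-adaptive) filter: the impulse response is split into P blocks of length B, each filtered with a 2B-point FFT against a spectral delay line of input blocks, and only the last B output samples of each inverse FFT are kept. Block length and signals below are arbitrary illustrative choices, and the adaptation and gradient-constraint steps are omitted:

```python
import numpy as np

def partitioned_overlap_save(x, h, B):
    """Partitioned-block overlap-save convolution with FFT size 2B."""
    P = -(-len(h) // B)                            # number of partitions
    hp = np.zeros(P * B)
    hp[:len(h)] = h
    H = np.fft.rfft(hp.reshape(P, B), n=2 * B, axis=1)   # partition spectra
    x = np.concatenate([x, np.zeros((-len(x)) % B)])     # pad to block size
    Xline = np.zeros((P, B + 1), dtype=complex)          # spectral delay line
    prev = np.zeros(B)
    y = []
    for k in range(len(x) // B):
        blk = x[k * B:(k + 1) * B]
        Xline = np.roll(Xline, 1, axis=0)          # shift delay line
        Xline[0] = np.fft.rfft(np.concatenate([prev, blk]))
        prev = blk
        Y = (H * Xline).sum(axis=0)                # accumulate partitions
        y.append(np.fft.irfft(Y)[B:])              # overlap-save: keep last B
    return np.concatenate(y)

rng = np.random.default_rng(3)
x = rng.normal(size=256)
h = rng.normal(size=32)
y = partitioned_overlap_save(x, h, B=8)
ref = np.convolve(x, h)[:len(y)]                   # direct linear convolution
```

Partition p holds taps pB..(p+1)B-1 and is multiplied with the input spectrum from p blocks ago, which is why the latency is one length-B block rather than the full filter length.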
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Energy Separation Algorithm Based Spectrum Estimation for Very Short Duration of Speech.\n \n \n \n \n\n\n \n Patil, H. A.; and Viswanath, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EnergyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902964,\n  author = {H. A. Patil and S. Viswanath},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Energy Separation Algorithm Based Spectrum Estimation for Very Short Duration of Speech},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose a novel method of estimation of short-time spectrum for analysis of speech signals in the closed phase regions of glottal activity. This method uses Teager Energy Operator (TEO) and a related Energy Separation Algorithm (ESA) iteratively, along with the design of digital resonator to estimate formants from a very short duration of the speech. The spectrum of cascade of these four resonators is referred to as our proposed ESA spectrum of speech. The novelty of the proposed approach lies in using very short duration of analysis speech frame that is synchronized with glottal closure instant (i.e., about 1-2 ms) to estimate the proposed spectrum in order to ensure that the vocal tract system characteristics do not change much within this interval and to alleviate erroneous estimation of formants due to nonlinear interaction of excitation source with the vocal tract system. 
To demonstrate the effectiveness of proposed algorithm for formant estimation on speech data, we have used 1.5 ms speech signal corresponding to closed phase glottal cycles derived from a male speaker of CMU-ARCTIC database.},\n  keywords = {filtering theory;speech processing;spectrum estimation;short-time spectrum;speech signals;closed phase regions;glottal activity;Teager Energy Operator;related Energy Separation Algorithm;digital resonator;formants;resonators;ESA spectrum;analysis speech frame;glottal closure instant;vocal tract system characteristics;erroneous estimation;formant estimation;speech data;closed phase glottal cycles;time 1.5 ms;time 1.0 ms to 2.0 ms;Estimation;Resonant frequency;Frequency estimation;Bandwidth;Speech processing;Frequency response;Gabor filters;Teager Energy Operator;Energy Separation Algorithm;SESA;Digital Resonator;ESA Spectrum},\n  doi = {10.23919/EUSIPCO.2019.8902964},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533768.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a novel method of estimation of the short-time spectrum for analysis of speech signals in the closed phase regions of glottal activity. This method uses the Teager Energy Operator (TEO) and a related Energy Separation Algorithm (ESA) iteratively, along with the design of a digital resonator, to estimate formants from a very short duration of speech. The spectrum of the cascade of these four resonators is referred to as our proposed ESA spectrum of speech. The novelty of the proposed approach lies in using a very short duration of the analysis speech frame that is synchronized with the glottal closure instant (i.e., about 1-2 ms) to estimate the proposed spectrum, in order to ensure that the vocal tract system characteristics do not change much within this interval and to alleviate erroneous estimation of formants due to nonlinear interaction of the excitation source with the vocal tract system. To demonstrate the effectiveness of the proposed algorithm for formant estimation on speech data, we have used a 1.5 ms speech signal corresponding to closed phase glottal cycles derived from a male speaker of the CMU-ARCTIC database.\n
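The TEO and the energy separation step can be written out directly. The sketch below uses the standard discrete DESA-2 form (Psi[x](n) = x(n)^2 - x(n-1)x(n+1), applied to the signal and its symmetric difference), which is not necessarily the exact variant used in the paper; the 900 Hz test tone is illustrative:

```python
import numpy as np

def teo(x):
    """Teager energy operator: Psi[x](n) = x(n)^2 - x(n-1)x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa2(x):
    """DESA-2 energy separation: instantaneous amplitude and frequency
    (radians/sample) from Psi of x and of its symmetric difference."""
    y = x[2:] - x[:-2]                   # y(n) = x(n+1) - x(n-1)
    px = teo(x)[1:-1]                    # trim to align with teo(y)
    py = teo(y)
    freq = 0.5 * np.arccos(1.0 - py / (2.0 * px))
    amp = 2.0 * px / np.sqrt(py)
    return amp, freq

fs = 8000.0
n = np.arange(400)
x = 0.7 * np.cos(2 * np.pi * 900.0 / fs * n)     # 900 Hz test tone
amp, freq = desa2(x)
f_hz = float(np.median(freq)) * fs / (2 * np.pi)
```

For a pure tone A cos(Omega n), Psi[x] = A^2 sin^2(Omega), so the two operator outputs separate cleanly into amplitude and frequency; on real closed-phase speech frames the same operators are applied per short frame, as the abstract describes.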
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Impact Sounds Classification for Interactive Applications via Discriminative Dictionary Learning.\n \n \n \n \n\n\n \n Tzagkarakis, C.; Stefanakis, N.; and Tzagkarakis, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ImpactPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902965,\n  author = {C. Tzagkarakis and N. Stefanakis and G. Tzagkarakis},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Impact Sounds Classification for Interactive Applications via Discriminative Dictionary Learning},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Classification of impulsive events produced from the acoustic stimulation of everyday objects opens the door to exciting interactive applications, as for example, gestural control of sound synthesis. Such events may exhibit significant variability, which makes their recognition a very challenging task. Furthermore, the fact that interactive systems require an immediate response to achieve low latency in real-time scenarios, poses major constraints to be overcome. This paper focuses on the design of a novel method for identifying the sound-producing objects, as well as the location of impact of each event, under a low-latency assumption. To this end, a sparse representation coding framework is adopted based on learned discriminative dictionaries from short training and testing data. The performance of the proposed method is evaluated on a set of real impact sounds and compared against a nearest neighbour classifier. 
The experimental results demonstrate the high performance improvements of our proposed method, both in terms of classification accuracy and low latency.},\n  keywords = {acoustic signal processing;interactive systems;nearest neighbour methods;signal classification;signal representation;sound-producing objects;low-latency assumption;sparse representation coding framework;learned discriminative dictionaries;discriminative dictionary learning;impulsive events;acoustic stimulation;interactive applications;gestural control;sound synthesis;interactive systems;impact sound classification;nearest neighbour classifier;Dictionaries;Acoustics;Encoding;Training data;Training;Optimization;Matching pursuit algorithms;Impact sound classification;real-time processing;sparse representation classification;discriminative dictionary sparse coding},\n  doi = {10.23919/EUSIPCO.2019.8902965},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533781.pdf},\n}\n\n
\n
\n\n\n
\n Classification of impulsive events produced by the acoustic stimulation of everyday objects opens the door to exciting interactive applications, such as gestural control of sound synthesis. Such events may exhibit significant variability, which makes their recognition a very challenging task. Furthermore, the fact that interactive systems require an immediate response to achieve low latency in real-time scenarios poses major constraints to be overcome. This paper focuses on the design of a novel method for identifying the sound-producing objects, as well as the location of impact of each event, under a low-latency assumption. To this end, a sparse representation coding framework is adopted, based on discriminative dictionaries learned from short training and testing data. The performance of the proposed method is evaluated on a set of real impact sounds and compared against a nearest neighbour classifier. The experimental results demonstrate substantial performance improvements of the proposed method, both in terms of classification accuracy and low latency.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Signal subspace change detection in structured covariance matrices.\n \n \n \n \n\n\n \n Abdallah, R. B.; Breloy, A.; Taylor, A.; El Korso, M. N.; and Lautru, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SignalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902966,\n  author = {R. B. Abdallah and A. Breloy and A. Taylor and M. N. {El Korso} and D. Lautru},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Signal subspace change detection in structured covariance matrices},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Testing common properties between covariance matrices is a relevant approach in a plethora of applications. In this paper, we derive a new statistical test in the context of structured covariance matrices. Specifically, we consider low rank signal component plus white Gaussian noise structure. Our aim is to test the equality of the principal subspace, i.e., subspace spanned by the principal eigenvectors of a group of covariance matrices. A decision statistic is derived using the generalized likelihood ratio test. As the formulation of the proposed test implies a non-trivial optimization problem, we derive an appropriate majorization-minimization algorithm. Finally, numerical simulations illustrate the properties of the newly proposed detector compared to the state of the art.},\n  keywords = {covariance matrices;eigenvalues and eigenfunctions;Gaussian noise;radar detection;signal detection;statistical testing;statistical test;structured covariance matrices;low rank signal component plus white Gaussian noise structure;principal subspace;generalized likelihood ratio test;signal subspace change detection;majorization-minimization algorithm;Testing;Signal processing algorithms;Maximum likelihood estimation;Covariance matrices;Detectors;Signal processing;Optimization;Generalized likelihood ratio test;subspace testing;low rank structure;majorization-minimization algorithm},\n  doi = {10.23919/EUSIPCO.2019.8902966},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533085.pdf},\n}\n\n
\n
\n\n\n
\n Testing common properties between covariance matrices is a relevant approach in a plethora of applications. In this paper, we derive a new statistical test in the context of structured covariance matrices. Specifically, we consider a low-rank signal component plus white Gaussian noise structure. Our aim is to test the equality of the principal subspace, i.e., the subspace spanned by the principal eigenvectors of a group of covariance matrices. A decision statistic is derived using the generalized likelihood ratio test. As the formulation of the proposed test implies a non-trivial optimization problem, we derive an appropriate majorization-minimization algorithm. Finally, numerical simulations illustrate the properties of the newly proposed detector compared to the state of the art.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Estimation of Sensor Array Signal Model Parameters Using Factor Analysis.\n \n \n \n \n\n\n \n Koutrouvelis, A. I.; Hendriks, R. C.; Heusdens, R.; and Jensen, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EstimationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902967,\n  author = {A. I. Koutrouvelis and R. C. Hendriks and R. Heusdens and J. Jensen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimation of Sensor Array Signal Model Parameters Using Factor Analysis},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Factor analysis is a popular tool in multivariate statistics, applied in several areas of study such as psychology, economics, chemistry and signal processing. Given a set of observed random variables, factor analysis aims at explaining and analyzing the correlation between these random variables. This is done by finding a meaningful structural model representation for the correlation matrix of the observed random variables, and subsequently estimating the underlying model parameters. In this paper, we focus on factor analysis methods applied to a commonly used signal model for sensor arrays applications and use it to jointly estimate the underlying model parameters. In addition we discuss practical considerations of these methods.},\n  keywords = {array signal processing;correlation methods;matrix algebra;parameter estimation;random processes;sensor arrays;signal representation;statistical analysis;multivariate statistics;signal processing;observed random variables;factor analysis methods;signal model;structural model representation;sensor array signal model parameter estimation;correlation matrix;Random variables;Covariance matrices;Sensor arrays;Array signal processing;Eigenvalues and eigenfunctions;Estimation;Factor analysis;sensor array;signal processing},\n  doi = {10.23919/EUSIPCO.2019.8902967},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532529.pdf},\n}\n\n
\n
\n\n\n
\n Factor analysis is a popular tool in multivariate statistics, applied in several areas of study such as psychology, economics, chemistry and signal processing. Given a set of observed random variables, factor analysis aims at explaining and analyzing the correlation between these random variables. This is done by finding a meaningful structural model representation for the correlation matrix of the observed random variables, and subsequently estimating the underlying model parameters. In this paper, we focus on factor analysis methods applied to a commonly used signal model for sensor array applications and use them to jointly estimate the underlying model parameters. In addition, we discuss practical considerations of these methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Image and Ontological Information Fusion for Cataract Surgery Recommendation.\n \n \n \n \n\n\n \n Galveia, J. N.; d. S. Cruz, L. A.; and Travassos, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ImagePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902968,\n  author = {J. N. Galveia and L. A. d. S. Cruz and A. Travassos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Image and Ontological Information Fusion for Cataract Surgery Recommendation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Widely available digital ophthalmology data can be used to implement accurate Computer-Aided Diagnosis Systems. In this article we describe an automatic system which combines text clinical annotations, demographical information, as well as different types of ophthalmology image data to issue a recommendation for cataract surgery. Textual annotations are encoded using a standardized medical ontology nomenclature to enable higher level modeling. Image data is processed by convolutional neural networks to extract compact features. These two types of data together with demographical information are then inputted into a random forest classifier which then decides if surgery is recommended. The method proposed is evaluated on a real-life dataset, achieving accuracies and precisions around 90%. 
Several conclusions are drawn concerning the usefulness of the different input data types, used independently or combined.},\n  keywords = {convolutional neural nets;eye;feature extraction;image classification;image fusion;image segmentation;medical image processing;ontologies (artificial intelligence);random forests;surgery;text analysis;ontological information fusion;cataract surgery recommendation;computer-aided diagnosis systems;automatic system;text clinical annotations;demographical information;ophthalmology image data;textual annotations;standardized medical ontology nomenclature;level modeling;convolutional neural networks;compact features;random forest classifier;input data types;digital ophthalmology data;Radio frequency;Ontologies;Cataracts;Surgery;Annotations;Medical diagnostic imaging;Ophthalmology;Information Fusion;Multimodal Image;Ontology;Cataract Surgery},\n  doi = {10.23919/EUSIPCO.2019.8902968},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533578.pdf},\n}\n\n
\n
\n\n\n
\n Widely available digital ophthalmology data can be used to implement accurate Computer-Aided Diagnosis Systems. In this article we describe an automatic system which combines textual clinical annotations, demographic information, and different types of ophthalmology image data to issue a recommendation for cataract surgery. Textual annotations are encoded using a standardized medical ontology nomenclature to enable higher-level modeling. Image data are processed by convolutional neural networks to extract compact features. These two types of data, together with the demographic information, are then fed into a random forest classifier, which decides whether surgery is recommended. The proposed method is evaluated on a real-life dataset, achieving accuracies and precisions around 90%. Several conclusions are drawn concerning the usefulness of the different input data types, used independently or combined.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Self-Tuned Architecture for Human Activity Recognition Based on a Dynamical Recurrence Analysis of Wearable Sensor Data.\n \n \n \n \n\n\n \n Zervou, M. . -.; Tzagkarakis, G.; Panousopoulou, A.; and Tsakalides, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902969,\n  author = {M. . -A. Zervou and G. Tzagkarakis and A. Panousopoulou and P. Tsakalides},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Self-Tuned Architecture for Human Activity Recognition Based on a Dynamical Recurrence Analysis of Wearable Sensor Data},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Human activity recognition (HAR) is encountered in a plethora of applications, such as pervasive health care systems and smart homes. The majority of existing HAR techniques employs features extracted from symbolic or frequency-domain representations of the associated data, whilst ignoring completely the behavior of the underlying data generating dynamical system. To address this problem, this work proposes a novel self-tuned architecture for feature extraction and activity recognition by modeling directly the inherent dynamics of wearable sensor data in higher-dimensional phase spaces, which encode state recurrences for each individual activity. 
Experimental evaluation on real data of leisure activities demonstrates an improved recognition accuracy of our method when compared against a state-of-the-art motif-based approach using symbolic representations.},\n  keywords = {feature extraction;health care;medical computing;patient monitoring;pattern recognition;sensor fusion;ubiquitous computing;self-tuned architecture;human activity recognition;dynamical recurrence analysis;wearable sensor data;pervasive health care systems;smart homes;HAR techniques;frequency-domain representations;feature extraction;encode state recurrences;leisure activities;motif-based approach;Feature extraction;Computer architecture;Activity recognition;Microsoft Windows;Time series analysis;Wearable sensors;Trajectory;Human activity recognition;recurrence quantification analysis;nonlinear data analysis;motif discovery;wearable sensors},\n  doi = {10.23919/EUSIPCO.2019.8902969},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529013.pdf},\n}\n\n
\n
\n\n\n
\n Human activity recognition (HAR) is encountered in a plethora of applications, such as pervasive health care systems and smart homes. Most existing HAR techniques employ features extracted from symbolic or frequency-domain representations of the associated data, while completely ignoring the behavior of the underlying data-generating dynamical system. To address this problem, this work proposes a novel self-tuned architecture for feature extraction and activity recognition that directly models the inherent dynamics of wearable sensor data in higher-dimensional phase spaces, which encode state recurrences for each individual activity. Experimental evaluation on real data of leisure activities demonstrates an improved recognition accuracy of our method when compared against a state-of-the-art motif-based approach using symbolic representations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fractional Programming for Energy Efficient Power Control in Uplink Massive MIMO Systems.\n \n \n \n \n\n\n \n Kassaw, A.; Hailemariam, D.; Fauß, M.; and Zoubir, A. M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FractionalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902970,\n  author = {A. Kassaw and D. Hailemariam and M. Fauß and A. M. Zoubir},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Fractional Programming for Energy Efficient Power Control in Uplink Massive MIMO Systems},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Recently, massive multiple input multiple output (MIMO) is considered as a promising technology to significantly improve the spectral efficiency (SE) and energy efficiency (EE) of fifth generation (5G) networks. Effective control of the transmit power and other network resources helps to maximize energy efficiency of massive MIMO systems. In this work, an energy efficient power control algorithm is proposed for uplink massive MIMO systems with zero forcing (ZF) detection and imperfect channel state information (CSI) at a base station (BS). By using large system analysis, we first derive closed-form lower bound spectral efficiency expression. Then, by utilizing methods from fractional programming theory, an energy efficient power control algorithm is derived. 
Numerical results validate the effectiveness of the proposed power control algorithm and show the impacts of maximum transmitter power and minimum rate constraints on energy efficiency maximization.},\n  keywords = {mathematical programming;MIMO communication;power control;radio transmitters;telecommunication control;wireless channels;fifth generation networks;energy efficient power control algorithm;uplink massive MIMO systems;closed-form lower bound spectral efficiency expression;fractional programming theory;maximum power transmission;energy efficiency maximization;massive multiple input multiple output systems;SE;spectral efficiency;energy efficiency;EE;5G networks;ZF detection;zero forcing detection;imperfect channel state information;CSI;BS;base station;large system analysis;BS;Massive MIMO;Channel estimation;Uplink;Power demand;Power control;Programming;Interference;Massive MIMO;Spectral Efficiency;Energy Efficiency;Fractional Programming Theory},\n  doi = {10.23919/EUSIPCO.2019.8902970},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533278.pdf},\n}\n\n
\n
\n\n\n
\n Recently, massive multiple-input multiple-output (MIMO) has been considered a promising technology to significantly improve the spectral efficiency (SE) and energy efficiency (EE) of fifth generation (5G) networks. Effective control of the transmit power and other network resources helps to maximize the energy efficiency of massive MIMO systems. In this work, an energy-efficient power control algorithm is proposed for uplink massive MIMO systems with zero-forcing (ZF) detection and imperfect channel state information (CSI) at the base station (BS). Using large-system analysis, we first derive a closed-form lower bound on the spectral efficiency. Then, utilizing methods from fractional programming theory, an energy-efficient power control algorithm is derived. Numerical results validate the effectiveness of the proposed power control algorithm and show the impact of maximum transmit power and minimum rate constraints on energy efficiency maximization.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Generalizing Graph Convolutional Neural Networks with Edge-Variant Recursions on Graphs.\n \n \n \n \n\n\n \n Isufi, E.; Gama, F.; and Ribeiro, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"GeneralizingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902971,\n  author = {E. Isufi and F. Gama and A. Ribeiro},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Generalizing Graph Convolutional Neural Networks with Edge-Variant Recursions on Graphs},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper reviews graph convolutional neural networks (GCNNs) through the lens of edge-variant graph filters. The edge-variant graph filter is a finite order, linear, and local recursion that allows each node, in each iteration, to weigh differently the information of its neighbors. By exploiting this recursion, we put forth a general framework for GCNNs which considers state-of-the-art solutions as particular cases. This framework results useful to i) understand the tradeoff between local detail and the number of parameters of each solution and ii) provide guidelines for developing a myriad of novel approaches that can be implemented locally in the vertex domain. One of such approaches is presented here showing superior performance w.r.t. current alternatives in graph signal classification problems.},\n  keywords = {approximation theory;convolutional neural nets;filtering theory;graph theory;signal classification;local recursion;GCNNs;graph signal classification problems;graph convolutional neural networks;edge-variant recursions;edge-variant graph filter;vertex domain;Graph convolutional neural networks;graph signal processing;graph filters;edge-variant},\n  doi = {10.23919/EUSIPCO.2019.8902971},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533683.pdf},\n}\n\n
\n
\n\n\n
\n This paper reviews graph convolutional neural networks (GCNNs) through the lens of edge-variant graph filters. The edge-variant graph filter is a finite-order, linear, and local recursion that allows each node, in each iteration, to weigh the information of its neighbors differently. By exploiting this recursion, we put forth a general framework for GCNNs that considers state-of-the-art solutions as particular cases. This framework proves useful to i) understand the tradeoff between local detail and the number of parameters of each solution and ii) provide guidelines for developing a myriad of novel approaches that can be implemented locally in the vertex domain. One such approach is presented here, showing superior performance with respect to current alternatives in graph signal classification problems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Phase-Based Acoustic Tracking of Drones using a Microphone Array.\n \n \n \n \n\n\n \n Baggenstoss, P. M.; Springer, M.; Oispuu, M.; and Kurth, F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902972,\n  author = {P. M. Baggenstoss and M. Springer and M. Oispuu and F. Kurth},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Phase-Based Acoustic Tracking of Drones using a Microphone Array},\n  year = {2019},\n  pages = {1-5},\n  abstract = {An efficient phase-based acoustic detection and tracking algorithm for drones is presented. The algorithm separately tracks the time difference of arrival (TDOA) of the incoming signal with respect to microphone pairs based on the phase in the discrete Fourier transform (DFT) bins. The direction of arrival (DOA) of the drone is determined by forming a solution curve corresponding to each TDOA, then clustering the curve intersections. The algorithm avoids the computationally expensive grid-search over DOA, so is significantly more efficient than beamforming. The proposed algorithm and the maximum likelihood (ML) processor (beamformer) are compared in simulated and real data scenarios. Using simulated data, the ML estimator is shown to agree with the Cramer-Rao lower bound (CRLB) and the proposed algorithm is shown to approach the performance of ML at higher SNR. In real data scenarios, the phase-based algorithm implemented with simple alpha-beta TDOA trackers consistently tracked the target through difficult maneuvers at short and long range, showing no degradation with respect to the beamformer. 
Other potential advantages include robustness against interference and ability to create phase-based spectrograms for classification.},\n  keywords = {acoustic signal detection;array signal processing;direction-of-arrival estimation;discrete Fourier transforms;iterative methods;maximum likelihood estimation;microphone arrays;microphones;target tracking;time-of-arrival estimation;drone;microphone pairs;curve intersections;computationally expensive grid-search;beamforming;maximum likelihood processor;data scenarios;phase-based algorithm;simple alpha-beta TDOA trackers;phase-based spectrograms;microphone array;tracking algorithm;phase-based acoustic tracking;phase-based acoustic detection;Microphones;Drones;Discrete Fourier transforms;Direction-of-arrival estimation;Maximum likelihood estimation;Target tracking;Sensors},\n  doi = {10.23919/EUSIPCO.2019.8902972},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529858.pdf},\n}\n\n
\n
\n\n\n
\n An efficient phase-based acoustic detection and tracking algorithm for drones is presented. The algorithm separately tracks the time difference of arrival (TDOA) of the incoming signal with respect to microphone pairs based on the phase in the discrete Fourier transform (DFT) bins. The direction of arrival (DOA) of the drone is determined by forming a solution curve corresponding to each TDOA and then clustering the curve intersections. The algorithm avoids the computationally expensive grid search over DOA and is therefore significantly more efficient than beamforming. The proposed algorithm and the maximum likelihood (ML) processor (beamformer) are compared in simulated and real data scenarios. Using simulated data, the ML estimator is shown to agree with the Cramer-Rao lower bound (CRLB), and the proposed algorithm is shown to approach the performance of ML at higher SNR. In real data scenarios, the phase-based algorithm, implemented with simple alpha-beta TDOA trackers, consistently tracked the target through difficult maneuvers at short and long range, showing no degradation with respect to the beamformer. Other potential advantages include robustness against interference and the ability to create phase-based spectrograms for classification.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Deep Convolutional and LSTM Neural Network Architectures on Leap Motion Hand Tracking Data Sequences.\n \n \n \n \n\n\n \n Kritsis, K.; Kaliakatsos-Papakostas, M.; Katsouros, V.; and Pikrakis, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902973,\n  author = {K. Kritsis and M. Kaliakatsos-Papakostas and V. Katsouros and A. Pikrakis},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep Convolutional and LSTM Neural Network Architectures on Leap Motion Hand Tracking Data Sequences},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper focuses on the hand gesture recognition problem, in which input is a multidimensional time series signal acquired from a Leap Motion Sensor and output is a predefined set of gestures. In the present work, we propose the adoption of Convolutional Neural Networks (CNNs), either in combination with a Long Short-Term Memory (LSTM) neural network (i.e. CNN-LSTM), or standalone in a deep architecture (i.e. dCNN) to automate feature learning and classification from the raw input data. The learned features are considered as the higher level abstract representation of low level raw time series signals and are employed in a unified supervised learning and classification model. 
The proposed CNN-LSTM and deep CNN models demonstrate recognition rates of 94% on the Leap Motion Hand Gestures for Interaction with 3D Virtual Music Instruments dataset, which outperforms previously proposed models of handcrafted and automated learned features on LSTM networks.},\n  keywords = {convolutional neural nets;feature extraction;gesture recognition;image classification;learning (artificial intelligence);recurrent neural nets;time series;LSTM neural network architectures;hand gesture recognition problem;multidimensional time series signal;convolutional neural networks;long short-term memory neural network;CNN-LSTM;deep architecture;feature learning;raw input data;higher level abstract representation;low level raw time series signals;unified supervised learning;classification model;deep CNN models;recognition rates;LSTM networks;leap motion hand gestures;leap motion hand tracking data;Instruments;Convolution;Three-dimensional displays;Music;Training;Feature extraction;Neural networks;gesture recognition;3D musical instrument interaction;CNN;LSTM;CNN-LSTM models},\n  doi = {10.23919/EUSIPCO.2019.8902973},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533762.pdf},\n}\n\n
\n
\n\n\n
\n This paper focuses on the hand gesture recognition problem, in which the input is a multidimensional time series signal acquired from a Leap Motion sensor and the output is a predefined set of gestures. In the present work, we propose the adoption of Convolutional Neural Networks (CNNs), either in combination with a Long Short-Term Memory (LSTM) neural network (i.e., CNN-LSTM) or standalone in a deep architecture (i.e., dCNN), to automate feature learning and classification from the raw input data. The learned features are considered a higher-level abstract representation of the low-level raw time series signals and are employed in a unified supervised learning and classification model. The proposed CNN-LSTM and deep CNN models achieve recognition rates of 94% on the Leap Motion Hand Gestures for Interaction with 3D Virtual Music Instruments dataset, outperforming previously proposed models with handcrafted and automatically learned features on LSTM networks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Comparison of Parameter Estimation Methods for Single-Microphone Multi-Frame Wiener Filtering.\n \n \n \n\n\n \n Fischer, D.; Brümann, K.; and Doclo, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902974,\n  author = {D. Fischer and K. Brümann and S. Doclo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Comparison of Parameter Estimation Methods for Single-Microphone Multi-Frame Wiener Filtering},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The multi-frame Wiener filter (MFWF) for single-microphone speech enhancement is able to exploit speech correlation across consecutive time-frames in the short-time Fourier transform (STFT) domain. To achieve a high speech correlation, typically an STFT with a high time-resolution but a low frequency-resolution is applied. The MFWF can be decomposed into a multi-frame minimum power distortionless response (MFMPDR) filter and a single-frame Wiener postfilter. To implement the MFWF using this decomposition, estimates of several parameters are required, namely the speech correlation vector, the noisy speech correlation matrix, and the power spectral densities at the output of the MFMPDR filter. Correlations can be estimated either directly in the low frequency-resolution STFT filterbank, indirectly by estimating periodograms in a high frequency-resolution filterbank and applying the Wiener-Khinchin theorem, or in a combined way. In this paper, we compare the performance of different estimators for the required parameters.
Experimental results for different speech material, noise conditions, and signal-to-noise ratios show that using a combined estimator for the speech correlation vector yields the best results in terms of speech quality compared to existing direct and indirect estimators.},\n  keywords = {array signal processing;correlation methods;filtering theory;Fourier transforms;microphones;noise;parameter estimation;spectral analysis;speech enhancement;Wiener filters;MFWF;speech correlation vector;noisy speech correlation matrix;power spectral densities;MFMPDR filter;low frequency-resolution STFT filterbank;high frequency-resolution filterbank;Wiener-Khinchin theorem;different estimators;required parameters;different speech material;combined estimator;speech quality;existing direct estimators;indirect estimators;parameter estimation methods;single-microphone multi-frame Wiener;multiframe Wiener filter;single-microphone speech enhancement;consecutive time-frames;high speech correlation;high time-resolution;multiframe minimum power distortionless response filter;single-frame Wiener postfilter;Correlation;Noise measurement;Frequency estimation;Time-frequency analysis;Matrix decomposition;Speech processing;Europe},\n  doi = {10.23919/EUSIPCO.2019.8902974},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The multi-frame Wiener filter (MFWF) for single-microphone speech enhancement is able to exploit speech correlation across consecutive time-frames in the short-time Fourier transform (STFT) domain. To achieve a high speech correlation, typically an STFT with a high time-resolution but a low frequency-resolution is applied. The MFWF can be decomposed into a multi-frame minimum power distortionless response (MFMPDR) filter and a single-frame Wiener postfilter. To implement the MFWF using this decomposition, estimates of several parameters are required, namely the speech correlation vector, the noisy speech correlation matrix, and the power spectral densities at the output of the MFMPDR filter. Correlations can be estimated either directly in the low frequency-resolution STFT filterbank, indirectly by estimating periodograms in a high frequency-resolution filterbank and applying the Wiener-Khinchin theorem, or in a combined way. In this paper, we compare the performance of different estimators for the required parameters. Experimental results for different speech material, noise conditions, and signal-to-noise ratios show that using a combined estimator for the speech correlation vector yields the best results in terms of speech quality compared to existing direct and indirect estimators.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n NAViDAd: A No-Reference Audio-Visual Quality Metric Based on a Deep Autoencoder.\n \n \n \n \n\n\n \n Martinez, H.; Farias, M. C. Q.; and Hines, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"NAViDAd:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902975,\n  author = {H. Martinez and M. C. Q. Farias and A. Hines},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {NAViDAd: A No-Reference Audio-Visual Quality Metric Based on a Deep Autoencoder},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The development of models for quality prediction of both audio and video signals is a fairly mature field. But, although several multimodal models have been proposed, the area of audio-visual quality prediction is still an emerging area. In fact, despite the reasonable performance obtained by combination and parametric metrics, currently there is no reliable pixel-based audio-visual quality metric. The approach presented in this work is based on the assumption that autoencoders, fed with descriptive audio and video features, might produce a set of features that is able to describe the complex audio and video interactions. Based on this hypothesis, we propose a No-Reference AudioVisual Quality Metric Based on a Deep Autoencoder (NAViDAd). The model visual features are natural scene statistics (NSS) and spatial-temporal measures of the video component. Meanwhile, the audio features are obtained by computing the spectrogram representation of the audio component. The model is formed by a 2-layer framework that includes a deep autoencoder layer and a classification layer. These two layers are stacked and trained to build the deep neural network model. The model is trained and tested using a large set of stimuli, containing representative audio and video artifacts. 
The model performed well when tested against the UnB-AV and the LiveNetflix-II databases.},\n  keywords = {audio signal processing;audio-visual systems;feature extraction;learning (artificial intelligence);natural scenes;neural nets;video signal processing;NAViDAd;No-Reference Audio-Visual Quality Metric;video signals;fairly mature field;multimodal models;audio-visual quality prediction;parametric metrics;reliable pixel-based audio-visual quality metric;autoencoders;descriptive audio;video features;complex audio;video interactions;No-Reference AudioVisual Quality Metric;model visual features;video component;audio features;audio component;deep autoencoder layer;deep neural network model;representative audio;video artifacts;audio-visual;quality metrics;no-reference;distortions;autoencoder;NAViDAd},\n  doi = {10.23919/EUSIPCO.2019.8902975},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533455.pdf},\n}\n\n
\n
\n\n\n
\n The development of models for quality prediction of both audio and video signals is a fairly mature field. However, although several multimodal models have been proposed, audio-visual quality prediction is still an emerging area. In fact, despite the reasonable performance obtained by combination and parametric metrics, there is currently no reliable pixel-based audio-visual quality metric. The approach presented in this work is based on the assumption that autoencoders, fed with descriptive audio and video features, might produce a set of features that is able to describe the complex audio and video interactions. Based on this hypothesis, we propose a No-Reference AudioVisual Quality Metric Based on a Deep Autoencoder (NAViDAd). The visual features of the model are natural scene statistics (NSS) and spatial-temporal measures of the video component. Meanwhile, the audio features are obtained by computing the spectrogram representation of the audio component. The model is formed by a 2-layer framework that includes a deep autoencoder layer and a classification layer. These two layers are stacked and trained to build the deep neural network model. The model is trained and tested using a large set of stimuli containing representative audio and video artifacts. The model performed well when tested against the UnB-AV and the LiveNetflix-II databases.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Tensor Network Kalman Filter for LTI Systems.\n \n \n \n \n\n\n \n Gedon, D.; Piscaer, P.; Batselier, K.; Smith, C.; and Verhaegen, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"TensorPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902976,\n  author = {D. Gedon and P. Piscaer and K. Batselier and C. Smith and M. Verhaegen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Tensor Network Kalman Filter for LTI Systems},\n  year = {2019},\n  pages = {1-5},\n  abstract = {An extension of the Tensor Network (TN) Kalman filter [2], [3] for large scale LTI systems is presented in this paper. The TN Kalman filter can handle exponentially large state vectors without constructing them explicitly. In order to have efficient algebraic operations, a low TN rank is required. We exploit the possibility to approximate the covariance matrix as a TN with a low TN rank. This reduces the computational complexity for general SISO and MIMO LTI systems with TN rank greater than one significantly while obtaining an accurate estimation. Improvements of this method in terms of computational complexity compared to the conventional Kalman filter are demonstrated in numerical simulations for large scale systems.},\n  keywords = {computational complexity;covariance matrices;Kalman filters;large-scale systems;linear systems;MIMO systems;tensors;tensor network Kalman filter;large scale LTI systems;TN Kalman filter;state vectors;algebraic operations;low TN rank;computational complexity;conventional Kalman filter;SISO;MIMO LTI systems;Tensors;Kalman filters;Linear systems;MIMO communication;Computational complexity;System dynamics;Matrix decomposition;Kalman filter;LTI systems;tensors;tensor train;large scale systems;SISO;MIMO;curse of dimensionality},\n  doi = {10.23919/EUSIPCO.2019.8902976},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529297.pdf},\n}\n\n
\n
\n\n\n
\n An extension of the Tensor Network (TN) Kalman filter [2], [3] for large scale LTI systems is presented in this paper. The TN Kalman filter can handle exponentially large state vectors without constructing them explicitly. To keep the algebraic operations efficient, a low TN rank is required. We exploit the possibility of approximating the covariance matrix as a TN with a low TN rank. This significantly reduces the computational complexity for general SISO and MIMO LTI systems with TN rank greater than one, while maintaining accurate estimation. Improvements of this method in terms of computational complexity compared to the conventional Kalman filter are demonstrated in numerical simulations for large scale systems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Deep Transfer Learning for Single-Channel Automatic Sleep Staging with Channel Mismatch.\n \n \n \n \n\n\n \n Phan, H.; Chén, O. Y.; Koch, P.; Mertins, A.; and Vos, M. D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902977,\n  author = {H. Phan and O. Y. Chén and P. Koch and A. Mertins and M. D. Vos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep Transfer Learning for Single-Channel Automatic Sleep Staging with Channel Mismatch},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Many sleep studies suffer from the problem of insufficient data to fully utilize deep neural networks as different labs use different recordings set ups, leading to the need of training automated algorithms on rather small databases, whereas large annotated databases are around but cannot be directly included into these studies for data compensation due to channel mismatch. This work presents a deep transfer learning approach to overcome the channel mismatch problem and transfer knowledge from a large dataset to a small cohort to study automatic sleep staging with single-channel input. We employ the state-of-the-art SeqSleepNet and train the network in the source domain, i.e. the large dataset. Afterwards, the pretrained network is finetuned in the target domain, i.e. the small cohort, to complete knowledge transfer. We study two transfer learning scenarios with slight and heavy channel mismatch between the source and target domains. We also investigate whether, and if so, how finetuning entirely or partially the pretrained network would affect the performance of sleep staging on the target domain. Using the Montreal Archive of Sleep Studies (MASS) database consisting of 200 subjects as the source domain and the Sleep-EDF Expanded database consisting of 20 subjects as the target domain in this study, our experimental results show significant performance improvement on sleep staging achieved with the proposed deep transfer learning approach. 
Furthermore, these results also reveal the essential of finetuning the feature-learning parts of the pretrained network to be able to bypass the channel mismatch problem.},\n  keywords = {learning (artificial intelligence);medical computing;neural nets;sleep;knowledge transfer;slight channel mismatch;heavy channel mismatch;feature-learning parts;channel mismatch problem;deep neural networks;annotated databases;data compensation;single-channel automatic sleep staging;deep transfer learning;sleep-EDF expanded database;Sleep;Databases;Electroencephalography;Electrooculography;Brain modeling;Signal processing;Neural networks;Automatic sleep staging;deep learning;transfer learning;SeqSleepNet},\n  doi = {10.23919/EUSIPCO.2019.8902977},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533805.pdf},\n}\n\n
\n
\n\n\n
\n Many sleep studies suffer from the problem of insufficient data to fully utilize deep neural networks, as different labs use different recording setups, leading to the need to train automated algorithms on rather small databases, whereas large annotated databases exist but cannot be directly included into these studies for data compensation due to channel mismatch. This work presents a deep transfer learning approach to overcome the channel mismatch problem and transfer knowledge from a large dataset to a small cohort to study automatic sleep staging with single-channel input. We employ the state-of-the-art SeqSleepNet and train the network in the source domain, i.e. the large dataset. Afterwards, the pretrained network is finetuned in the target domain, i.e. the small cohort, to complete knowledge transfer. We study two transfer learning scenarios with slight and heavy channel mismatch between the source and target domains. We also investigate whether, and if so, how, finetuning the pretrained network entirely or partially affects the performance of sleep staging on the target domain. Using the Montreal Archive of Sleep Studies (MASS) database consisting of 200 subjects as the source domain and the Sleep-EDF Expanded database consisting of 20 subjects as the target domain in this study, our experimental results show significant performance improvement on sleep staging achieved with the proposed deep transfer learning approach. Furthermore, these results also reveal that finetuning the feature-learning parts of the pretrained network is essential to bypass the channel mismatch problem.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improving Energy Disaggregation Performance Using Appliance-Driven Sampling Rates.\n \n \n \n \n\n\n \n Schirmer, P. A.; and Mporas, I.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902978,\n  author = {P. A. Schirmer and I. Mporas},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Improving Energy Disaggregation Performance Using Appliance-Driven Sampling Rates},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper proposes a new appliance-driven selection of sampling frequencies for improving the energy disaggregation performance in non-intrusive load monitoring. Specifically, the methodology uses a machine learning model with parallel device detectors and optimized device dependent sampling rates in order to improve device identification. The performance of the proposed methodology was evaluated on a state-of-the-art baseline system and a set of publicly available databases increasing performance up to 6.7% in terms of estimation accuracy when compared to the baseline energy disaggregation setup without device dependent sampling rates.},\n  keywords = {domestic appliances;learning (artificial intelligence);power engineering computing;power system measurement;state-of-the-art baseline system;baseline energy disaggregation setup;energy disaggregation performance;appliance-driven sampling rates;appliance-driven selection;sampling frequencies;nonintrusive load monitoring;machine learning model;parallel device detectors;optimized device dependent sampling rates;device identification;Databases;Performance evaluation;Power demand;Feature extraction;Training;Optimized production technology;Signal processing;Non-intrusive load monitoring (NILM);Energy Disaggregation;Device Classification.},\n  doi = {10.23919/EUSIPCO.2019.8902978},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533379.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a new appliance-driven selection of sampling frequencies for improving the energy disaggregation performance in non-intrusive load monitoring. Specifically, the methodology uses a machine learning model with parallel device detectors and optimized device-dependent sampling rates in order to improve device identification. The proposed methodology was evaluated on a state-of-the-art baseline system and a set of publicly available databases, increasing performance by up to 6.7% in terms of estimation accuracy compared to the baseline energy disaggregation setup without device-dependent sampling rates.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Tensor Factorisation and Transfer Learning for Sleep Pose Detection.\n \n \n \n \n\n\n \n Mohammadi, S. M.; Kouchaki, S.; Sanei, S.; Dijk, D. -.; Hilton, A.; and Wells, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"TensorPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902979,\n  author = {S. M. Mohammadi and S. Kouchaki and S. Sanei and D. -J. Dijk and A. Hilton and K. Wells},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Tensor Factorisation and Transfer Learning for Sleep Pose Detection},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this study, a novel hybrid tensor factorisation and deep learning approach has been proposed and implemented for sleep pose identification and classification of twelve different sleep postures. We have applied tensor factorisation to infrared (IR) images of 10 subjects to extract group-level data patterns, undertake dimensionality reduction and reduce occlusion for IR images. Pre-trained VGG-19 neural network has been used to predict the sleep poses under the blanket. Finally, we compared our results with those without the factorisation stage and with CNN network. Our new pose detection method outperformed the methods solely based on VGG-19 and 4-layer CNN network. The average accuracy for 10 volunteers increased from 78.1% and 75.4% to 86.0%.},\n  keywords = {convolutional neural nets;feature extraction;image classification;learning (artificial intelligence);matrix decomposition;pose estimation;sleep;tensors;transfer learning;novel hybrid tensor factorisation;deep learning approach;infrared images;sleep posture classification;IR images;pre-trained VGG-19 neural network;factorisation stage;4-layer CNN network;sleep pose identification;dimensionality reduction;occlusion reduction;group-level data pattern extraction;sleep pose detection method;Tensors;Sleep;Monitoring;Feature extraction;Training;Task analysis;Europe;Tucker decomposition;Transfer learning;Sleep;Pose identification},\n  doi = {10.23919/EUSIPCO.2019.8902979},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533963.pdf},\n}\n\n
\n
\n\n\n
\n In this study, a novel hybrid tensor factorisation and deep learning approach has been proposed and implemented for sleep pose identification and classification of twelve different sleep postures. We applied tensor factorisation to infrared (IR) images of 10 subjects to extract group-level data patterns, undertake dimensionality reduction and reduce occlusion in the IR images. A pre-trained VGG-19 neural network was used to predict the sleep poses under the blanket. Finally, we compared our results with those obtained without the factorisation stage and with a CNN. Our new pose detection method outperformed the methods based solely on VGG-19 and a 4-layer CNN: the average accuracy for 10 volunteers increased from 78.1% and 75.4%, respectively, to 86.0%.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Data Augmentation for Drum Transcription with Convolutional Neural Networks.\n \n \n \n \n\n\n \n Jacques, C.; and Roebel, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DataPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902980,\n  author = {C. Jacques and A. Roebel},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Data Augmentation for Drum Transcription with Convolutional Neural Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {A recurrent issue in deep learning is the scarcity of data, in particular precisely annotated data. Few publicly available databases are correctly annotated and generating correct labels is very time consuming. The present article investigates into data augmentation strategies for Neural Networks training, particularly for tasks related to drum transcription. These tasks need very precise annotations. This article investigates state-of the-art sound transformation algorithms for remixing noise and sinusoidal parts, remixing attacks, transposing with and without time compensation and compares them to basic regularization methods such as using dropout and additive Gaussian noise. And it shows how a drum transcription algorithm based on CNN benefits from the proposed data augmentation strategy.},\n  keywords = {audio signal processing;convolutional neural nets;Gaussian noise;learning (artificial intelligence);musical instruments;time compensation;drum transcription algorithm;data augmentation strategy;convolutional neural networks;deep learning;particular precisely annotated data;publicly available databases;correct labels;article investigates;neural networks training;precise annotations;sinusoidal parts;state-of the-art sound transformation algorithms;Databases;Training;Instruments;Spectrogram;Task analysis;Network topology;Transient analysis;data augmentation;deep learning;drum transcription;convolutional neural network CNN},\n  doi = {10.23919/EUSIPCO.2019.8902980},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533806.pdf},\n}\n\n
\n
\n\n\n
\n A recurrent issue in deep learning is the scarcity of data, in particular of precisely annotated data. Few publicly available databases are correctly annotated, and generating correct labels is very time consuming. The present article investigates data augmentation strategies for neural network training, particularly for tasks related to drum transcription, which need very precise annotations. It examines state-of-the-art sound transformation algorithms for remixing noise and sinusoidal parts, remixing attacks, and transposing with and without time compensation, and compares them to basic regularization methods such as dropout and additive Gaussian noise. It then shows how a CNN-based drum transcription algorithm benefits from the proposed data augmentation strategy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Front-End Feature Compensation for Noise Robust Speech Emotion Recognition.\n \n \n \n \n\n\n \n Pandharipande, M.; Chakraborty, R.; Panda, A.; Das, B.; and Kopparapu, S. K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Front-EndPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902981,\n  author = {M. Pandharipande and R. Chakraborty and A. Panda and B. Das and S. K. Kopparapu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Front-End Feature Compensation for Noise Robust Speech Emotion Recognition},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Robust feature compensation and selection are important aspects of noisy speech emotion recognition (SER) task, especially in mismatched condition, when the models are trained on clean speech and tested in the noisy scenarios. Here we propose the use of front-end feature compensation techniques based on Vector Taylor Series (VTS) expansion and VTS with auditory masking (VTS-AM) to improve the performance of SER systems. On top of VTS and VTS-AM, we compare the performances of log-compression and root-compression to the mel-filter-bank energies. Further, we demonstrate the benefit of feature selection applied to the non-MFCC high-level descriptors in conjunction with VTS, VTS-AM and root compression. The system performance is compared with popular Non-negative Matrix Factorization (NMF) based enhancement and energy based voice activity detector (VAD) technique, which discards silence or noisy frames in the spoken utterances. 
To demonstrate the efficacy of our proposed techniques, extensive experiments are conducted on 2 standard datasets (EmoDB and IEMOCAP), contaminated with 5 types of noise (Babble, F-16, Factory, Volvo, and HF-channel) from the Noisex -92 noise database at 5 SNR levels (0dB, 5dB, 10dB, 15dB and 20dB).},\n  keywords = {emotion recognition;feature extraction;matrix decomposition;signal classification;speech recognition;VTS;root-compression;mel-filter-bank energies;feature selection;system performance;noise robust speech emotion recognition;noisy speech emotion recognition task;clean speech;Vector Taylor Series expansion;SER systems;front-end feature compensation;non-MFCC high-level descriptors;energy based voice activity detector technique;spoken utterances;EmoDB;IEMOCAP;Babble;F-16;Factory;Volvo;HF-channel;Noisex-92 noise database;Feature extraction;Noise measurement;Speech recognition;Emotion recognition;Mel frequency cepstral coefficient;Databases;Psychoacoustic models;Emotion recognition;Noisy speech;Feature compensation;Auditory masking;Vector Taylor Series},\n  doi = {10.23919/EUSIPCO.2019.8902981},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533601.pdf},\n}\n\n
\n
\n\n\n
\n Robust feature compensation and selection are important aspects of the noisy speech emotion recognition (SER) task, especially in mismatched conditions, when the models are trained on clean speech and tested in noisy scenarios. Here we propose the use of front-end feature compensation techniques based on Vector Taylor Series (VTS) expansion and VTS with auditory masking (VTS-AM) to improve the performance of SER systems. On top of VTS and VTS-AM, we compare the performance of log-compression and root-compression of the mel-filter-bank energies. Further, we demonstrate the benefit of feature selection applied to the non-MFCC high-level descriptors in conjunction with VTS, VTS-AM and root compression. The system performance is compared with the popular Non-negative Matrix Factorization (NMF) based enhancement and the energy-based voice activity detector (VAD) technique, which discards silent or noisy frames in the spoken utterances. To demonstrate the efficacy of our proposed techniques, extensive experiments are conducted on 2 standard datasets (EmoDB and IEMOCAP), contaminated with 5 types of noise (Babble, F-16, Factory, Volvo, and HF-channel) from the Noisex-92 noise database at 5 SNR levels (0 dB, 5 dB, 10 dB, 15 dB and 20 dB).\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-Complexity Hybrid Transceivers for Uplink Multiuser mmWave MIMO by User Clustering.\n \n \n \n \n\n\n \n Pérez-Adán, D.; González-Coma, J. P.; Fresnedo, Ó.; and Castedo, L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Low-ComplexityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902982,\n  author = {D. Pérez-Adán and J. P. González-Coma and Ó. Fresnedo and L. Castedo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Low-Complexity Hybrid Transceivers for Uplink Multiuser mmWave MIMO by User Clustering},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The high cost and power consumption of millimeter wave (mmWave) radio frequency (RF) hardware elements demands for advanced signal processing techniques to design MIMO transceivers. Hybrid analog-digital architectures for MIMO transceivers have become an attractive strategy to reduce the number of RF chains of the transceivers. In the uplink of a multiuser mmWave MIMO system, this hardware reduction is limited by the number of users to be handled, which can be rather large. In this work, we propose to use distributed quantizer linear coding (DQLC) in order to superimpose correlated sources in a cluster-based uplink multiuser mmWave MIMO system and reduce the number of RF chains at the receiver. The scheduling policy to cluster the transmitters is also analyzed by considering the correlation of the sources and channel state information (CSI). 
Numerical results show the advantage of employing the proposed scheme to reduce hardware complexity.},\n  keywords = {linear codes;millimetre wave communication;MIMO communication;multi-access systems;pattern clustering;radio transceivers;signal processing;telecommunication scheduling;hardware reduction;distributed quantizer linear coding;RF chains;hardware complexity;low-complexity hybrid transceivers;user clustering;power consumption;MIMO transceiver design;hybrid analog-digital architectures;millimeter wave radio frequency hardware element;advanced signal processing techniques;DQLC;correlated sources;cluster-based uplink multiuser mmWave MIMO system;CSI;transmitters;receiver;Radio frequency;MIMO communication;Receivers;Transceivers;Uplink;Precoding;Millimeter wave;joint source-channel coding;uplink;source correlation;multiuser.},\n  doi = {10.23919/EUSIPCO.2019.8902982},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533902.pdf},\n}\n\n
\n
\n\n\n
\n The high cost and power consumption of millimeter wave (mmWave) radio frequency (RF) hardware elements demand advanced signal processing techniques to design MIMO transceivers. Hybrid analog-digital architectures for MIMO transceivers have become an attractive strategy to reduce the number of RF chains of the transceivers. In the uplink of a multiuser mmWave MIMO system, this hardware reduction is limited by the number of users to be handled, which can be rather large. In this work, we propose to use distributed quantizer linear coding (DQLC) in order to superimpose correlated sources in a cluster-based uplink multiuser mmWave MIMO system and reduce the number of RF chains at the receiver. The scheduling policy to cluster the transmitters is also analyzed by considering the correlation of the sources and channel state information (CSI). Numerical results show the advantage of employing the proposed scheme to reduce hardware complexity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectral-Spatial Classification of Hyperspectral Images Using CNNs and Approximate Sparse Multinomial Logistic Regression.\n \n \n \n \n\n\n \n Kutluk, S.; Kayabol, K.; and Akan, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Spectral-SpatialPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902983,\n  author = {S. Kutluk and K. Kayabol and A. Akan},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Spectral-Spatial Classification of Hyperspectral Images Using CNNs and Approximate Sparse Multinomial Logistic Regression},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We propose a technique for training convolutional neural networks (CNNs) in which the convolutional layers are trained using a gradient descent based method and the classification layer is trained using a second order method called approximate sparse multinomial logistic regression (ASMLR) which also provides a spatial smoothing procedure that increases the classification accuracy for hyperspectral images. ASMLR performs well on hyperspectral images, and CNNs are known to give good results in many applications such as image classification and object recognition. Thus, the proposed technique allows us to improve the performance of CNNs by training the whole network with an end-to-end framework. This approach takes advantage of convolutional layers for spectral feature extraction, and of the softmax classification layer for feature selection with sparsity constraints, and an intrinsic learning rate adjustment mechanism. In classification, we also use a spatial smoothing method. 
The proposed method was evaluated on two hyperspectral images for spectral-spatial land cover classification, and the results have shown that it outperforms the CNN and the ASMLR classifiers when they are used separately.},\n  keywords = {convolutional neural nets;feature extraction;geophysical image processing;gradient methods;image classification;land cover;learning (artificial intelligence);object recognition;regression analysis;smoothing methods;classification accuracy;hyperspectral images;ASMLR;CNNs;image classification;convolutional layers;spectral feature extraction;softmax classification layer;spatial smoothing method;spectral-spatial land cover classification;approximate sparse multinomial logistic regression;training convolutional neural networks;gradient descent based method;second order method;spatial smoothing procedure;Training;Hyperspectral imaging;Feature extraction;Convolution;Convolutional neural networks;Smoothing methods;hyperspectral image classification;remote sensing;deep learning;convolutional neural networks;logistic regression},\n  doi = {10.23919/EUSIPCO.2019.8902983},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533478.pdf},\n}\n\n
\n
\n\n\n
\n We propose a technique for training convolutional neural networks (CNNs) in which the convolutional layers are trained using a gradient-descent-based method and the classification layer is trained using a second-order method called approximate sparse multinomial logistic regression (ASMLR), which also provides a spatial smoothing procedure that increases the classification accuracy for hyperspectral images. ASMLR performs well on hyperspectral images, and CNNs are known to give good results in many applications such as image classification and object recognition. Thus, the proposed technique allows us to improve the performance of CNNs by training the whole network with an end-to-end framework. This approach takes advantage of the convolutional layers for spectral feature extraction, of the softmax classification layer for feature selection with sparsity constraints, and of an intrinsic learning rate adjustment mechanism. In classification, we also use a spatial smoothing method. The proposed method was evaluated on two hyperspectral images for spectral-spatial land cover classification, and the results have shown that it outperforms the CNN and the ASMLR classifiers when they are used separately.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-point Connectivity for Reliable mmWave.\n \n \n \n \n\n\n \n Kumar, D.; Kaleva, J.; and Tölli, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-pointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902984,\n  author = {D. Kumar and J. Kaleva and A. Tölli},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-point Connectivity for Reliable mmWave},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The fundamental challenge for mmWave communication is the sensitivity of mmWave signals to the blockage, which gives rise to unstable connectivity and impacts the reliability of a system. In this paper, we explore the viability of using coordinated multi-point connectivity, which facilitates multi-user precoding across spatially distributed transmitters, to ensure reliable connectivity even if one or more dominant links are under blockage. We provide successive convex approximation based algorithms for weighted sum-rate maximization and minimum user-rate maximization problems while considering the effects of random link blockage. For the downlink precoder design, a conservative estimate of the available rate per user is computed over all subset combinations of potentially blocked links. The trade-off between achievable sum-rate and reliable connectivity is illustrated via numerical examples. 
In the presence of random link blockage, the outage performance and effective throughput of the proposed transmit precoder design significantly outperforms several baseline scenarios, and results in more stable connectivity for highly reliable communication.},\n  keywords = {approximation theory;channel coding;convex programming;MIMO communication;optimisation;precoding;telecommunication network reliability;wireless channels;mmWave communication;mmWave signals;unstable connectivity;coordinated multipoint connectivity;multiuser precoding;spatially distributed transmitters;reliable connectivity;dominant links;successive convex approximation;weighted sum-rate maximization;minimum user-rate maximization problems;random link blockage;downlink precoder design;available rate;potentially blocked links;achievable sum-rate;transmit precoder design;stable connectivity;highly reliable communication;Interference;Signal to noise ratio;Downlink;Transmitters;Reliability engineering;Receivers},\n  doi = {10.23919/EUSIPCO.2019.8902984},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534119.pdf},\n}\n\n
\n
\n\n\n
\n The fundamental challenge for mmWave communication is the sensitivity of mmWave signals to blockage, which gives rise to unstable connectivity and impacts the reliability of the system. In this paper, we explore the viability of using coordinated multi-point connectivity, which facilitates multi-user precoding across spatially distributed transmitters, to ensure reliable connectivity even if one or more dominant links are under blockage. We provide successive convex approximation based algorithms for weighted sum-rate maximization and minimum user-rate maximization problems while considering the effects of random link blockage. For the downlink precoder design, a conservative estimate of the available rate per user is computed over all subset combinations of potentially blocked links. The trade-off between achievable sum-rate and reliable connectivity is illustrated via numerical examples. In the presence of random link blockage, the outage performance and effective throughput of the proposed transmit precoder design significantly outperform several baseline scenarios and result in more stable connectivity for highly reliable communication.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Replay Attack Detection Using Generalized Cross-Correlation of Stereo Signal.\n \n \n \n \n\n\n \n Yaguchi, R.; Shiota, S.; Ono, N.; and Kiya, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ReplayPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902985,\n  author = {R. Yaguchi and S. Shiota and N. Ono and H. Kiya},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Replay Attack Detection Using Generalized Cross-Correlation of Stereo Signal},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose a replay attack detection method using the generalized cross-correlation (GCC) of a stereo signal for automatic speaker verification. In particular, this method focuses on a specific replay attack characteristics when speech is not active. In a genuine speaker case, when speech is not active, the maximum value of GCC is low since surrounding noise arrives from any direction. In contrast, in a replay attack case, even when the played speech is not active, the maximum value of GCC is high since recorded noise or electromagnetic noise is played by a loudspeaker for replay attack. Based on this assumption, two approaches of replay attack detection are introduced. One is to use the minimum value of GCC in short pauses. The other one is to use the average value of GCC in silent periods before the start point and after the end point of a target utterance. In experiments, it is confirmed that the proposed methods achieve low error rates without environmental restrictions.},\n  keywords = {correlation methods;security of data;signal denoising;speaker recognition;generalized cross-correlation;stereo signal;replay attack detection method;GCC;automatic speaker verification;electromagnetic noise;loudspeaker;Microphones;Loudspeakers;Electromagnetic interference;Noise measurement;Trajectory;Europe;Signal processing;automatic speaker verification;spoofing countermeasure;generalized cross correlation;replay attack detection},\n  doi = {10.23919/EUSIPCO.2019.8902985},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533317.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a replay attack detection method using the generalized cross-correlation (GCC) of a stereo signal for automatic speaker verification. In particular, this method focuses on specific replay attack characteristics when speech is not active. In a genuine speaker case, when speech is not active, the maximum value of GCC is low since surrounding noise arrives from any direction. In contrast, in a replay attack case, even when the played speech is not active, the maximum value of GCC is high since recorded noise or electromagnetic noise is played by a loudspeaker for the replay attack. Based on this assumption, two approaches to replay attack detection are introduced. One is to use the minimum value of GCC in short pauses. The other is to use the average value of GCC in silent periods before the start point and after the end point of a target utterance. In experiments, it is confirmed that the proposed methods achieve low error rates without environmental restrictions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spatial Inference in Sensor Networks using Multiple Hypothesis Testing and Bayesian Clustering.\n \n \n \n \n\n\n \n Gölz, M.; Muma, M.; Halme, T.; Zoubir, A.; and Koivunen, V.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SpatialPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902986,\n  author = {M. Gölz and M. Muma and T. Halme and A. Zoubir and V. Koivunen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Spatial Inference in Sensor Networks using Multiple Hypothesis Testing and Bayesian Clustering},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The problem of statistical inference in large-scale sensor networks observing spatially varying fields is addressed. A method based on multiple hypothesis testing and Bayesian clustering is proposed. The method identifies homogeneous regions in a field based on similarity in decision statistics and locations of the sensors. High detection power is achieved while keeping false positives at a tolerable level. A variant of the EM-algorithm is employed to associate sensors with clusters. The performance of the method is studied in simulation using different detection theoretic criteria.},\n  keywords = {Bayes methods;decision theory;expectation-maximisation algorithm;pattern clustering;statistical analysis;wireless sensor networks;Bayesian clustering;homogeneous regions;decision statistics;high detection power;spatial inference;multiple hypothesis testing;statistical inference problem;large-scale sensor networks;EM-algorithm;detection theoretic criteria;Data models;Bayes methods;Clustering algorithms;Probability density function;Signal processing algorithms;Shape;Signal processing;IoT;p-values;Distributed Inference;Statistical Signal Processing;Large-Scale Sensor Networks;BIC},\n  doi = {10.23919/EUSIPCO.2019.8902986},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532052.pdf},\n}\n\n
\n
\n\n\n
\n The problem of statistical inference in large-scale sensor networks observing spatially varying fields is addressed. A method based on multiple hypothesis testing and Bayesian clustering is proposed. The method identifies homogeneous regions in a field based on similarity in decision statistics and locations of the sensors. High detection power is achieved while keeping false positives at a tolerable level. A variant of the EM-algorithm is employed to associate sensors with clusters. The performance of the method is studied in simulation using different detection theoretic criteria.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Semantic Prior Based Generative Adversarial Network for Video Super-Resolution.\n \n \n \n \n\n\n \n Wu, X.; Lucas, A.; Lopez-Tapia, S.; Wang, X.; Kim, Y. H.; Molina, R.; and Katsaggelos, A. K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SemanticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902987,\n  author = {X. Wu and A. Lucas and S. Lopez-Tapia and X. Wang and Y. H. Kim and R. Molina and A. K. Katsaggelos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Semantic Prior Based Generative Adversarial Network for Video Super-Resolution},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Semantic information is widely used in the deep learning literature to improve the performance of visual media processing. In this work, we propose a semantic prior based Generative Adversarial Network (GAN) model for video super-resolution. The model fully utilizes various texture styles from different semantic categories of video-frame patches, contributing to more accurate and efficient learning for the generator. Based on the GAN framework, we introduce the semantic prior by making use of the spatial feature transform during the learning process of the generator. The patch-wise semantic prior is extracted on the whole video frame by a semantic segmentation network. A hybrid loss function is designed to guide the learning performance. 
Experimental results show that our proposed model is advantageous in sharpening video frames, reducing noise and artifacts, and recovering realistic textures.},\n  keywords = {image resolution;image segmentation;image texture;learning (artificial intelligence);video signal processing;video super-resolution;semantic information;deep learning;visual media processing;video-frame patches;semantic segmentation network;semantic prior based generative adversarial network model;Semantics;Training;Generators;Gallium nitride;Generative adversarial networks;Transforms;Image segmentation;Video Super-Resolution;Generative Adversarial Networks;Semantic Segmentation;Spatial Feature Transform;Hybrid loss function},\n  doi = {10.23919/EUSIPCO.2019.8902987},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534014.pdf},\n}\n\n
\n
\n\n\n
\n Semantic information is widely used in the deep learning literature to improve the performance of visual media processing. In this work, we propose a semantic prior based Generative Adversarial Network (GAN) model for video super-resolution. The model fully utilizes various texture styles from different semantic categories of video-frame patches, contributing to more accurate and efficient learning for the generator. Based on the GAN framework, we introduce the semantic prior by making use of the spatial feature transform during the learning process of the generator. The patch-wise semantic prior is extracted on the whole video frame by a semantic segmentation network. A hybrid loss function is designed to guide the learning performance. Experimental results show that our proposed model is advantageous in sharpening video frames, reducing noise and artifacts, and recovering realistic textures.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An EM Algorithm for Joint Dual-Speaker Separation and Dereverberation.\n \n \n \n \n\n\n \n Cohen, N.; Hazan, G.; Schwartz, B.; and Gannot, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902988,\n  author = {N. Cohen and G. Hazan and B. Schwartz and S. Gannot},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {An EM Algorithm for Joint Dual-Speaker Separation and Dereverberation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The scenario of a mixture of two speakers captured by a microphone array in a noisy and reverberant environment is considered. If the problems of source separation and dereverberation are treated separately, performance degradation may result. It is well-known that the performance of blind source separation (BSS) algorithms degrades in the presence of reverberation, unless reverberation effects are properly addressed (leading to the so-called convolutive BSS algorithms). Similarly, the performance of common dereverberation algorithms will severely degrade if an interference signal is also captured by the same microphone array. The aim of the proposed method is to jointly separate and dereverberate the two speech sources, by extending the Kalman expectation-maximization for dereverberation (KEMD) algorithm, previously proposed by the authors. A statistical model is attributed to this scenario, using the convolutive transfer function (CTF) approximation, and the expectation-maximization (EM) scheme is applied to obtain a maximum likelihood (ML) estimate of the parameters. In the expectation step, the separated clean signals are extracted from the observed data by the application of a Kalman Filter, utilizing the parameters that were estimated in the previous iteration. The maximization step updates the parameters estimation according to the Estep output. 
Simulation results shows that the proposed method improves both the separation of the signals and their overall quality.},\n  keywords = {blind source separation;convolution;expectation-maximisation algorithm;Kalman filters;microphone arrays;reverberation;speaker recognition;speech processing;transfer functions;expectation-maximization scheme;expectation step;separated clean signals;EM algorithm;joint dual-speaker separation;microphone array;noisy environment;reverberant environment;performance degradation;blind source separation algorithms degrades;reverberation effects;convolutive BSS algorithms;common dereverberation algorithms;interference signal;speech sources;Kalman expectation-maximization;dereverberation algorithm;convolutive transfer function approximation;Kalman filters;Signal processing algorithms;Microphones;Reverberation;Europe;Blind source separation;Array processing;blind source separation;dereverberation;expectation-maximization;convolution in STFT},\n  doi = {10.23919/EUSIPCO.2019.8902988},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531645.pdf},\n}\n\n
\n
\n\n\n
\n The scenario of a mixture of two speakers captured by a microphone array in a noisy and reverberant environment is considered. If the problems of source separation and dereverberation are treated separately, performance degradation may result. It is well-known that the performance of blind source separation (BSS) algorithms degrades in the presence of reverberation, unless reverberation effects are properly addressed (leading to the so-called convolutive BSS algorithms). Similarly, the performance of common dereverberation algorithms will severely degrade if an interference signal is also captured by the same microphone array. The aim of the proposed method is to jointly separate and dereverberate the two speech sources, by extending the Kalman expectation-maximization for dereverberation (KEMD) algorithm, previously proposed by the authors. A statistical model is attributed to this scenario, using the convolutive transfer function (CTF) approximation, and the expectation-maximization (EM) scheme is applied to obtain a maximum likelihood (ML) estimate of the parameters. In the expectation step, the separated clean signals are extracted from the observed data by the application of a Kalman filter, utilizing the parameters that were estimated in the previous iteration. The maximization step updates the parameter estimates according to the E-step output. Simulation results show that the proposed method improves both the separation of the signals and their overall quality.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Two-Channel Passive Detection Exploiting Cyclostationarity.\n \n \n \n \n\n\n \n Horstmann, S.; Ramírez, D.; and Schreier, P. J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Two-ChannelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902989,\n  author = {S. Horstmann and D. Ramírez and P. J. Schreier},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Two-Channel Passive Detection Exploiting Cyclostationarity},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper addresses a two-channel passive detection problem exploiting cyclostationarity. Given a reference channel (RC) and a surveillance channel (SC), the goal is to detect a target echo present at the surveillance array transmitted by an illuminator of opportunity equipped with multiple antennas. Since common transmission signals are cyclostationary, we exploit this information at the detector. Specifically, we derive an asymptotic generalized likelihood ratio test (GLRT) to detect the presence of a cyclostationary signal at the SC given observations from RC and SC. This detector tests for different covariance structures. Simulation results show good performance of the proposed detector compared to competing techniques that do not exploit cyclostationarity.},\n  keywords = {maximum likelihood estimation;passive radar;radar detection;radar signal processing;signal detection;two-channel passive detection problem;reference channel;surveillance channel;target echo present;surveillance array;common transmission signals;asymptotic generalized likelihood ratio test;cyclostationary signal;channel passive detection;Covariance matrices;Maximum likelihood estimation;Surveillance;Detectors;Europe;Passive radar;Signal processing;Cyclostationarity;generalized likelihood ratio test (GLRT);multiple-input multiple-output (MIMO) passive detection},\n  doi = {10.23919/EUSIPCO.2019.8902989},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529267.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses a two-channel passive detection problem exploiting cyclostationarity. Given a reference channel (RC) and a surveillance channel (SC), the goal is to detect a target echo at the surveillance array, transmitted by an illuminator of opportunity equipped with multiple antennas. Since common transmission signals are cyclostationary, we exploit this information at the detector. Specifically, we derive an asymptotic generalized likelihood ratio test (GLRT) to detect the presence of a cyclostationary signal at the SC given observations from the RC and SC. This detector tests for different covariance structures. Simulation results show good performance of the proposed detector compared to competing techniques that do not exploit cyclostationarity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Deep Spatio-Temporal Modeling for Object-Level Gaze-Based Relevance Assessment.\n \n \n \n \n\n\n \n Stavridis, K.; Psaltis, A.; Dimou, A.; Papadopoulos, G. T.; and Daras, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902990,\n  author = {K. Stavridis and A. Psaltis and A. Dimou and G. T. Papadopoulos and P. Daras},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep Spatio-Temporal Modeling for Object-Level Gaze-Based Relevance Assessment},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The current work investigates the problem of objectlevel relevance assessment prediction, taking into account the user's captured gaze signal (behaviour) and following the Deep Learning (DL) paradigm. Human gaze, as a sub-conscious response, is influenced from several factors related to the human mental activity. Several studies have so far proposed methodologies based on the use of gaze statistical modeling and naive classifiers for assessing images or image patches as relevant or not to the user's interests. Nevertheless, the outstanding majority of literature approaches only relied so far on the use of handcrafted features and relative simple classification schemes. On the contrary, the current work focuses on the use of DL schemes that will enable the modeling of complex patterns in the captured gaze signal and the subsequent derivation of corresponding discriminant features. Novel contributions of this study include: a) the introduction of a large-scale annotated gaze dataset, suitable for training DL models, b) a novel method for gaze modeling, capable of handling gaze sensor errors, and c) a DL based method, able to capture gaze patterns for assessing image objects as relevant or non-relevant, with respect to the user's preferences. 
Extensive experiments demonstrate the efficiency of the proposed method, taking also into consideration key factors related to the human gaze behaviour.},\n  keywords = {gaze tracking;image classification;learning (artificial intelligence);neural nets;statistical analysis;gaze statistical modeling;image patches;large-scale annotated gaze dataset;DL models;gaze modeling;gaze sensor errors;gaze patterns;human gaze behaviour;object-level gaze-based relevance assessment;human mental activity;gaze signal;deep spatio-temporal modeling;object-level relevance assessment prediction;Task analysis;Predictive models;Search problems;Context modeling;Visualization;Monitoring;Europe;Gaze modeling;DL;relevance assessment},\n  doi = {10.23919/EUSIPCO.2019.8902990},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529318.pdf},\n}\n\n
\n
\n\n\n
\n The current work investigates the problem of object-level relevance assessment prediction, taking into account the user's captured gaze signal (behaviour) and following the Deep Learning (DL) paradigm. Human gaze, as a sub-conscious response, is influenced by several factors related to human mental activity. Several studies have so far proposed methodologies based on the use of gaze statistical modeling and naive classifiers for assessing images or image patches as relevant or not to the user's interests. Nevertheless, the vast majority of literature approaches have so far relied only on the use of handcrafted features and relatively simple classification schemes. In contrast, the current work focuses on the use of DL schemes that enable the modeling of complex patterns in the captured gaze signal and the subsequent derivation of corresponding discriminant features. Novel contributions of this study include: a) the introduction of a large-scale annotated gaze dataset, suitable for training DL models, b) a novel method for gaze modeling, capable of handling gaze sensor errors, and c) a DL based method, able to capture gaze patterns for assessing image objects as relevant or non-relevant, with respect to the user's preferences. Extensive experiments demonstrate the efficiency of the proposed method, also taking into consideration key factors related to human gaze behaviour.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Online dictionary learning for single-subject fMRI data unmixing.\n \n \n \n \n\n\n \n Bhanot, A.; Meillier, C.; Heitz, F.; and Harsan, L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OnlinePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902991,\n  author = {A. Bhanot and C. Meillier and F. Heitz and L. Harsan},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Online dictionary learning for single-subject fMRI data unmixing},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Independent component analysis (ICA) and dictionary learning (DL) methods are widely used to analyse resting state functional Magnetic Resonance Imaging (rs-fMRI) in multi-subject studies. These methods aim at decomposing the multi-subject data into common spatial abundance maps and their related temporal signatures. We are interested here in such a decomposition for a single-subject rs-fMRI dataset. The above-mentioned methods often fail in this case because the problem becomes too ill-posed, requiring the use of additional prior information and the design of novel regularising constraints. The poor resolution of rs-fMRI data is an additional source of difficulty, yielding noisy and blurry spatial maps. In this paper, we propose a new DL formulation adapted to the unique subject by integrating high-resolution (HR) spatial information to constrain single-subject data unmixing. HR information is provided by the registration of an anatomical atlas on the data set. 
We show on a quasi-real dataset from mice, the benefit of using an HR spatial segmentation map in the decomposition of low-resolution rs-fMRI.},\n  keywords = {biomedical MRI;brain;image segmentation;independent component analysis;medical image processing;neurophysiology;functional magnetic resonance imaging;multisubject data;related temporal signatures;single-subject rs-fMRI dataset;noisy maps;blurry spatial maps;high-resolution spatial information;HR spatial segmentation map;low-resolution rs-fMRI;spatial abundance maps;online dictionary learning;single-subject fMRI data unmixing;independent component analysis;Functional magnetic resonance imaging;Spatial resolution;Signal processing algorithms;Estimation;Mice;Sparse matrices;Dictionary Learning;resting state fMRI;single-subject rs-fMRI unmixing;high-resolution anatomical atlas},\n  doi = {10.23919/EUSIPCO.2019.8902991},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532678.pdf},\n}\n\n
\n
\n\n\n
\n Independent component analysis (ICA) and dictionary learning (DL) methods are widely used to analyse resting-state functional Magnetic Resonance Imaging (rs-fMRI) in multi-subject studies. These methods aim at decomposing the multi-subject data into common spatial abundance maps and their related temporal signatures. We are interested here in such a decomposition for a single-subject rs-fMRI dataset. The above-mentioned methods often fail in this case because the problem becomes too ill-posed, requiring the use of additional prior information and the design of novel regularising constraints. The poor resolution of rs-fMRI data is an additional source of difficulty, yielding noisy and blurry spatial maps. In this paper, we propose a new DL formulation adapted to the single subject by integrating high-resolution (HR) spatial information to constrain single-subject data unmixing. HR information is provided by the registration of an anatomical atlas onto the dataset. We show, on a quasi-real dataset from mice, the benefit of using an HR spatial segmentation map in the decomposition of low-resolution rs-fMRI.\n
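The X ≈ DA unmixing this abstract builds on can be sketched as a toy alternating least-squares factorisation. This is a minimal illustration assuming a plain unconstrained model; the HR atlas regularisation that is the paper's actual contribution, and the online aspect, are omitted, and the function and variable names are ours, not the authors'.

```python
import numpy as np

def unmix(X, k, iters=10, rng=np.random.default_rng(0)):
    """Decompose X (time x voxels) into temporal signatures D (time x k)
    and spatial abundance maps A (k x voxels) by alternating least squares."""
    D = rng.standard_normal((X.shape[0], k))
    for _ in range(iters):
        A = np.linalg.lstsq(D, X, rcond=None)[0]        # fix D, solve for A
        D = np.linalg.lstsq(A.T, X.T, rcond=None)[0].T  # fix A, solve for D
    return D, A
```

On exactly low-rank, noiseless data a single sweep already recovers the product DA, which is why alternating schemes like this are a common starting point before regularising constraints are added.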
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Single Image Haze Removal Using Conditional Wasserstein Generative Adversarial Networks.\n \n \n \n \n\n\n \n Ebenezer, J. P.; Das, B.; and Mukhopadhyay, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SinglePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902992,\n  author = {J. P. Ebenezer and B. Das and S. Mukhopadhyay},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Single Image Haze Removal Using Conditional Wasserstein Generative Adversarial Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We present a method to restore a clear image from a haze-affected image using a Wasserstein generative adversarial network. As the problem is ill-conditioned, previous methods have required a prior on natural images or multiple images of the same scene. We train a generative adversarial network to learn the probability distribution of clear images conditioned on the haze-affected images using the Wasserstein loss function, using a gradient penalty to enforce the Lipschitz constraint. The method is data-adaptive, end-to-end, and requires no further processing or tuning of parameters. We also incorporate the use of a texturebased loss metric and the L1 loss to improve results, and show that our results are better than the current state-of-the-art.},\n  keywords = {image classification;image enhancement;image restoration;image texture;learning (artificial intelligence);neural nets;single image haze removal;conditional Wasserstein generative adversarial networks;haze-affected image;natural images;Wasserstein loss function;texture-based loss metric;Lipschitz constraint;Generative adversarial networks;Training;Generators;Gallium nitride;Linear programming;Probability distribution;Europe},\n  doi = {10.23919/EUSIPCO.2019.8902992},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533261.pdf},\n}\n\n
\n
\n\n\n
\n We present a method to restore a clear image from a haze-affected image using a Wasserstein generative adversarial network. As the problem is ill-conditioned, previous methods have required a prior on natural images or multiple images of the same scene. We train a generative adversarial network to learn the probability distribution of clear images conditioned on the haze-affected images using the Wasserstein loss function, with a gradient penalty to enforce the Lipschitz constraint. The method is data-adaptive, end-to-end, and requires no further processing or tuning of parameters. We also incorporate a texture-based loss metric and the L1 loss to improve results, and show that our results are better than the current state of the art.\n
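As a minimal sketch of the gradient-penalised Wasserstein critic objective the abstract mentions, the following uses a toy linear critic f(x) = w·x so that the input gradient required by the penalty is available in closed form (∇ₓf = w). It is an illustration of the loss shape only, not the authors' network; all names are ours.

```python
import numpy as np

def critic(x, w):
    # Toy linear critic: one score per sample
    return x @ w

def wgan_gp_critic_loss(real, fake, w, lam=10.0, rng=np.random.default_rng(0)):
    # Wasserstein term the critic minimises: E[f(fake)] - E[f(real)]
    wass = critic(fake, w).mean() - critic(real, w).mean()
    # Gradient penalty on random interpolates between real and fake samples
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1 - eps) * fake
    grad = np.tile(w, (x_hat.shape[0], 1))  # grad_x f(x) = w for a linear critic
    gp = ((np.linalg.norm(grad, axis=1) - 1.0) ** 2).mean()
    return wass + lam * gp
```

With a unit-norm w the penalty vanishes and the loss reduces to the Wasserstein term, which is exactly the Lipschitz-1 behaviour the penalty is meant to encourage.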
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Random Forest on an Embedded Device for Real-time Machine State Classification.\n \n \n \n \n\n\n \n Küppers, F.; Albers, J.; and Haselhoff, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RandomPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902993,\n  author = {F. Küppers and J. Albers and A. Haselhoff},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Random Forest on an Embedded Device for Real-time Machine State Classification},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Heavy machine tools are used in numerous industrial manufacturing processes. Avoiding unplanned maintenance time is crucial and can be achieved by continuous condition monitoring. A more targeted condition monitoring is possible if the operating state of a machine tool is known (e.g. standstill, neutral or cut). However, many manufacturers of control units do not release any information about the machine's operating state. For this reason, we investigated a system-independent approach to determine the operating state in real-time in less than 1 ms. This low runtime is necessary for machine state classification. Since most machine tools vary in individual components, the proposed state detection uses learning algorithms based on the machine's vibration characteristics. In this work we propose the Random Forest algorithm because of its reliable classification performance while keeping explainability of each prediction. Facing the real-time requirements, the Random Forest model has been adapted to an embedded device with very limited resources. Thus, the model prediction has to work with low resource consumption in a very low runtime and without loss in accuracy. 
We show that this approach is suitable for determining machine operating states in real-time in less than 700 μs with an average accuracy of 96 %.},\n  keywords = {condition monitoring;embedded systems;machine tools;maintenance engineering;production engineering computing;random forests;real-time systems;state estimation;vibrational signal processing;vibrations;embedded device;real-time machine state classification;heavy machine tools;industrial manufacturing process;condition monitoring;random forest algorithm;learning algorithm;vibration characteristics;Random forests;Vibrations;Real-time systems;Decision trees;Vegetation;Training;Condition monitoring;Random Forest;Machine Tool;Classification;State Estimation;Real-Time},\n  doi = {10.23919/EUSIPCO.2019.8902993},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528381.pdf},\n}\n\n
\n
\n\n\n
\n Heavy machine tools are used in numerous industrial manufacturing processes. Avoiding unplanned maintenance time is crucial and can be achieved by continuous condition monitoring. A more targeted condition monitoring is possible if the operating state of a machine tool is known (e.g. standstill, neutral or cut). However, many manufacturers of control units do not release any information about the machine's operating state. For this reason, we investigated a system-independent approach to determine the operating state in real time in less than 1 ms, the low runtime necessary for machine state classification. Since most machine tools vary in individual components, the proposed state detection uses learning algorithms based on the machine's vibration characteristics. In this work we propose the Random Forest algorithm because of its reliable classification performance and the explainability of each prediction. To meet the real-time requirements, the Random Forest model has been adapted to an embedded device with very limited resources; the model prediction has to work with low resource consumption, in a very low runtime, and without loss in accuracy. We show that this approach is suitable for determining machine operating states in real time in less than 700 μs with an average accuracy of 96%.\n
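A common way to fit a trained decision tree into an embedded device, in the spirit of the adaptation this abstract describes, is to flatten each tree into parallel arrays and traverse them iteratively. The layout below is our assumption for illustration, not the authors' implementation.

```python
def predict_tree(feature, threshold, left, right, value, x):
    """Traverse a tree stored as parallel arrays.
    Internal nodes test x[feature[node]] <= threshold[node];
    leaf nodes are marked with feature[node] == -1."""
    node = 0
    while feature[node] != -1:
        node = left[node] if x[feature[node]] <= threshold[node] else right[node]
    return value[node]
```

A forest prediction is then just a vote over such per-tree traversals; the array layout avoids recursion and heap allocation, which matters on resource-limited targets.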
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multiscale Permutation Entropy: Statistical Characterization on Autoregressive and Moving Average Processes.\n \n \n \n \n\n\n \n Dàvalos, A.; Jabloun, M.; Ravier, P.; and Buttelli, O.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"MultiscalePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902994,\n  author = {A. {Dàvalos} and M. Jabloun and P. Ravier and O. Buttelli},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Multiscale Permutation Entropy: Statistical Characterization on Autoregressive and Moving Average Processes},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Multiscale Permutation Entropy (MPE), an extension of Permutation Entropy (PE), was proposed to better capture the information content in long range trends. This technique has been extensively used in biomedical applications for diagnosis purposes. Although PE theory is well established and explored, there is still a lack of theoretical development for MPE. In the present paper, we expand the theory by formulating an explicit MPE model of first order Autoregressive (AR) and Moving Average (MA) processes, which are well known and used in signal modeling. We first build the autocorrelation function of coarse-grained AR and MA models, which are a prerequisite for MPE calculation. Next, we use the resulting autocorrelation functions to establish the theoretical value of MPE as a function of time scale and AR or MA parameters. The theoretical result is tested against MPE measurements from simulations. We found the MPE of the 1o order AR model to converge to the maximum entropy with increasing time scale. Nonetheless, the convergence is not always monotonic. For AR parameter values greater than the Golden Ratio, the MPE curve presents a local minimum at a time scale different than one, which implies a more regular structure than the one measured with PE. 
The MPE of the 1o order MA model converges rapidly to the maximum entropy with increasing time scales, regardless of the MA parameter value, which is in accordance to our expectations.},\n  keywords = {autoregressive moving average processes;correlation methods;entropy;signal processing;maximum entropy;multiscale permutation entropy;signal modeling;autocorrelation function;order AR model;MPE curve measurements;MA parameter models;statistical characterization;moving average processes;autoregressive processes;order MA model;biomedical applications;diagnosis purposes;Entropy;Correlation;Biological system modeling;Signal processing;Biomedical measurement;Europe;Indexes;Autoregressive model;Moving Average model;Multiscale Permutation Entropy;Coarse-graining procedure},\n  doi = {10.23919/EUSIPCO.2019.8902994},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533150.pdf},\n}\n\n
\n
\n\n\n
\n Multiscale Permutation Entropy (MPE), an extension of Permutation Entropy (PE), was proposed to better capture the information content in long-range trends. This technique has been extensively used in biomedical applications for diagnosis purposes. Although PE theory is well established and explored, there is still a lack of theoretical development for MPE. In the present paper, we expand the theory by formulating an explicit MPE model of first-order Autoregressive (AR) and Moving Average (MA) processes, which are well known and widely used in signal modeling. We first build the autocorrelation function of coarse-grained AR and MA models, which is a prerequisite for MPE calculation. Next, we use the resulting autocorrelation functions to establish the theoretical value of MPE as a function of time scale and AR or MA parameters. The theoretical result is tested against MPE measurements from simulations. We found the MPE of the first-order AR model to converge to the maximum entropy with increasing time scale. Nonetheless, the convergence is not always monotonic. For AR parameter values greater than the Golden Ratio, the MPE curve presents a local minimum at a time scale other than one, which implies a more regular structure than the one measured with PE. The MPE of the first-order MA model converges rapidly to the maximum entropy with increasing time scales, regardless of the MA parameter value, which is in accordance with our expectations.\n
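The MPE measurement the abstract tests against can be sketched as standard permutation entropy applied to coarse-grained (non-overlapping averaged) copies of the signal. This is a generic textbook sketch of PE/MPE, not the authors' code; parameter names are ours.

```python
import math
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Shannon entropy of ordinal patterns of length m at lag tau."""
    counts = {}
    for i in range(len(x) - (m - 1) * tau):
        pattern = tuple(np.argsort(x[i:i + m * tau:tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def mpe(x, m=3, scales=(1, 2, 3)):
    """Multiscale PE: coarse-grain by non-overlapping averaging, then PE per scale."""
    out = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s]).reshape(n, s).mean(axis=1)
        out.append(permutation_entropy(coarse, m))
    return out
```

A strictly monotone series produces a single ordinal pattern and hence zero entropy at every scale, which is a quick sanity check on the coarse-graining step.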
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Gated Graph Convolutional Recurrent Neural Networks.\n \n \n \n \n\n\n \n Ruiz, L.; Gama, F.; and Ribeiro, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"GatedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902995,\n  author = {L. Ruiz and F. Gama and A. Ribeiro},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Gated Graph Convolutional Recurrent Neural Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Graph processes model a number of important problems such as identifying the epicenter of an earthquake or predicting weather. In this paper, we propose a Graph Convolutional Recurrent Neural Network (GCRNN) architecture specifically tailored to deal with these problems. GCRNNs use convolutional filter banks to keep the number of trainable parameters independent of the size of the graph and of the time sequences considered. We also put forward Gated GCRNNs, a time-gated variation of GCRNNs akin to LSTMs. When compared with GNNs and another graph recurrent architecture in experiments using both synthetic and real-word data, GCRNNs significantly improve performance while using considerably less parameters.},\n  keywords = {channel bank filters;convolutional neural nets;graph theory;neural net architecture;recurrent neural nets;convolutional filter banks;earthquake;graph convolutional recurrent neural network architecture;epicenter;LSTMs;gated GCRNNs;Recurrent neural networks;Logic gates;Convolution;Transforms;Computer architecture;Convolutional neural networks;Data models;graph neural networks;recurrent neural networks;gating;graph processes},\n  doi = {10.23919/EUSIPCO.2019.8902995},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533746.pdf},\n}\n\n
\n
\n\n\n
\n Graph processes model a number of important problems such as identifying the epicenter of an earthquake or predicting weather. In this paper, we propose a Graph Convolutional Recurrent Neural Network (GCRNN) architecture specifically tailored to deal with these problems. GCRNNs use convolutional filter banks to keep the number of trainable parameters independent of the size of the graph and of the time sequences considered. We also put forward Gated GCRNNs, a time-gated variation of GCRNNs akin to LSTMs. When compared with GNNs and another graph recurrent architecture in experiments using both synthetic and real-world data, GCRNNs significantly improve performance while using considerably fewer parameters.\n
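The convolutional filter banks the abstract refers to are typically polynomials in a graph shift operator S: the number of filter taps is fixed regardless of the graph size, which is the parameter-saving property exploited here. The sketch below is our illustration of that building block, not the authors' full recurrent architecture.

```python
import numpy as np

def graph_filter(S, x, h):
    """Apply a K-tap graph convolutional filter:
    y = h[0]*x + h[1]*S@x + h[2]*S@S@x + ...
    len(h) taps are learned; S (graph size x graph size) is fixed by the graph."""
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx  # next power of the shift operator applied to x
    return y
```

A recurrent graph layer then feeds the filtered signal (together with a similarly filtered hidden state) through a nonlinearity at each time step, with gates built from the same filter type.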
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Early SKIP Mode Decision Method in HEVC Based on Perceptual Distortion Measure.\n \n \n \n \n\n\n \n Kim, J.; and Izquierdo, E.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EarlyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902996,\n  author = {J. Kim and E. Izquierdo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Early SKIP Mode Decision Method in HEVC Based on Perceptual Distortion Measure},\n  year = {2019},\n  pages = {1-5},\n  abstract = {An effective fast SKIP mode decision method is proposed for the High Efficiency Video Coding (HEVC) encoding. In order to determine the best Prediction Unit (PU) mode for a given Coding Unit (CU), the reference software for HEVC checks every PU mode candidate. This causes enormous computational complexity, especially in HEVC inter coding, which should be tackled for a fast encoder. An algorithm is proposed which exploits the fact that the SKIP mode can be directly decided as an optimal PU mode when reconstructed CUs of all candidates are similar. This is because bits for the SKIP mode is pretty small compared to other candidates. Thus, the distortion value between reconstructed CUs in 2N × 2N Merge and SKIP mode is measured in terms of the Human Visual System (HVS). If it is determined as unnoticeable, then a given CU is encoded in the SKIP mode. 
Experimental results show that the proposed method can save encoding time by 48.11 % and 41.66 % on average with minor objective quality losses under the Random Access (RA) and Low Delay (LD)-B configurations respectively.},\n  keywords = {computational complexity;rate distortion theory;video coding;HEVC inter coding;optimal PU mode;perceptual distortion measure;effective fast SKIP mode decision method;High Efficiency Video Coding encoding;Prediction Unit mode;Coding Unit;PU mode candidate;human visual system;Distortion;Copper;Encoding;Distortion measurement;Software;Visualization;Electrostatic discharges;HEVC;perceptual quality;SKIP mode;fast encoders;spatial-JND},\n  doi = {10.23919/EUSIPCO.2019.8902996},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528715.pdf},\n}\n\n
\n
\n\n\n
\n An effective fast SKIP mode decision method is proposed for High Efficiency Video Coding (HEVC) encoding. In order to determine the best Prediction Unit (PU) mode for a given Coding Unit (CU), the reference software for HEVC checks every PU mode candidate. This causes enormous computational complexity, especially in HEVC inter coding, which should be tackled for a fast encoder. An algorithm is proposed which exploits the fact that the SKIP mode can be directly selected as the optimal PU mode when the reconstructed CUs of all candidates are similar, because the bit cost of the SKIP mode is small compared with that of the other candidates. Thus, the distortion between the reconstructed CUs of the 2N × 2N Merge and SKIP modes is measured in terms of the Human Visual System (HVS). If the distortion is judged unnoticeable, the given CU is encoded in the SKIP mode. Experimental results show that the proposed method can save encoding time by 48.11% and 41.66% on average, with minor objective quality losses, under the Random Access (RA) and Low Delay (LD)-B configurations respectively.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Linear Approximation of Deep Neural Networks for Efficient Inference on Video Data.\n \n \n \n \n\n\n \n Rueckauer, B.; and Liu, S. -.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"LinearPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902997,\n  author = {B. Rueckauer and S. -C. Liu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Linear Approximation of Deep Neural Networks for Efficient Inference on Video Data},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Sequential data such as video are characterized by spatio-temporal correlations. As of yet, few deep learning algorithms exploit them to decrease the often massive cost during inference. This work leverages correlations in video data to linearize part of a deep neural network and thus reduce its size and computational cost. Drawing upon the simplicity of the typically used rectifier activation function, we replace the ReLU function by dynamically updating masks. The resulting layer stack is a simple chain of matrix multiplications and bias additions, that can be contracted into a single weight matrix and bias vector. Inference then reduces to an affine transformation of the input sequence with these contracted parameters. We show that the method is akin to approximating the neural network with a first-order Taylor expansion around a dynamically updating reference point. 
The proposed algorithm is evaluated on a denoising convolutional autoencoder.},\n  keywords = {approximation theory;image denoising;inference mechanisms;learning (artificial intelligence);matrix multiplication;neural nets;transfer functions;vectors;video signal processing;sequential data;spatio-temporal correlations;deep learning algorithms;video data;deep neural network;rectifier activation function;ReLU function;single weight matrix;bias vector;dynamically updating reference point;linear approximation;matrix multiplications;first-order Taylor expansion;denoising convolutional autoencoder;Neurons;Biological neural networks;Taylor series;Convolution;Noise reduction;Task analysis;Correlation;Deep neural networks;video;sequential data;linearization;compression},\n  doi = {10.23919/EUSIPCO.2019.8902997},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534153.pdf},\n}\n\n
\n
\n\n\n
\n Sequential data such as video are characterized by spatio-temporal correlations. So far, few deep learning algorithms exploit them to decrease the often massive cost of inference. This work leverages correlations in video data to linearize part of a deep neural network and thus reduce its size and computational cost. Drawing upon the simplicity of the typically used rectifier activation function, we replace the ReLU function by dynamically updated masks. The resulting layer stack is a simple chain of matrix multiplications and bias additions that can be contracted into a single weight matrix and bias vector. Inference then reduces to an affine transformation of the input sequence with these contracted parameters. We show that the method is akin to approximating the neural network with a first-order Taylor expansion around a dynamically updated reference point. The proposed algorithm is evaluated on a denoising convolutional autoencoder.\n
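The contraction described above can be sketched directly: once the ReLU on/off masks are frozen (e.g. taken from a reference frame), each layer x ↦ m ⊙ (Wx + b) is affine, so the whole stack composes into one weight matrix and one bias vector. This toy dense-layer sketch illustrates the algebra under our own naming; the paper's convolutional case is analogous.

```python
import numpy as np

def contract(weights, biases, masks):
    """Collapse a stack of masked layers x -> m * (W @ x + b) into a single
    affine map (W_tot, b_tot), since m * (W @ x + b) = (diag(m) W) x + m * b."""
    W_tot = np.eye(weights[0].shape[1])
    b_tot = np.zeros(weights[0].shape[1])
    for W, b, m in zip(weights, biases, masks):
        W_eff = m[:, None] * W          # diag(m) @ W
        b_eff = m * b
        W_tot = W_eff @ W_tot           # compose with the layers so far
        b_tot = W_eff @ b_tot + b_eff
    return W_tot, b_tot
```

Inference on subsequent, correlated frames then costs a single matrix-vector product instead of a full forward pass, which is the saving the abstract claims.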
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Deep Neural Network Based Poetic Meter Classification Using Musical Texture Feature Fusion.\n \n \n \n \n\n\n \n Rajan, R.; and Raju, A. A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902998,\n  author = {R. Rajan and A. A. Raju},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep Neural Network Based Poetic Meter Classification Using Musical Texture Feature Fusion},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, a meter classification scheme is proposed using musical texture features (MTF) with a deep neural network (DNN) and a hybrid Gaussian mixture model-deep neural network(GMM-DNN) framework. The performance of the proposed system is evaluated using a newly created poetic corpus in Malayalam, one of the prominent languages in India and compared the performance with support vector machine (SVM) classifier. Initially, a baseline-mel-frequency cepstral coefficient (MFCC) based experiment is performed. Later, the MTF are fused with MFCC. Whilst the MFCC system reports an overall accuracy of 78.33%, the fused system reports an accuracy of 86.66% in the hybrid GMM-DNN framework. The overall accuracies obtained for DNN and GMM-DNN are 85.83%, and 86.66%, respectively. The architectural choice of DNN based classifier using GMM derived features on the feature fusion paradigm showed improvement in the performance. 
The proposed system shows the promise of deep learning methodologies and the effectiveness of MTF in recognizing meters from recited poems.},\n  keywords = {cepstral analysis;feature extraction;Gaussian processes;image texture;learning (artificial intelligence);music;natural language processing;neural nets;pattern classification;speech recognition;support vector machines;poetic meter classification;MTF;support vector machine classifier;baseline-mel-frequency cepstral coefficient based experiment;MFCC system;hybrid GMM-DNN framework;DNN based classifier;deep learning methodologies;musical texture feature fusion;hybrid Gaussian mixture model-deep neural network framework;SVM;Meters;Music;Histograms;Mel frequency cepstral coefficient;Support vector machines;Computational modeling;Feature extraction;meter;timbre;melodic;fusion;rhythm;hybrid;deep learning},\n  doi = {10.23919/EUSIPCO.2019.8902998},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533664.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a meter classification scheme is proposed using musical texture features (MTF) with a deep neural network (DNN) and a hybrid Gaussian mixture model-deep neural network (GMM-DNN) framework. The performance of the proposed system is evaluated using a newly created poetic corpus in Malayalam, one of the prominent languages of India, and compared with that of a support vector machine (SVM) classifier. Initially, a baseline mel-frequency cepstral coefficient (MFCC) based experiment is performed. Later, the MTF are fused with MFCC. Whilst the MFCC system reports an overall accuracy of 78.33%, the fused system reports an accuracy of 86.66% in the hybrid GMM-DNN framework. The overall accuracies obtained for DNN and GMM-DNN are 85.83% and 86.66%, respectively. The architectural choice of a DNN-based classifier using GMM-derived features in the feature-fusion paradigm improved the performance. The proposed system shows the promise of deep learning methodologies and the effectiveness of MTF in recognizing meters from recited poems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low Complexity Robust Adaptive Beamformer Based On Parallel RLMS and Kalman RLMS.\n \n \n \n \n\n\n \n Akkad, G.; Mansour, A.; ElHassan, B. A.; Srar, J.; Najem, M.; and Roy, F. L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"LowPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8902999,\n  author = {G. Akkad and A. Mansour and B. A. ElHassan and J. Srar and M. Najem and F. L. Roy},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Low Complexity Robust Adaptive Beamformer Based On Parallel RLMS and Kalman RLMS},\n  year = {2019},\n  pages = {1-5},\n  abstract = {To ease spectral congestion and enhance frequency reuse, researchers are targeting smart antenna systems using spatial multiplexing and adaptive signal processing techniques. Moreover, the accuracy and efficiency of such systems is highly dependent on the adaptive algorithms they employ. A popular, adaptive beamforming algorithm, widely used in smart antennas, is the Recursive Least Square (RLS) algorithm. While, the classical RLS implementation achieves high convergence, it still suffers from its inability to track the target of interest. Recently, a new adaptive algorithm called Recursive Least Square - Least Mean Square (RLMS) which employs a RLS stage followed by a Least Mean Square (LMS) algorithm stage and separated by an estimate of the array image vector, i.e. steering vector, has been proposed. RLMS outperforms previous RLS and LMS variants, with superior convergence and tracking capabilities, at the cost of a moderate increase in computational complexity. In this paper, an enhanced, low complexity parallel version of the cascade RLMS is presented by eliminating the need for computing the array image vector cascading stage. Hence, For an antenna of N elements our strategy can reduce the complexity of the system by 20N multiplications, 6N additions and 2N divisions. Moreover, a new Kalman based parallel RLMS (RKLMS) method is also proposed, where the LMS stage is replaced by a Kalman implementation of the classical LMS, and compared under low Signal to Interference plus Noise ratios (SINR). 
Simulation results show identical performance for the parallel RLMS, cascaded RLMS at 10dB and superior performance and robustness for the RKLMS on low SINR cases up to -10dB.},\n  keywords = {adaptive antenna arrays;adaptive signal processing;array signal processing;computational complexity;convergence of numerical methods;frequency allocation;interference (signal);Kalman filters;least mean squares methods;recursive estimation;low complexity robust adaptive beamformer;Kalman RLMS;spectral congestion;frequency reuse;smart antenna systems;spatial multiplexing signal processing techniques;adaptive signal processing techniques;popular beamforming algorithm;adaptive beamforming algorithm;smart antennas;Recursive Least Square algorithm;classical RLS implementation;high convergence;RLS stage;Least Mean Square algorithm stage;array image vector;steering vector;LMS variants;tracking capabilities;computational complexity;enhanced complexity parallel version;low complexity parallel version;cascade RLMS;parallel RLMS method;LMS stage;Kalman implementation;classical LMS;cascaded RLMS;robustness;low SINR cases;noise figure 10.0 dB;Signal processing algorithms;Complexity theory;Kalman filters;Convergence;Signal to noise ratio;Array signal processing;Antennas;LMS;RLMS;Kalman Filter;Steering Vector;Multi Antenna;Adaptive Beamforming;KRLMS;MIMO;SINR;Spatial Multiplexing;RLS},\n  doi = {10.23919/EUSIPCO.2019.8902999},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528153.pdf},\n}\n\n
\n
\n\n\n
\n To ease spectral congestion and enhance frequency reuse, researchers are targeting smart antenna systems using spatial multiplexing and adaptive signal processing techniques. The accuracy and efficiency of such systems is highly dependent on the adaptive algorithms they employ. A popular adaptive beamforming algorithm, widely used in smart antennas, is the Recursive Least Square (RLS) algorithm. While the classical RLS implementation achieves fast convergence, it suffers from an inability to track the target of interest. Recently, a new adaptive algorithm called Recursive Least Square - Least Mean Square (RLMS) has been proposed, which employs an RLS stage followed by a Least Mean Square (LMS) stage, the two separated by an estimate of the array image vector, i.e. the steering vector. RLMS outperforms previous RLS and LMS variants, with superior convergence and tracking capabilities, at the cost of a moderate increase in computational complexity. In this paper, an enhanced, low-complexity parallel version of the cascade RLMS is presented that eliminates the need for computing the array image vector of the cascading stage. Hence, for an antenna of N elements our strategy reduces the complexity of the system by 20N multiplications, 6N additions and 2N divisions. Moreover, a new Kalman-based parallel RLMS (RKLMS) method is also proposed, in which the LMS stage is replaced by a Kalman implementation of the classical LMS, and the methods are compared under low Signal to Interference plus Noise Ratios (SINR). Simulation results show identical performance for the parallel and cascaded RLMS at 10 dB, and superior performance and robustness for the RKLMS in low-SINR cases down to -10 dB.\n
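For context, the LMS stage at the heart of the RLMS cascade is a single stochastic-gradient update of the beamformer weights. The sketch below shows only that generic complex LMS step; the RLS first stage, the array image vector estimate, and the Kalman variant from the abstract are all omitted, and the names are illustrative.

```python
import numpy as np

def lms_step(w, x, d, mu=0.01):
    """One complex LMS update of beamformer weights w for array snapshot x
    and reference (desired) signal d."""
    y = np.vdot(w, x)                 # beamformer output: w^H x
    e = d - y                         # error against the reference signal
    return w + mu * np.conj(e) * x    # gradient step on |e|^2
```

Iterating this step drives the beamformer output toward the reference signal; in the cascade, the preceding RLS stage supplies a good starting point so the LMS stage mainly refines and tracks.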
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Super-Resolution on Degraded Low-Resolution Images Using Convolutional Neural Networks.\n \n \n \n \n\n\n \n Albluwi, F.; Krylov, V. A.; and Dahyot, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Super-ResolutionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903000,\n  author = {F. Albluwi and V. A. Krylov and R. Dahyot},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Super-Resolution on Degraded Low-Resolution Images Using Convolutional Neural Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Single Image Super-Resolution (SISR) has witnessed a dramatic improvement in recent years through the use of deep learning and, in particular, convolutional neural networks (CNN). In this work we address reconstruction from low-resolution images and consider as well degrading factors in images such as blurring. To address this challenging problem, we propose a new architecture to tackle blur with the down-sampling of images by extending the DBSRCNN architecture [1]. We validate our new architecture (DBSR) experimentally against several state of the art super-resolution techniques.},\n  keywords = {convolutional neural nets;image reconstruction;image resolution;image sampling;learning (artificial intelligence);low-resolution images;convolutional neural networks;deep learning;single image super-resolution;DBSRCNN architecture;Feature extraction;Degradation;Kernel;Image reconstruction;Training;Image super-resolution;image deblurring;deep learning;CNN},\n  doi = {10.23919/EUSIPCO.2019.8903000},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533420.pdf},\n}\n\n
\n
\n\n\n
\n Single Image Super-Resolution (SISR) has witnessed a dramatic improvement in recent years through the use of deep learning and, in particular, convolutional neural networks (CNN). In this work we address reconstruction from low-resolution images and also consider degrading factors in images such as blurring. To address this challenging problem, we propose a new architecture that tackles blur together with the down-sampling of images by extending the DBSRCNN architecture [1]. We validate our new architecture (DBSR) experimentally against several state-of-the-art super-resolution techniques.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hyperspectral and Multispectral Image Fusion based on a Non-locally Centralized Sparse Model and Adaptive Spatial-Spectral Dictionaries.\n \n \n \n \n\n\n \n Arias, K.; Vargas, E.; and Arguello, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"HyperspectralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903001,\n  author = {K. Arias and E. Vargas and H. Arguello},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Hyperspectral and Multispectral Image Fusion based on a Non-locally Centralized Sparse Model and Adaptive Spatial-Spectral Dictionaries},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Hyperspectral (HS) imaging systems are useful in a diverse range of applications that involve detection and classification tasks. However, the low spatial resolution of hyperspectral images may limit the performance of the involved tasks in such applications. In the last years, fusing the information of a HS image with high spatial resolution multispectral (MS) or panchromatic (PAN) images has been widely studied to enhance the spatial resolution. Image fusion has been formulated as an inverse problem whose solution is a HS image which assumed to be sparse in an analytic or learned dictionary. This work proposes a non-local centralized sparse representation model on a set of learned dictionaries in order to regularize the conventional fusion problem. The dictionaries are learned from the observed data taking advantage of the high spectral correlation within the HS image and the non-local self-similarity over the spatial domain of the MS image. Then, conditionally on these dictionaries, the fusion problem is solved by an alternating iterative numerical algorithm. 
Experimental results with real data show that the proposed method outperforms the state-of-the-art methods under different quantitative assessments.},\n  keywords = {hyperspectral imaging;image classification;image fusion;image representation;image resolution;iterative methods;adaptive spatial-spectral dictionaries;hyperspectral imaging systems;classification tasks;low spatial resolution;HS image;nonlocal centralized sparse representation model;dictionary learning;nonlocal self-similarity;MS image;multispectral image fusion;Dictionaries;Spatial resolution;Image fusion;Estimation;Hyperspectral imaging;Iterative algorithms;Signal processing algorithms},\n  doi = {10.23919/EUSIPCO.2019.8903001},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533690.pdf},\n}\n\n
\n
\n\n\n
\n Hyperspectral (HS) imaging systems are useful in a diverse range of applications that involve detection and classification tasks. However, the low spatial resolution of hyperspectral images may limit the performance of the involved tasks in such applications. In recent years, fusing the information of an HS image with high spatial resolution multispectral (MS) or panchromatic (PAN) images has been widely studied to enhance the spatial resolution. Image fusion has been formulated as an inverse problem whose solution is an HS image which is assumed to be sparse in an analytic or learned dictionary. This work proposes a non-local centralized sparse representation model on a set of learned dictionaries in order to regularize the conventional fusion problem. The dictionaries are learned from the observed data, taking advantage of the high spectral correlation within the HS image and the non-local self-similarity over the spatial domain of the MS image. Then, conditionally on these dictionaries, the fusion problem is solved by an alternating iterative numerical algorithm. Experimental results with real data show that the proposed method outperforms the state-of-the-art methods under different quantitative assessments.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic playlist generation using Convolutional Neural Networks and Recurrent Neural Networks.\n \n \n \n \n\n\n \n Irene, R. T.; Borrelli, C.; Zanoni, M.; Buccoli, M.; and Sarti, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903002,\n  author = {R. T. Irene and C. Borrelli and M. Zanoni and M. Buccoli and A. Sarti},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic playlist generation using Convolutional Neural Networks and Recurrent Neural Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Nowadays, a great part of music consumption on music streaming services are based on playlists. Playlists are still mainly manually generated by expert curators, or users, process that in several cases is not a feasible with huge amount of music to deal with. There is the need of effective automatic playlist generation techniques. Traditional approaches to the problem are based on building a sequence of music pieces that satisfies some manually defined criteria. However, being the playlist generation a highly subjective procedure, to define an a-priori criterion can be an hard task in several cases. In this study we propose an automatic playlist generation approach which analyses hand-crafted playlists, understands their structure and evolution and generates new playlists accordingly. We adopt Recurrent Neural Network (RNN) for the sequence modelling. 
Moreover, since the representation model adopted to describe each song is determinant and is also connected to the human perception, we take advantages of Convolutions Neural Network (CNN) to learn meaningful audio descriptors.},\n  keywords = {convolutional neural nets;learning (artificial intelligence);music;recurrent neural nets;music consumption;music streaming services;expert curators;music pieces;automatic playlist generation approach;Recurrent Neural Network;Convolutions Neural Network;hand-crafted playlists;automatic playlist generation techniques;Feature extraction;Task analysis;Training;Recurrent neural networks;Mood;Convolutional neural networks;automatic playlist generation;deep learning;machine learning;music recommendation;music organization},\n  doi = {10.23919/EUSIPCO.2019.8903002},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533791.pdf},\n}\n\n
\n
\n\n\n
\n Nowadays, a great part of music consumption on music streaming services is based on playlists. Playlists are still mainly generated manually by expert curators or users, a process that in several cases is not feasible given the huge amount of music to deal with. There is a need for effective automatic playlist generation techniques. Traditional approaches to the problem are based on building a sequence of music pieces that satisfies some manually defined criteria. However, since playlist generation is a highly subjective procedure, defining an a-priori criterion can be a hard task in several cases. In this study we propose an automatic playlist generation approach which analyses hand-crafted playlists, understands their structure and evolution, and generates new playlists accordingly. We adopt a Recurrent Neural Network (RNN) for the sequence modelling. Moreover, since the representation model adopted to describe each song is decisive and is also connected to human perception, we take advantage of a Convolutional Neural Network (CNN) to learn meaningful audio descriptors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Deep Learning Based Localization of Near-Field Sources with Exact Spherical Wavefront Model.\n \n \n \n \n\n\n \n Liu, W.; Xin, J.; Zuo, W.; Li, J.; Zheng, N.; and Sano, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903003,\n  author = {W. Liu and J. Xin and W. Zuo and J. Li and N. Zheng and A. Sano},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep Learning Based Localization of Near-Field Sources with Exact Spherical Wavefront Model},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Source localization for near-field narrowband signal is an important topic in array signal processing. Deep neural network (DNN) based methods are data-driven and free of pre-assumptions about data model and are expected to learn the intricate nonlinear structure in large data sets. This paper proposes a framework of DNN where a regression layer is utilized to address the problem of near-field source localization. Unlike previous studies in which DOA estimation is modeled as a classification problem and have a relatively low resolution, we exploit a regression model and aim to improve the estimation accuracy. In the training stage, we propose a novel form of feature representation to take full advantage of the convolution networks. In addition, the architecture of deep neural networks is well designed taking in to consideration the trade-off between the expression ability and under-training risks. 
The simulation results show that the proposed approach has a rather high validation accuracy with a high resolution, and also outperforms some conventional methods in adverse environments such as low signal to noise ratio (SNR) or small number of snapshots.},\n  keywords = {array signal processing;convolutional neural nets;direction-of-arrival estimation;learning (artificial intelligence);regression analysis;convolution networks;deep neural networks;deep learning based localization;near-field sources;spherical wavefront model;near-field narrowband signal;array signal processing;deep neural network based methods;data model;data sets;regression layer;near-field source localization;DOA estimation;classification problem;regression model;Direction-of-arrival estimation;Estimation;Signal to noise ratio;Training;Neural networks;Covariance matrices;Deep learning;Source localization;deep neural network (DNN);near-field signal;regression model},\n  doi = {10.23919/EUSIPCO.2019.8903003},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532474.pdf},\n}\n\n
\n
\n\n\n
\n Source localization for near-field narrowband signals is an important topic in array signal processing. Deep neural network (DNN) based methods are data-driven, free of pre-assumptions about the data model, and expected to learn the intricate nonlinear structure in large data sets. This paper proposes a DNN framework in which a regression layer is utilized to address the problem of near-field source localization. Unlike previous studies, which model DOA estimation as a classification problem and have a relatively low resolution, we exploit a regression model and aim to improve the estimation accuracy. In the training stage, we propose a novel form of feature representation to take full advantage of the convolution networks. In addition, the architecture of the deep neural network is carefully designed, taking into consideration the trade-off between expression ability and under-training risks. The simulation results show that the proposed approach has a rather high validation accuracy with a high resolution, and also outperforms some conventional methods in adverse environments such as low signal to noise ratio (SNR) or a small number of snapshots.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Satellite Links Integrated in 5G SDN-enabled Backhaul Networks: An Iterative Joint Power and Flow Assignment.\n \n \n \n \n\n\n \n Lagunas, E.; Lei, L.; Chatzinotas, S.; and Ottersten, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SatellitePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903004,\n  author = {E. Lagunas and L. Lei and S. Chatzinotas and B. Ottersten},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Satellite Links Integrated in 5G SDN-enabled Backhaul Networks: An Iterative Joint Power and Flow Assignment},\n  year = {2019},\n  pages = {1-5},\n  abstract = {While recent technological advances have focused on how to provide increased data rate in 5G Radio Access Network (RAN), very few attention has been paid on how the current microwave backhaul network can handle the transport of such huge traffic flows. In this paper, we address two key aspects for the efficient adaptation of the backhaul network to the upcoming SDN-enabled 5G wireless communications. First, we consider the availability of dedicated satellite links to off-load the traffic from the terrestrial backhaul links. These satellite links operate on the non-exclusive Ka band, which is shared with the terrestrial microwave backhaul links. Second, and given the interference limited scenario, we address the power control and flow assignment of the resulting satellite-terrestrial network. While most of the flow assignment works consider the link rates as fixed and given, here we provide a novel formulation which links the achievable link rates with the assigned transmission power. In particular, we propose an iterative joint power control and flow assignment which takes into account the long propagation delay imposed by the satellite links. We transform the resulting non-convex problem into a Geometric Programming (GP) problem, which can be optimally solved in an efficient way. 
Simulation results validate and demonstrate the benefits of the proposed approach.},\n  keywords = {5G mobile communication;cellular radio;geometric programming;iterative methods;power control;quality of service;radio access networks;radiofrequency interference;satellite links;software defined networking;telecommunication congestion control;telecommunication traffic;satellite links;5G SDN-enabled backhaul networks;data rate;5G Radio Access Network;current microwave backhaul network;traffic flows;nonexclusive Ka band;terrestrial microwave backhaul links;satellite-terrestrial network;flow assignment;achievable link rates;assigned transmission power;iterative joint power control;nonconvex problem;iterative joint power-flow assignment;SDN-enabled 5G wireless communications;interference limited scenario;geometric programming problem;long propagation delay;GP problem;Interference;Signal to noise ratio;Satellite broadcasting;Satellites;Throughput;Power control;Network topology},\n  doi = {10.23919/EUSIPCO.2019.8903004},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531244.pdf},\n}\n\n
\n
\n\n\n
\n While recent technological advances have focused on how to provide increased data rates in the 5G Radio Access Network (RAN), very little attention has been paid to how the current microwave backhaul network can handle the transport of such huge traffic flows. In this paper, we address two key aspects of the efficient adaptation of the backhaul network to the upcoming SDN-enabled 5G wireless communications. First, we consider the availability of dedicated satellite links to off-load traffic from the terrestrial backhaul links. These satellite links operate on the non-exclusive Ka band, which is shared with the terrestrial microwave backhaul links. Second, given the interference-limited scenario, we address the power control and flow assignment of the resulting satellite-terrestrial network. While most flow assignment works consider the link rates as fixed and given, here we provide a novel formulation which links the achievable link rates with the assigned transmission power. In particular, we propose an iterative joint power control and flow assignment which takes into account the long propagation delay imposed by the satellite links. We transform the resulting non-convex problem into a Geometric Programming (GP) problem, which can be optimally solved in an efficient way. Simulation results validate and demonstrate the benefits of the proposed approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Distributed Adaptive Node-Specific Signal Estimation in a Wireless Sensor Network with Partial Prior Knowledge of the Desired Source Steering Vector.\n \n \n \n \n\n\n \n Rompaey, R. V.; and Moonen, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DistributedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903005,\n  author = {R. V. Rompaey and M. Moonen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Distributed Adaptive Node-Specific Signal Estimation in a Wireless Sensor Network with Partial Prior Knowledge of the Desired Source Steering Vector},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper first introduces the centralized generalized eigenvalue decomposition (GEVD) based multichannel Wiener filter (MWF) with prior knowledge for node-specific signal estimation in a wireless sensor network (WSN), where (some of) the nodes have partial prior knowledge of the desired source steering vector. A distributed adaptive estimation algorithm for a fully-connected WSN is then proposed demonstrating that this MWF can be obtained by letting the nodes work on compressed (i.e. reduced-dimensional) sensor signals compared to the centralized approach. The algorithm can be used in applications such as speech enhancement in an acoustic sensor network, where (some of) the nodes nodes have prior knowledge on the location of the desired speech source and on their local microphone array geometry or have access to clean noise reference signals.},\n  keywords = {adaptive estimation;correlation methods;eigenvalues and eigenfunctions;microphone arrays;noise abatement;speech enhancement;Wiener filters;wireless sensor networks;distributed adaptive node-specific signal estimation;wireless sensor network;partial prior knowledge;desired source steering vector;centralized generalized eigenvalue decomposition based multichannel Wiener filter;MWF;WSN;distributed adaptive estimation algorithm;acoustic sensor network;desired speech source;clean noise reference signals;Estimation;Correlation;Wireless sensor networks;Signal processing algorithms;Knowledge engineering;Europe;Signal processing;Wireless Sensor Networks (WSN);distributed estimation;multichannel Wiener filter (MWF);generalized eigenvalue decomposition (GEVD)},\n  doi 
= {10.23919/EUSIPCO.2019.8903005},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533682.pdf},\n}\n\n
\n
\n\n\n
\n This paper first introduces the centralized generalized eigenvalue decomposition (GEVD) based multichannel Wiener filter (MWF) with prior knowledge for node-specific signal estimation in a wireless sensor network (WSN), where (some of) the nodes have partial prior knowledge of the desired source steering vector. A distributed adaptive estimation algorithm for a fully-connected WSN is then proposed, demonstrating that this MWF can be obtained by letting the nodes work on compressed (i.e. reduced-dimensional) sensor signals compared to the centralized approach. The algorithm can be used in applications such as speech enhancement in an acoustic sensor network, where (some of) the nodes have prior knowledge of the location of the desired speech source and of their local microphone array geometry, or have access to clean noise reference signals.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Compression Efficiency and Computational Cost Comparison between AV1 and HEVC Encoders.\n \n \n \n \n\n\n \n Bender, I.; Palomino, D.; Agostini, L.; Correa, G.; and Porto, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CompressionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903006,\n  author = {I. Bender and D. Palomino and L. Agostini and G. Correa and M. Porto},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Compression Efficiency and Computational Cost Comparison between AV1 and HEVC Encoders},\n  year = {2019},\n  pages = {1-5},\n  abstract = {High Efficiency Video Coding (HEVC) is the current state-of-the-art standard for video compression, finalized by ISO/IEC and ITU-T in 2013. However, as HEVC is protected by several patents and is subject to royalty policies, more than thirty companies recently joined efforts to develop a royalty-free codec, named AOMedia Video 1 (AV1). In the forthcoming years, AV1 is expected to be adopted by several companies and streaming service providers as the main digital video format. In this paper, the compression efficiency and the computational cost of AV1 is compared to its main concurrent, the HEVC standard. The methodology employed in this work uses equivalent quantization parameters in both encoders to measure compression efficiency in terms of Bjontegaard Delta (BD)-rate more accurately than related works. Experimental results show that the AV1 reference software requires an average computational cost 14.64 times greater than the HEVC Model, with a BD-rate increase of 16.35% in comparison to that encoder.},\n  keywords = {data compression;IEC standards;ISO standards;patents;video codecs;video coding;HEVC encoders;high efficiency video coding;video compression;royalty-free codec;digital video format;AV1 reference software;AOMedia Video 1;patents;Bjontegaard Delta;ISO-IEC;ITU-T;Bit rate;Codecs;Computational efficiency;Quantization (signal);Standards;Encoding;Streaming media;AV1;HEVC;compression efficiency;computational cost},\n  doi = {10.23919/EUSIPCO.2019.8903006},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532277.pdf},\n}\n\n
\n
\n\n\n
\n High Efficiency Video Coding (HEVC) is the current state-of-the-art standard for video compression, finalized by ISO/IEC and ITU-T in 2013. However, as HEVC is protected by several patents and is subject to royalty policies, more than thirty companies recently joined efforts to develop a royalty-free codec, named AOMedia Video 1 (AV1). In the forthcoming years, AV1 is expected to be adopted by several companies and streaming service providers as the main digital video format. In this paper, the compression efficiency and the computational cost of AV1 are compared to those of its main competitor, the HEVC standard. The methodology employed in this work uses equivalent quantization parameters in both encoders to measure compression efficiency in terms of Bjontegaard Delta (BD)-rate more accurately than related works. Experimental results show that the AV1 reference software requires an average computational cost 14.64 times greater than the HEVC Model, with a BD-rate increase of 16.35% in comparison to that encoder.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparse Bayesian Learning for a Bilinear Calibration Model and Mismatched CRB.\n \n \n \n \n\n\n \n Gopala, K.; Thomas, C. K.; and Slock, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SparsePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903007,\n  author = {K. Gopala and C. K. Thomas and D. Slock},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse Bayesian Learning for a Bilinear Calibration Model and Mismatched CRB},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Variational Bayesian (VB) estimation allows for approximate Bayesian inference. It determines the closest approximation in factored form of the posterior distribution by minimizing the Kullback-Leibler distance to the posterior distribution even if this last one is difficult to determine. In spite of this well motivated derivation, the performance of VB techniques is not very clear, especially compared to more classical performance bounds. In this paper we explore recently introduced mismatched Cramer-Rao bounds (mCRB) for Bayesian estimation in the context of VB estimation. We focus on the case of bilinear signal models. One particular application of these models arises in the context of internal relative reciprocity calibration of Massive antenna arrays, in which the received signals are linear in terms of an intra array channel and the relative calibration factors. 
We have recently shown that a VB approach allows for particularly improved estimation performance that goes beyond the classical CRB, which is now confirmed by the mCRB.},\n  keywords = {approximation theory;Bayes methods;calibration;linear antenna arrays;variational techniques;closest approximation;factored form;posterior distribution;Kullback-Leibler distance;motivated derivation;VB techniques;classical performance;mismatched Cramer-Rao bounds;VB estimation;bilinear signal models;internal relative reciprocity calibration;Massive antenna arrays;received signals;intra array channel;relative calibration factors;VB approach;particularly improved estimation performance;classical CRB;sparse Bayesian learning;bilinear calibration model;mismatched CRB;Variational Bayesian estimation;approximate Bayesian inference;Calibration;Estimation;Bayes methods;Receiving antennas;Transmitting antennas;Channel estimation;Antenna arrays},\n  doi = {10.23919/EUSIPCO.2019.8903007},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534156.pdf},\n}\n\n
\n
\n\n\n
\n Variational Bayesian (VB) estimation allows for approximate Bayesian inference. It determines the closest approximation in factored form to the posterior distribution by minimizing the Kullback-Leibler distance to the posterior distribution, even if the latter is difficult to determine. In spite of this well-motivated derivation, the performance of VB techniques is not very clear, especially compared to more classical performance bounds. In this paper we explore recently introduced mismatched Cramer-Rao bounds (mCRB) for Bayesian estimation in the context of VB estimation. We focus on the case of bilinear signal models. One particular application of these models arises in the context of internal relative reciprocity calibration of massive antenna arrays, in which the received signals are linear in terms of an intra-array channel and the relative calibration factors. We have recently shown that a VB approach allows for particularly improved estimation performance that goes beyond the classical CRB, which is now confirmed by the mCRB.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Extracting Proprioceptive Information By Analyzing Rotating Range Sensors Induced Distortion.\n \n \n \n \n\n\n \n Vivet, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ExtractingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903009,\n  author = {D. Vivet},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Extracting Proprioceptive Information By Analyzing Rotating Range Sensors Induced Distortion},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The increased autonomy of robots is directly linked to their capability to perceive their environment. Simultaneous Localization and Mapping (SLAM) techniques, which associate perception and movement, are particularly interesting because they provide advanced autonomy to vehicles in the field of Intelligent Transportation Systems (ITS). Such ITS are based on both proprioceptive sensors to estimate their dynamics and exteroceptive sensors in order to perceive the surrounding of the vehicle. This second class of sensor is dominated by camera and rotating range sensors such as LIDAR or RADAR. Indeed, the majority of intelligent vehicles uses today 2D/3D laser or panoramic radar to localize itself or detect and avoid obstacles. The use of a rotating range sensor, while moving at high speed, creates distortions in the collected data. Such an effect is, in the majority of studies, ignored or considered as noise and then corrected, based on additional proprioceptive sensors or localization systems. In this study, rather than considering distortion as a noise, we consider that it contains all the information about the vehicles displacement. We propose to extract this information from such distortion without any other information than the exteroceptive sensor data. The idea is to resort to velocimetry by only analyzing the distortion of the measurements. 
As a result, we propose a linear and angular velocities estimator of the mobile robot based on the distortion analysis.},\n  keywords = {cameras;collision avoidance;mobile robots;robot vision;sensors;SLAM (robots);2D laser;3D laser;Simultaneous Localization and Mapping techniques;rotating range sensors induced distortion;distortion analysis;exteroceptive sensor data;vehicles displacement;localization systems;proprioceptive sensors;rotating range sensor;panoramic radar;intelligent vehicles;camera;exteroceptive sensors;Intelligent Transportation Systems;associate perception;proprioceptive information;Distortion;Laser radar;Three-dimensional displays;Simultaneous localization and mapping;Mathematical model;Rotating range sensor;LIDAR;Distortion;Odometry;Localization;Dead-reckoning},\n  doi = {10.23919/EUSIPCO.2019.8903009},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531254.pdf},\n}\n\n
\n
\n\n\n
\n The increased autonomy of robots is directly linked to their capability to perceive their environment. Simultaneous Localization and Mapping (SLAM) techniques, which associate perception with movement, are particularly interesting because they provide advanced autonomy to vehicles in the field of Intelligent Transportation Systems (ITS). Such ITS rely on both proprioceptive sensors to estimate their dynamics and exteroceptive sensors to perceive the surroundings of the vehicle. This second class of sensor is dominated by cameras and rotating range sensors such as LIDAR or RADAR. Indeed, most intelligent vehicles today use 2D/3D laser or panoramic radar to localize themselves or to detect and avoid obstacles. The use of a rotating range sensor while moving at high speed creates distortions in the collected data. In the majority of studies, this effect is ignored or treated as noise and then corrected using additional proprioceptive sensors or localization systems. In this study, rather than considering distortion as noise, we consider that it contains all the information about the vehicle's displacement. We propose to extract this information from the distortion without any information other than the exteroceptive sensor data. The idea is to perform velocimetry by analyzing only the distortion of the measurements. As a result, we propose a linear and angular velocity estimator for the mobile robot based on this distortion analysis.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Non-iterative Reconstruction Algorithm for Single Pixel Spectral Imaging with Side Information.\n \n \n \n \n\n\n \n Bacca, J.; Correa, C. V.; and Arguello, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903010,\n  author = {J. Bacca and C. V. Correa and H. Arguello},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Non-iterative Reconstruction Algorithm for Single Pixel Spectral Imaging with Side Information},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Compressive spectral imaging (CSI) allows the acquisition of spatial information of a scene along multiple spectral bands using fewer projected measurements than traditional scanning methods. In general, to obtain high resolution spatial and spectral information, expensive detectors and sophisticated optical devices are required. Fortunately, the single-pixel camera (SPC) is a low-cost optical architecture since it uses a light sensor compared to CSI architectures with larger sensors. However, this advantage is overshadowed by the large number of projections needed to recover the spectral image, which entails large acquisition times. Alternatively, high-resolution spectral images can be obtained using SPC with side-information, without significantly increasing acquisition costs. However, this approach retrieves improved resolution images applying iterative and computationally expensive algorithms. This paper proposes a non-iterative method that combines the spectral information of SPC and the side information of a multispectral image to recover high resolution spatial and spectral information. The proposed fast compressive spectral imaging (FCSI) reconstruction method exploits the fact that the spatial-spectral data lie in a low dimensional subspace. This methodology allows to reduce the number of required measurements in the SPC as well as the computation time of the reconstruction. 
Simulations and experimental results show the effectiveness of the proposed method compared to similar approaches, both in reconstruction quality and sample complexity.},\n  keywords = {cameras;data compression;image coding;image reconstruction;image resolution;iterative methods;FCSI reconstruction method;optical device architecture;spatial-spectral data;fast compressive spectral imaging reconstruction method;multispectral imaging;computationally expensive algorithms;iterative algorithms;improved image resolution;high-resolution spectral imaging;SPC;single-pixel camera;scanning methods;spatial information acquisition;single pixel spectral imaging;noniterative reconstruction algorithm;Image reconstruction;Imaging;Signal processing algorithms;Image resolution;Image coding;Frequency modulation;Biomedical measurement},\n  doi = {10.23919/EUSIPCO.2019.8903010},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532935.pdf},\n}\n\n
\n
\n\n\n
\n Compressive spectral imaging (CSI) allows the acquisition of spatial information of a scene across multiple spectral bands using fewer projected measurements than traditional scanning methods. In general, obtaining high-resolution spatial and spectral information requires expensive detectors and sophisticated optical devices. Fortunately, the single-pixel camera (SPC) is a low-cost optical architecture, since it uses a single light sensor instead of the larger sensors of other CSI architectures. However, this advantage is overshadowed by the large number of projections needed to recover the spectral image, which entails long acquisition times. Alternatively, high-resolution spectral images can be obtained using an SPC with side information, without significantly increasing acquisition costs. However, this approach recovers improved-resolution images by applying iterative and computationally expensive algorithms. This paper proposes a non-iterative method that combines the spectral information of the SPC with the side information of a multispectral image to recover high-resolution spatial and spectral information. The proposed fast compressive spectral imaging (FCSI) reconstruction method exploits the fact that the spatial-spectral data lie in a low-dimensional subspace. This methodology reduces both the number of required measurements in the SPC and the computation time of the reconstruction. Simulations and experimental results show the effectiveness of the proposed method compared to similar approaches, in both reconstruction quality and sample complexity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improving the Performance of Lightweight CNN models using Minimum Enclosing Ball Regularization.\n \n \n \n \n\n\n \n Tzelepi, M.; and Tefas, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903011,\n  author = {M. Tzelepi and A. Tefas},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Improving the Performance of Lightweight CNN models using Minimum Enclosing Ball Regularization},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The aim of this paper is two-fold. First, we propose lightweight CNN models, capable of effectively operating on-drone for various classification problems, emerging in the context of media coverage of specific sport events by drones, i.e. crowd, football player, and bicycle detection. Subsequently, we propose a regularization method, namely Minimum Enclosing Ball regularization, in order to improve the generalization ability of the proposed models. The experimental evaluation on three datasets indicates the effectiveness of the proposed regularizer.},\n  keywords = {autonomous aerial vehicles;convolutional neural nets;helicopters;image classification;learning (artificial intelligence);object detection;robot vision;lightweight CNN models;classification problems;media coverage;drones;regularization method;minimum enclosing ball regularization;Training;Drones;Support vector machines;Task analysis;Convolution;Deep learning;Europe;Minimum Enclosing Ball Regularization;Convolutional Neural Networks;Drones;Deep Learning.},\n  doi = {10.23919/EUSIPCO.2019.8903011},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533406.pdf},\n}\n\n
\n
\n\n\n
\n The aim of this paper is two-fold. First, we propose lightweight CNN models, capable of effectively operating on-drone for various classification problems, emerging in the context of media coverage of specific sport events by drones, i.e. crowd, football player, and bicycle detection. Subsequently, we propose a regularization method, namely Minimum Enclosing Ball regularization, in order to improve the generalization ability of the proposed models. The experimental evaluation on three datasets indicates the effectiveness of the proposed regularizer.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Surrogate Rehabilitative Time Series Data for Image-based Deep Learning.\n \n \n \n \n\n\n \n Lee, T. E. K. M.; Kuah, Y. L.; Leo, K.; Sanei, S.; Chew, E.; and Zhao, L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SurrogatePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903012,\n  author = {T. E. K. M. Lee and Y. L. Kuah and K. Leo and S. Sanei and E. Chew and L. Zhao},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Surrogate Rehabilitative Time Series Data for Image-based Deep Learning},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Big Data comprise the tools to analyse vast stores of data generated by the myriad of powerful, low-cost processors, sensors and networks around us. The spiralling demand for multi-sensored personal communication devices has played a major role in this, producing images and leaving digital trails of transactions and texts to be mined for patterns. Consequently, the cutting edge of data analysis tools has been targeted for images. However the copious amounts of data required to successfully train these tools are not available in several fields such as rehabilitation where there are constraints on data collection. And yet the need for timely clinical assessments grows.We consider how to address this situation by generating synthetic, surrogate data which preserves many properties of the original. 
Here we introduce a new application of surrogate time series in a novel classification scheme, compare methods of converting these into images and use a state of the art neural network framework for a successful improvement in classification results.This is a significant contribution to the art, demonstrating how scarce time series data can be successfully augmented to take advantage of cutting edge analytical tools.},\n  keywords = {Big Data;data analysis;data mining;image classification;learning (artificial intelligence);neural nets;time series;data analysis tools;data collection;timely clinical assessments;synthetic data;surrogate data;surrogate time series;neural network framework;scarce time series data;edge analytical tools;Big Data;multisensored personal communication devices;digital trails;rehabilitative time series data;Time series analysis;Sensors;Deep learning;Tools;Artificial neural networks;Transforms;Image edge detection;Deep learning;rehabilitation;accelerometer;surrogate data;time series},\n  doi = {10.23919/EUSIPCO.2019.8903012},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531494.pdf},\n}\n\n
\n
\n\n\n
\n Big Data comprise the tools to analyse vast stores of data generated by the myriad of powerful, low-cost processors, sensors and networks around us. The spiralling demand for multi-sensored personal communication devices has played a major role in this, producing images and leaving digital trails of transactions and texts to be mined for patterns. Consequently, the cutting edge of data analysis tools has been targeted at images. However, the copious amounts of data required to successfully train these tools are not available in several fields, such as rehabilitation, where there are constraints on data collection. And yet the need for timely clinical assessments grows. We consider how to address this situation by generating synthetic, surrogate data which preserve many properties of the original. Here we introduce a new application of surrogate time series in a novel classification scheme, compare methods of converting these into images, and use a state-of-the-art neural network framework to achieve a successful improvement in classification results. This is a significant contribution to the art, demonstrating how scarce time series data can be successfully augmented to take advantage of cutting-edge analytical tools.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n L0-Norm Adaptive Volterra Filters.\n \n \n \n \n\n\n \n Yazdanpanah, H.; Carini, A.; and Lima, M. V. S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"L0-NormPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903013,\n  author = {H. Yazdanpanah and A. Carini and M. V. S. Lima},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {L0-Norm Adaptive Volterra Filters},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The paper addresses adaptive algorithms for Volterra filter identification capable of exploiting the sparsity of nonlinear systems. While the l1-norm of the coefficient vector is often employed to promote sparsity, it has been shown in the literature that superior results can be achieved using an approximation of the l0-norm. Thus, in this paper, the Geman-McClure function is adopted to approximate the l0-norm and to derive l0-norm adaptive Volterra filters. It is shown through experimental results, also involving a real-world system, that the proposed adaptive filters can obtain improved performance in comparison with classical approaches and l1-norm solutions.},\n  keywords = {adaptive filters;approximation theory;nonlinear filters;Volterra filter identification;nonlinear systems;coefficient vector;Geman-McClure function;real-world system;norm solutions;l0-norm adaptive Volterra filters;Approximation algorithms;Signal processing algorithms;Adaptive systems;Convergence;Europe;Signal processing;Nonlinear systems;Nonlinear adaptive filter;Volterra series;sparsity;l₀-norm;Geman-McClure function},\n  doi = {10.23919/EUSIPCO.2019.8903013},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533636.pdf},\n}\n\n
\n
\n\n\n
\n The paper addresses adaptive algorithms for Volterra filter identification capable of exploiting the sparsity of nonlinear systems. While the l1-norm of the coefficient vector is often employed to promote sparsity, it has been shown in the literature that superior results can be achieved using an approximation of the l0-norm. Thus, in this paper, the Geman-McClure function is adopted to approximate the l0-norm and to derive l0-norm adaptive Volterra filters. It is shown through experimental results, also involving a real-world system, that the proposed adaptive filters can achieve improved performance in comparison with classical approaches and l1-norm solutions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Speech-Based Stress Classification based on Modulation Spectral Features and Convolutional Neural Networks.\n \n \n \n \n\n\n \n Avila, A. R.; Kshirsagar, S. R.; Tiwari, A.; Lafond, D.; O’Shaughnessy, D.; and Falk, T. H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Speech-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903014,\n  author = {A. R. Avila and S. R. Kshirsagar and A. Tiwari and D. Lafond and D. O’Shaughnessy and T. H. Falk},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Speech-Based Stress Classification based on Modulation Spectral Features and Convolutional Neural Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Interest in stress recognition has notably increased over the past few years. In this work, we focus on recognizing stress from speech. We propose the use of modulation spectral features as input to a convolutional neural network (CNN) for classifying stress. As benchmark, the OpenSMILE features used in the INTERSPEECH 2010 Paralinguistic Challenge is adopted and evaluated with a support vector machine (SVM) and a deep neural network (DNN) based backends. Experiments are performed with the well-known Speech Under Simulated and Actual Stress (SUSAS) database. Performances are investigated considering 2-class, 4-class and 9-class classification problems. 
Results show that the proposed approach outperforms the benchmark on a challenging 9-class classification task with accuracy as high as 70% representing gains of roughly 18% over the benchmark.},\n  keywords = {convolutional neural nets;feature extraction;learning (artificial intelligence);pattern classification;signal classification;speech processing;speech recognition;support vector machines;9-class classification problems;challenging 9-class classification task;modulation spectral features;convolutional neural network;stress recognition;OpenSMILE features;INTERSPEECH 2010 Paralinguistic Challenge;support vector machine;deep neural network based backends;actual stress database;speech-based stress classification;4-class classification problem;2-class classification problem;Stress;Modulation;Task analysis;Feature extraction;Benchmark testing;Speech recognition;Support vector machines;Stress detection;modulation spectrum;convolutional neural network},\n  doi = {10.23919/EUSIPCO.2019.8903014},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533864.pdf},\n}\n\n
\n
\n\n\n
\n Interest in stress recognition has notably increased over the past few years. In this work, we focus on recognizing stress from speech. We propose the use of modulation spectral features as input to a convolutional neural network (CNN) for classifying stress. As a benchmark, the OpenSMILE features used in the INTERSPEECH 2010 Paralinguistic Challenge are adopted and evaluated with support vector machine (SVM) and deep neural network (DNN) backends. Experiments are performed with the well-known Speech Under Simulated and Actual Stress (SUSAS) database. Performance is investigated for 2-class, 4-class and 9-class classification problems. Results show that the proposed approach outperforms the benchmark on the challenging 9-class classification task, with accuracy as high as 70%, representing gains of roughly 18% over the benchmark.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sensor Selection and Rate Distribution Based Beamforming in Wireless Acoustic Sensor Networks.\n \n \n \n \n\n\n \n Zhang, J.; Heusdens, R.; and Hendriks, R. C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SensorPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903015,\n  author = {J. Zhang and R. Heusdens and R. C. Hendriks},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Sensor Selection and Rate Distribution Based Beamforming in Wireless Acoustic Sensor Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Power usage is an important aspect of wireless acoustic sensor networks (WASNs) and reducing the amount of information that is to be transmitted is one effective way to save it. In previous contributions, we presented sensor selection as well as rate distribution methods to reduce the power usage of beamforming algorithms in WASNs. Taking only transmission power into account, it was shown that rate distribution is a generalization of sensor selection and that rate distribution is more efficient than sensor selection with respect to the power usage versus performance trade-off. However, this excludes the energy consumption that it takes to keep the WASN nodes activated. In this paper, we present a more detailed comparison between sensor selection and rate-allocation by taking also into account the power to keep sensors activated for centralized WASNs. The framework is formulated by minimizing the total power usage, while lower bounding the noise reduction performance. 
Numerical results show that whether rate distribution is more efficient than sensor selection depends on the actual power that is used to keep sensors activated.},\n  keywords = {acoustic communication (telecommunication);array signal processing;interference suppression;telecommunication power management;wireless sensor networks;sensor selection;wireless acoustic sensor networks;rate distribution based beamforming methods;power usage;beamforming algorithms;transmission power;performance trade-off;energy consumption;WASN nodes;rate-allocation;noise reduction performance;lower bounding;Microphones;Array signal processing;Noise reduction;Wireless sensor networks;Wireless communication;Resource management;Acoustic sensors;Wireless acoustic sensor networks;beamforming;sensor selection;rate distribution;energy consumption},\n  doi = {10.23919/EUSIPCO.2019.8903015},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570525825.pdf},\n}\n\n
\n
\n\n\n
\n Power usage is an important aspect of wireless acoustic sensor networks (WASNs), and reducing the amount of information to be transmitted is one effective way to save it. In previous contributions, we presented sensor selection as well as rate distribution methods to reduce the power usage of beamforming algorithms in WASNs. Taking only transmission power into account, it was shown that rate distribution is a generalization of sensor selection and that rate distribution is more efficient than sensor selection with respect to the power usage versus performance trade-off. However, this excludes the energy consumption required to keep the WASN nodes activated. In this paper, we present a more detailed comparison between sensor selection and rate allocation for centralized WASNs by also taking into account the power needed to keep sensors activated. The framework is formulated by minimizing the total power usage while lower-bounding the noise reduction performance. Numerical results show that whether rate distribution is more efficient than sensor selection depends on the actual power used to keep sensors activated.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n All-Powerful Learning Algorithm for the Priority Access in Cognitive Network.\n \n \n \n \n\n\n \n Almasri, M.; Mansour, A.; Moy, C.; Assoum, A.; Osswald, C.; and Jeune, D. L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"All-PowerfulPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903016,\n  author = {M. Almasri and A. Mansour and C. Moy and A. Assoum and C. Osswald and D. L. Jeune},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {All-Powerful Learning Algorithm for the Priority Access in Cognitive Network},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose the All-Powerful Learning (APL) algorithm for multiple Secondary Users (SUs) that considers the priority access and the dynamic multi-user access, where the number of SUs changes over time. To the best of our knowledge, APL is the first learning algorithm that successfully handles the dynamic users with the priority access. APL does not require any cooperation or prior information (e.g. the number of users in the network, or the number of available channels, or the total number of iterations) as do many existing algorithms. We should emphasize that the knowledge of previous parameters can make all these algorithms impractical and difficult to apply. The experimental results show the superiority of APL compared to existing algorithms.},\n  keywords = {cognitive radio;learning (artificial intelligence);telecommunication computing;APL;multiple secondary users;priority access;multiuser access;dynamic users;cognitive network;all-powerful learning algorithm;SU changes;Signal processing algorithms;Heuristic algorithms;Indexes;Channel estimation;Music;Europe;Signal processing;Multi-Armed Bandit;Priority Access;Competitive Network;Opportunistic Spectrum Access;All-Powerful Learning Algorithm;Cognitive Network},\n  doi = {10.23919/EUSIPCO.2019.8903016},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570526749.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose the All-Powerful Learning (APL) algorithm for multiple Secondary Users (SUs) that considers the priority access and the dynamic multi-user access, where the number of SUs changes over time. To the best of our knowledge, APL is the first learning algorithm that successfully handles the dynamic users with the priority access. APL does not require any cooperation or prior information (e.g. the number of users in the network, or the number of available channels, or the total number of iterations) as do many existing algorithms. We should emphasize that the knowledge of previous parameters can make all these algorithms impractical and difficult to apply. The experimental results show the superiority of APL compared to existing algorithms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n MIMO Receiver with Reduced Number of RF Chains Based on 4D Array and Software Defined Radio.\n \n \n \n \n\n\n \n Bogdan, G.; Godziszewski, K.; and Yashchyshyn, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"MIMOPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903017,\n  author = {G. Bogdan and K. Godziszewski and Y. Yashchyshyn},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {MIMO Receiver with Reduced Number of RF Chains Based on 4D Array and Software Defined Radio},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Multiple-input multiple-output (MIMO) technique is expected to be extensively used in future wireless systems to increase capacity of wireless channels. Nevertheless, the capacity gain comes with a cost. In particular, multiple analog radio frequency (RF) chains are required at both the transmitter and the receiver, thereby leading to a higher implementation cost, more power consumption, and lower energy efficiency. This paper presents a receiver with only one RF chain for 2 × 2 MIMO transmission system. Proposed design is based on a time modulated antenna array (TMAA) with beam-steering functionality and a wideband software defined radio (SDR). Spatial streams were created by demultiplexing spectral components (sidebands) generated by periodical ON/OFF switching. Number of RF chains was reduced from 2 to 1 while maintaining sufficiently low correlation of spatial streams, which is required to perform MIMO transmission. 
Feasibility of proposed technique is evaluated by means of experimental verification.},\n  keywords = {antenna arrays;beam steering;demultiplexing;MIMO communication;radio receivers;software radio;wireless channels;MIMO receiver;RF chains reduced number;4D antenna array;multiple-input multiple-output;power consumption;energy efficiency;time modulated antenna array;beam-steering functionality;demultiplexing spectral component;spatial streams correlation;software defined radio;TMAA;wideband SDR;MIMO communication;Amplitude modulation;Frequency modulation;Radio frequency;Receivers;Wireless communication;Antenna arrays;antenna array;adaptive antenna;4D array;time modulated antenna array;beam-steering;multiple-input multiple-output},\n  doi = {10.23919/EUSIPCO.2019.8903017},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570524431.pdf},\n}\n\n
\n
\n\n\n
\n The multiple-input multiple-output (MIMO) technique is expected to be used extensively in future wireless systems to increase the capacity of wireless channels. Nevertheless, the capacity gain comes with a cost. In particular, multiple analog radio frequency (RF) chains are required at both the transmitter and the receiver, thereby leading to a higher implementation cost, more power consumption, and lower energy efficiency. This paper presents a receiver with only one RF chain for a 2 × 2 MIMO transmission system. The proposed design is based on a time-modulated antenna array (TMAA) with beam-steering functionality and a wideband software defined radio (SDR). Spatial streams were created by demultiplexing the spectral components (sidebands) generated by periodic ON/OFF switching. The number of RF chains was reduced from 2 to 1 while maintaining sufficiently low correlation between the spatial streams, which is required to perform MIMO transmission. The feasibility of the proposed technique is evaluated by means of experimental verification.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Node activity monitoring in heterogeneous networks using energy sensors.\n \n \n \n \n\n\n \n Perez, J.; Via, J.; and Vielva, L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"NodePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903018,\n  author = {J. Perez and J. Via and L. Vielva},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Node activity monitoring in heterogeneous networks using energy sensors},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In Heterogeneous Networks, small cells are usually deployed without operator supervision. Their proper operation highly depends on their self-adaptation capability, especially in dense HetNets where various small cells coexist in the same macrocell. This capability requires the small-cell base stations to continuously sense the radio environment, so they can dynamically adapt their operational setting (e.g. transmission power, carrier/channel selection, etc.) to the environmental conditions. In this work we propose a new method for a small base station to monitor the activity of the rest of nodes in the macrocell. We consider a centralized sensing procedure based on the fusion of the energy levels measured by the users of the small cell at their locations. In particular, we present an efficient algorithm that enables the small base station to monitor the activity of the rest of nodes. In addition, the algorithm also provides the gain of the channels between the nodes and the users of the small cell.},\n  keywords = {cellular radio;sensor fusion;wireless channels;base station;heterogeneous networks;energy sensors;HetNets;small-cell base stations;radio environment;environmental conditions;Sensors;Signal processing algorithms;Monitoring;Base stations;Macrocell networks;Europe;Signal processing;Heterogeneous networks;cooperative sensing;energy detection;least squares.},\n  doi = {10.23919/EUSIPCO.2019.8903018},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533740.pdf},\n}\n\n
\n
\n\n\n
\n In Heterogeneous Networks, small cells are usually deployed without operator supervision. Their proper operation highly depends on their self-adaptation capability, especially in dense HetNets where various small cells coexist in the same macrocell. This capability requires the small-cell base stations to continuously sense the radio environment, so they can dynamically adapt their operational setting (e.g. transmission power, carrier/channel selection, etc.) to the environmental conditions. In this work we propose a new method for a small base station to monitor the activity of the rest of nodes in the macrocell. We consider a centralized sensing procedure based on the fusion of the energy levels measured by the users of the small cell at their locations. In particular, we present an efficient algorithm that enables the small base station to monitor the activity of the rest of nodes. In addition, the algorithm also provides the gain of the channels between the nodes and the users of the small cell.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectrogram Feature Losses for Music Source Separation.\n \n \n \n \n\n\n \n Sahai, A.; Weber, R.; and McWilliams, B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SpectrogramPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903019,\n  author = {A. Sahai and R. Weber and B. McWilliams},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Spectrogram Feature Losses for Music Source Separation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper we study deep learning-based music source separation, and explore using an alternative loss to the standard spectrogram pixel-level L2 loss for model training. Our main contribution is in demonstrating that adding a high-level feature loss term, extracted from the spectrograms using a VGG net, can improve separation quality vis-a-vis a pure pixel-level loss. We show this improvement in the context of the MMDenseNet, a State-of-the-Art deep learning model for this task, for the extraction of drums and vocal sounds from songs in the musdb18 database, covering a broad range of western music genres. We believe that this finding can be generalized and applied to broader machine learning-based systems in the audio domain.},\n  keywords = {audio signal processing;learning (artificial intelligence);music;source separation;spectrogram feature losses;deep learning-based music source separation;alternative loss;standard spectrogram pixel-level;L2 loss;model training;high-level feature loss term;spectrograms;VGG net;separation quality;pure pixel-level loss;State-of-the-Art deep learning model;western music genres;broader machine learning-based systems;Spectrogram;Feature extraction;Source separation;Task analysis;Instruments;Multiple signal classification;Training},\n  doi = {10.23919/EUSIPCO.2019.8903019},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570527378.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we study deep learning-based music source separation, and explore using an alternative loss to the standard spectrogram pixel-level L2 loss for model training. Our main contribution is in demonstrating that adding a high-level feature loss term, extracted from the spectrograms using a VGG net, can improve separation quality vis-a-vis a pure pixel-level loss. We show this improvement in the context of the MMDenseNet, a State-of-the-Art deep learning model for this task, for the extraction of drums and vocal sounds from songs in the musdb18 database, covering a broad range of western music genres. We believe that this finding can be generalized and applied to broader machine learning-based systems in the audio domain.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Coded Aperture Design for Super-Resolution Phase Retrieval.\n \n \n \n \n\n\n \n Bacca, J.; Pinilla, S.; and Arguello, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CodedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903020,\n  author = {J. Bacca and S. Pinilla and H. Arguello},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Coded Aperture Design for Super-Resolution Phase Retrieval},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Phase retrieval is an inverse problem which consists in estimating a complex signal from intensity-only measurements. Recent works have studied the problem of retrieving the phase of a high-resolution image from low-resolution phaseless measurements, under a setup that records coded diffraction patterns. However, the attainable resolution of the image depends on the sensor characteristics, whose cost increases in proportion to the resolution. Also, this methodology lacks theoretical analysis. Hence, this work derives a super-resolution model from low-resolution coded phaseless measurements, that in contrast with prior contributions, the attainable resolution of the image directly depends on the resolution of the coded aperture. For this model we establish that an image can be recovered (up to a global unimodular constant) with high probability. Also, the theoretical result states that the image reconstruction quality directly depends on the design of the coded aperture. Therefore, a strategy that designs the spatial distribution of the coded aperture is developed. Simulation results show that reconstruction quality using designed coded aperture is higher than the non-designed ensembles.},\n  keywords = {image reconstruction;image resolution;image retrieval;probability;aperture design;super-resolution phase retrieval;intensity-only measurements;high-resolution image;low-resolution phaseless measurements;attainable resolution;super-resolution model;image reconstruction quality;Apertures;Diffraction;Phase measurement;Spatial resolution;X-ray diffraction},\n  doi = {10.23919/EUSIPCO.2019.8903020},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533143.pdf},\n}\n\n
\n
\n\n\n
\n Phase retrieval is an inverse problem which consists in estimating a complex signal from intensity-only measurements. Recent works have studied the problem of retrieving the phase of a high-resolution image from low-resolution phaseless measurements, under a setup that records coded diffraction patterns. However, the attainable resolution of the image depends on the sensor characteristics, whose cost increases in proportion to the resolution. Also, this methodology lacks theoretical analysis. Hence, this work derives a super-resolution model from low-resolution coded phaseless measurements, that in contrast with prior contributions, the attainable resolution of the image directly depends on the resolution of the coded aperture. For this model we establish that an image can be recovered (up to a global unimodular constant) with high probability. Also, the theoretical result states that the image reconstruction quality directly depends on the design of the coded aperture. Therefore, a strategy that designs the spatial distribution of the coded aperture is developed. Simulation results show that reconstruction quality using designed coded aperture is higher than the non-designed ensembles.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n ExPACO: detection of an extended pattern under nonstationary correlated noise by patch covariance modeling.\n \n \n \n \n\n\n \n Flasseur, O.; Denis, L.; Thiébaut, É.; Olivier, T.; and Fournier, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ExPACO:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903021,\n  author = {O. Flasseur and L. Denis and É. Thiébaut and T. Olivier and C. Fournier},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {ExPACO: detection of an extended pattern under nonstationary correlated noise by patch covariance modeling},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In several areas of imaging, it is necessary to detect the weak signal of a known pattern superimposed over a background. Because of its temporal fluctuations, the background may be difficult to suppress. Detection of the pattern then requires a statistical modeling of the background. Due to difficulties related to (i) the estimation of the spatial correlations of the background, and (ii) the application of an optimal detector that accounts for these correlations, it is common practice to neglect them. In this work, spatial correlations at the scale of an image patch are locally estimated based on several background images. A fast algorithm for the computation of detection maps is derived. The proposed approach is evaluated on images obtained from a holographic microscope.},\n  keywords = {correlation methods;covariance matrices;image denoising;object detection;statistical analysis;ExPACO;extended pattern;nonstationary correlated noise;temporal fluctuations;statistical modeling;spatial correlations;optimal detector;image patch;background images;patch covariance modeling;holographic microscope;Correlation;Covariance matrices;Computational modeling;Two dimensional displays;Microscopy;Task analysis;Diffraction;matched filter;patch;shrinkage covariance estimator;correlation},\n  doi = {10.23919/EUSIPCO.2019.8903021},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532796.pdf},\n}\n\n
\n
\n\n\n
\n In several areas of imaging, it is necessary to detect the weak signal of a known pattern superimposed over a background. Because of its temporal fluctuations, the background may be difficult to suppress. Detection of the pattern then requires a statistical modeling of the background. Due to difficulties related to (i) the estimation of the spatial correlations of the background, and (ii) the application of an optimal detector that accounts for these correlations, it is common practice to neglect them. In this work, spatial correlations at the scale of an image patch are locally estimated based on several background images. A fast algorithm for the computation of detection maps is derived. The proposed approach is evaluated on images obtained from a holographic microscope.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Graph Filter Design Using Sum-of-squares Representation.\n \n \n \n \n\n\n \n Aittomäki, T.; and Leus, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"GraphPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903022,\n  author = {T. Aittomäki and G. Leus},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Graph Filter Design Using Sum-of-squares Representation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Graph filters are an essential part of signal processing on graphs enabling one to modify the spectral content of the graph signals. This paper proposes a graph filter optimization method with an exact control of the ripple on the passband and the stopband of the filter. The proposed filter design method is based on the sum-of-squares representation of positive polynomials. The optimization of both FIR and ARMA graph filters is convex with the proposed method.},\n  keywords = {FIR filters;graph theory;optimisation;polynomials;sum-of-squares representation;graph filter design;signal processing;spectral content;graph signals;graph filter optimization method;positive polynomials;Finite impulse response filters;Passband;Optimization;Design methodology;Europe;Convex functions;Graph filters;filter design;convex optimization},\n  doi = {10.23919/EUSIPCO.2019.8903022},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533877.pdf},\n}\n\n
\n
\n\n\n
\n Graph filters are an essential part of signal processing on graphs enabling one to modify the spectral content of the graph signals. This paper proposes a graph filter optimization method with an exact control of the ripple on the passband and the stopband of the filter. The proposed filter design method is based on the sum-of-squares representation of positive polynomials. The optimization of both FIR and ARMA graph filters is convex with the proposed method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Energy-Efficient Improper Signaling for K-User Interference Channels.\n \n \n \n \n\n\n \n Soleymani, M.; Lameiro, C.; Santamaria, I.; and Schreier, P. J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Energy-EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903023,\n  author = {M. Soleymani and C. Lameiro and I. Santamaria and P. J. Schreier},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Energy-Efficient Improper Signaling for K-User Interference Channels},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper investigates the energy efficiency (EE) of improper Gaussian signaling (IGS) in a K-user interference channel (IC). IGS allows unequal variances and/or correlation between the real and imaginary parts, and it has recently been shown to be advantageous in various interference-limited scenarios. In this paper, we propose an energy-efficient IGS design for the K-user IC, which is based on a separate optimization of the powers and complementary variances of the users. We compare the EE region achieved by the proposed scheme with that achieved by conventional proper signaling and show that IGS can significantly improve the EE region.},\n  keywords = {energy conservation;Gaussian processes;optimisation;radiofrequency interference;telecommunication power management;telecommunication signalling;wireless channels;K-user interference channel;improper Gaussian signaling;interference-limited scenarios;energy-efficient IGS design;K-user IC;complementary variances;energy-efficient improper signaling;EE;optimization;Integrated circuits;Optimization;Europe;Interference channels;Signal to noise ratio;Energy efficiency;improper Gaussian signaling;interference channels;multiuser},\n  doi = {10.23919/EUSIPCO.2019.8903023},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530773.pdf},\n}\n\n
\n
\n\n\n
\n This paper investigates the energy efficiency (EE) of improper Gaussian signaling (IGS) in a K-user interference channel (IC). IGS allows unequal variances and/or correlation between the real and imaginary parts, and it has recently been shown to be advantageous in various interference-limited scenarios. In this paper, we propose an energy-efficient IGS design for the K-user IC, which is based on a separate optimization of the powers and complementary variances of the users. We compare the EE region achieved by the proposed scheme with that achieved by conventional proper signaling and show that IGS can significantly improve the EE region.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n LOSoft: ℓ0 Minimization via Soft Thresholding.\n \n \n \n\n\n \n Sadeghi, M.; Ghayem, F.; Babaie-Zadeh, M.; Chatterjee, S.; Skoglund, M.; and Jutten, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903024,\n  author = {M. Sadeghi and F. Ghayem and M. Babaie-Zadeh and S. Chatterjee and M. Skoglund and C. Jutten},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {LOSoft: ℓ0 Minimization via Soft Thresholding},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We propose a new algorithm for finding a sparse solution of a linear system of equations using ℓ0 minimization. The proposed algorithm relies on approximating the non-smooth ℓ0 (pseudo) norm with a differentiable function. Unlike other approaches, we utilize a particular definition of the ℓ0 norm which states that the ℓ0 norm of a vector can be computed as the ℓ1 norm of its sign vector. Then, using a smooth approximation of the sign function, the problem is converted to ℓ1 minimization. This problem is solved via iterative proximal algorithms. Our simulations on both synthetic and real data demonstrate the promising performance of the proposed scheme.},\n  keywords = {approximation theory;convex programming;iterative methods;minimisation;sparse matrices;vectors;sparse solution;sign vector;smooth approximation;sign function;iterative proximal algorithms;LOSoft;soft thresholding;ℓ1 minimization;ℓ0 minimization;Signal processing algorithms;Approximation algorithms;Minimization;Noise measurement;Europe;Signal processing;Iterative algorithms;Compressed sensing;sparse representation;iterative hard thresholding;iterative soft thresholding;proximal algorithms},\n  doi = {10.23919/EUSIPCO.2019.8903024},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n We propose a new algorithm for finding a sparse solution of a linear system of equations using ℓ0 minimization. The proposed algorithm relies on approximating the non-smooth ℓ0 (pseudo) norm with a differentiable function. Unlike other approaches, we utilize a particular definition of the ℓ0 norm which states that the ℓ0 norm of a vector can be computed as the ℓ1 norm of its sign vector. Then, using a smooth approximation of the sign function, the problem is converted to ℓ1 minimization. This problem is solved via iterative proximal algorithms. Our simulations on both synthetic and real data demonstrate the promising performance of the proposed scheme.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Parameter-free Small Variance Asymptotics for Dictionary Learning.\n \n \n \n \n\n\n \n Dang, H. -P.; and Elvira, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Parameter-freePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903025,\n  author = {H. -P. Dang and C. Elvira},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Parameter-free Small Variance Asymptotics for Dictionary Learning},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Learning redundant dictionaries for sparse representation from sets of patches has proven its efficiency in solving inverse problems. However, the optimization process often calls for the prior knowledge of the noise level or the regularization parameters for sparse encoding. In a Bayesian framework, these parameters are integrated within the probabilistic model through the choice of prior distributions. Although efficient, these methods come with numerical disadvantages for large-scale data. Small-variance asymptotic (SVA) approaches pave the way to much cheaper though approximate methods for inference by taking advantage of a fruitful interaction between Bayesian models and optimization algorithms. We propose such an SVA analysis of a Bayesian dictionary learning (DL) model where the noise level and regularization level are jointly estimated so that nearly no parameter tuning is needed. We analyze this algorithm and demonstrate its efficiency on real data to illustrate the relevance of the resulting dictionaries.},\n  keywords = {Bayes methods;image representation;inverse problems;learning (artificial intelligence);SVA analysis;Bayesian dictionary learning model;noise level;regularization level;parameter tuning;parameter-free;variance asymptotics;redundant dictionaries;sparse representation;inverse problems;optimization process;regularization parameters;sparse encoding;Bayesian framework;probabilistic model;prior distributions;large-scale data;small-variance asymptotic;approximate methods;Bayesian models;optimization algorithms;Bayes methods;Signal processing algorithms;Dictionaries;Noise level;Numerical models;Machine learning;Approximation algorithms;Bayesian model;small variance asymptotic;sparse representations;dictionary learning;inverse problems},\n  doi = {10.23919/EUSIPCO.2019.8903025},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529423.pdf},\n}\n\n
\n
\n\n\n
\n Learning redundant dictionaries for sparse representation from sets of patches has proven its efficiency in solving inverse problems. However, the optimization process often calls for the prior knowledge of the noise level or the regularization parameters for sparse encoding. In a Bayesian framework, these parameters are integrated within the probabilistic model through the choice of prior distributions. Although efficient, these methods come with numerical disadvantages for large-scale data. Small-variance asymptotic (SVA) approaches pave the way to much cheaper though approximate methods for inference by taking advantage of a fruitful interaction between Bayesian models and optimization algorithms. We propose such an SVA analysis of a Bayesian dictionary learning (DL) model where the noise level and regularization level are jointly estimated so that nearly no parameter tuning is needed. We analyze this algorithm and demonstrate its efficiency on real data to illustrate the relevance of the resulting dictionaries.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Full-Rank Spatial Covariance Estimation Using Independent Low-Rank Matrix Analysis for Blind Source Separation.\n \n \n \n \n\n\n \n Kubo, Y.; Takamune, N.; Kitamura, D.; and Saruwatari, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903026,\n  author = {Y. Kubo and N. Takamune and D. Kitamura and H. Saruwatari},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Full-Rank Spatial Covariance Estimation Using Independent Low-Rank Matrix Analysis for Blind Source Separation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose a new algorithm that efficiently separates a directional source and diffuse background noise based on independent low-rank matrix analysis (ILRMA). ILRMA is one of the state-of-the-art techniques of blind source separation (BSS) and is based on a rank-1 spatial model. Although such a model does not hold for diffuse noise, ILRMA can accurately estimate the spatial parameters of the directional source. Motivated by this fact, we utilize these estimates to restore the lost spatial basis of diffuse noise, which can be considered as an efficient full-rank spatial covariance estimation. BSS experiments show the efficacy of the proposed method in terms of the computational cost and separation performance.},\n  keywords = {blind source separation;covariance matrices;efficient full-rank spatial covariance estimation;independent low-rank matrix analysis;blind source separation;directional source;diffuse background noise;ILRMA;rank-1 spatial model;diffuse noise;spatial parameters;lost spatial basis;computational cost;separation performance;Computational modeling;Estimation;Covariance matrices;Blind source separation;Computational efficiency;Microphones;Noise measurement;Blind source separation;independent low-rank matrix analysis;full-rank spatial covariance model;diffuse noise},\n  doi = {10.23919/EUSIPCO.2019.8903026},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531769.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a new algorithm that efficiently separates a directional source and diffuse background noise based on independent low-rank matrix analysis (ILRMA). ILRMA is one of the state-of-the-art techniques of blind source separation (BSS) and is based on a rank-1 spatial model. Although such a model does not hold for diffuse noise, ILRMA can accurately estimate the spatial parameters of the directional source. Motivated by this fact, we utilize these estimates to restore the lost spatial basis of diffuse noise, which can be considered as an efficient full-rank spatial covariance estimation. BSS experiments show the efficacy of the proposed method in terms of the computational cost and separation performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Path-connectedness of tensor ranks.\n \n \n \n \n\n\n \n Qi, Y.; Comon, P.; Lim, L. -H.; and Ye, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Path-connectednessPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903027,\n  author = {Y. Qi and P. Comon and L. -H. Lim and K. Ye},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Path-connectedness of tensor ranks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Computations of low-rank approximations of tensors often involve path-following optimization algorithms. In such cases, a correct solution may only be found if there exists a continuous path connecting the initial point to a desired solution. We will investigate the existence of such a path in sets of low-rank tensors for various notions of ranks, including tensor rank, border rank, multilinear rank, and their counterparts for symmetric tensors.},\n  keywords = {approximation theory;optimisation;tensors;path-connectedness;tensor rank;path-following optimization algorithms;low-rank tensors;border rank;multilinear rank;symmetric tensors;low-rank approximations;Tensors;Topology;Manifolds;Europe;Signal processing;Optimization;Tensor rank;symmetric rank;border rank;multilinear rank;symmetric multilinear rank;path connectedness},\n  doi = {10.23919/EUSIPCO.2019.8903027},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533052.pdf},\n}\n\n
\n
\n\n\n
\n Computations of low-rank approximations of tensors often involve path-following optimization algorithms. In such cases, a correct solution may only be found if there exists a continuous path connecting the initial point to a desired solution. We will investigate the existence of such a path in sets of low-rank tensors for various notions of ranks, including tensor rank, border rank, multilinear rank, and their counterparts for symmetric tensors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectrum sensing by higher-order SVM-based detection.\n \n \n \n \n\n\n \n Coluccia, A.; Fascista, A.; and Ricci, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SpectrumPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903028,\n  author = {A. Coluccia and A. Fascista and G. Ricci},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Spectrum sensing by higher-order SVM-based detection},\n  year = {2019},\n  pages = {1-5},\n  abstract = {A novel spectrum sensing algorithm based on support vector machine is proposed. The idea is to map the received signals into a multi-dimensional feature space obtained from well-known spectrum sensing statistics and their higher-order combinations. The approach has been implemented and validated on a software-defined radio testbed. Experimental results have shown the receiver operating characteristic (ROC) curve of the proposed detector can outperform classical spectrum sensing approaches without requiring knowledge of the noise variance.},\n  keywords = {cognitive radio;higher order statistics;radio spectrum management;sensitivity analysis;signal detection;software radio;support vector machines;telecommunication computing;support vector machine;received signals;multidimensional feature space;well-known spectrum sensing statistics;higher-order combinations;software-defined radio testbed;receiver operating characteristic curve;classical spectrum sensing approaches;higher-order SVM-based detection;Detectors;Support vector machines;Eigenvalues and eigenfunctions;Covariance matrices;Signal processing;Signal processing algorithms;spectrum sensing;support vector machine;detection;software-defined radio},\n  doi = {10.23919/EUSIPCO.2019.8903028},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530397.pdf},\n}\n\n
\n
\n\n\n
\n A novel spectrum sensing algorithm based on support vector machine is proposed. The idea is to map the received signals into a multi-dimensional feature space obtained from well-known spectrum sensing statistics and their higher-order combinations. The approach has been implemented and validated on a software-defined radio testbed. Experimental results have shown the receiver operating characteristic (ROC) curve of the proposed detector can outperform classical spectrum sensing approaches without requiring knowledge of the noise variance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A New Weighted NMF Algorithm For Missing Data Interpolation And Its Application To Speech Enhancement.\n \n \n \n \n\n\n \n Thakallapalli, S.; Gangashetty, S.; and Madhu, N.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903029,\n  author = {S. Thakallapalli and S. Gangashetty and N. Madhu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A New Weighted NMF Algorithm For Missing Data Interpolation And Its Application To Speech Enhancement},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper we present a novel weighted NMF (WNMF) algorithm for interpolating missing data. The proposed approach has a computational cost equivalent to that of standard NMF and, additionally, has the flexibility to control the degree of interpolation in the missing data regions. Existing WNMF methods do not offer this capability and, thereby, tend to overestimate the values in the masked regions. By constraining the estimates of the missing-data regions, the proposed approach allows for a better trade-off in the interpolation. We further demonstrate the applicability of WNMF and missing data estimation to the problem of speech enhancement. In this preliminary work, we consider the improvement obtainable by applying the proposed method to ideal binary mask-based gain functions. The instrumental quality metrics (PESQ and SNR) clearly indicate the added benefit of the missing data interpolation, compared to the output of the ideal binary mask. This preliminary work opens up novel possibilities not only in the field of speech enhancement but also, more generally, in the field of missing data interpolation using NMF.},\n  keywords = {interpolation;matrix decomposition;speech enhancement;missing data interpolation;new weighted NMF algorithm;speech enhancement;standard NMF;missing-data regions;missing data estimation;WNMF methods;ideal binary mask-based gain functions;instrumental quality metrics;PESQ;SNR;Interpolation;Matrix decomposition;Speech enhancement;Noise measurement;Manganese;Signal to noise ratio;Cost function;Weighted NMF;speech enhancement;binary mask;mask smoothing},\n  doi = {10.23919/EUSIPCO.2019.8903029},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533717.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we present a novel weighted NMF (WNMF) algorithm for interpolating missing data. The proposed approach has a computational cost equivalent to that of standard NMF and, additionally, has the flexibility to control the degree of interpolation in the missing data regions. Existing WNMF methods do not offer this capability and, thereby, tend to overestimate the values in the masked regions. By constraining the estimates of the missing-data regions, the proposed approach allows for a better trade-off in the interpolation. We further demonstrate the applicability of WNMF and missing data estimation to the problem of speech enhancement. In this preliminary work, we consider the improvement obtainable by applying the proposed method to ideal binary mask-based gain functions. The instrumental quality metrics (PESQ and SNR) clearly indicate the added benefit of the missing data interpolation, compared to the output of the ideal binary mask. This preliminary work opens up novel possibilities not only in the field of speech enhancement but also, more generally, in the field of missing data interpolation using NMF.\n
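For context, the standard weighted NMF baseline that the proposed method extends can be sketched with Euclidean multiplicative updates, where a binary observation mask plays the role of the weights. This is a sketch of the conventional WNMF the abstract contrasts against, not the paper's constrained interpolation variant.

```python
import numpy as np

rng = np.random.default_rng(1)

# nonnegative rank-4 data with ~30% of entries missing (mask M: 1 = observed, 0 = missing)
F, T, K = 20, 30, 4
V = rng.random((F, K)) @ rng.random((K, T))
M = (rng.random((F, T)) > 0.3).astype(float)

B = rng.random((F, K)) + 0.1   # basis
A = rng.random((K, T)) + 0.1   # activations
eps = 1e-9

def masked_err(B, A):
    # fit measured on observed entries only
    return np.linalg.norm(M * (V - B @ A))

err_start = masked_err(B, A)
for _ in range(200):
    # weighted multiplicative updates: the mask zeroes out missing entries
    A *= (B.T @ (M * V)) / (B.T @ (M * (B @ A)) + eps)
    B *= ((M * V) @ A.T) / ((M * (B @ A)) @ A.T + eps)
err_end = masked_err(B, A)
```

The product `B @ A` then fills the masked regions; the paper's point is that this plain scheme tends to overestimate there, motivating their constrained trade-off.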
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Graphical Schemes Designed to Display and Study the Long-term Variations of Schumann Resonance.\n \n \n \n \n\n\n \n Rodríguez-Camacho, J.; Lopera, J. F. G.; Salinas, A.; Fornieles-Callejón, J.; PortÍ, J.; Blanco-Navarro, D.; Carrión, M. C.; and Camba, E. N.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"GraphicalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903030,\n  author = {J. Rodríguez-Camacho and J. F. G. Lopera and A. Salinas and J. Fornieles-Callejón and J. PortÍ and D. Blanco-Navarro and M. C. Carrión and E. N. Camba},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Graphical Schemes Designed to Display and Study the Long-term Variations of Schumann Resonance},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This work proposes and illustrates a graphical approach aimed at studying a wide range of features of the ELF horizontal magnetic field signal recorded at the Sierra Nevada station (Spain). In addition to the traditional long-term variations in the parameters of the first three Schumann resonances (their amplitudes, central frequencies and widths), many other properties such as the saturations of the magnetometers, anomalous values for the parameters or spectra with any kind of particularities are taken into consideration in this work. These features can provide us with complementary information about the long-term variation of Schumann resonances, give an estimation of the extent up to which the results obtained are reliable and be correlated with the occurrence of lightning events or with changes in the electrical properties of the ionosphere. 
The scheme proposed in this work allows to instantaneously display the variations of all these features within a desired period of time.},\n  keywords = {graph theory;ionospheric electromagnetic wave propagation;magnetic fields;radiowave propagation;signal processing;long-term variation;Schumann resonance;central frequencies;graphical schemes;graphical approach;ELF horizontal magnetic field signal;Sierra Nevada station;lightning events;electrical properties;magnetometer saturation;ionosphere;Resonant frequency;Magnetometers;Lightning;Ground penetrating radar;Geophysical measurement techniques;Signal processing;Time-frequency analysis;Schumann resonance;signal processing;ELF transient events},\n  doi = {10.23919/EUSIPCO.2019.8903030},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533504.pdf},\n}\n\n
\n
\n\n\n
\n This work proposes and illustrates a graphical approach aimed at studying a wide range of features of the ELF horizontal magnetic field signal recorded at the Sierra Nevada station (Spain). In addition to the traditional long-term variations in the parameters of the first three Schumann resonances (their amplitudes, central frequencies and widths), many other properties such as the saturations of the magnetometers, anomalous values for the parameters or spectra with any kind of particularities are taken into consideration in this work. These features can provide us with complementary information about the long-term variation of Schumann resonances, give an estimate of the extent to which the results obtained are reliable, and be correlated with the occurrence of lightning events or with changes in the electrical properties of the ionosphere. The scheme proposed in this work allows the variations of all these features to be displayed instantaneously for a desired period of time.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust Subspace Tracking with Missing Data and Outliers via ADMM.\n \n \n \n \n\n\n \n Thanh, L. T.; Dung, N. V.; Trung, N. L.; and Abed-Meraim, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903031,\n  author = {L. T. Thanh and N. V. Dung and N. L. Trung and K. Abed-Meraim},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Robust Subspace Tracking with Missing Data and Outliers via ADMM},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Robust subspace tracking is crucial when dealing with data in the presence of both outliers and missing observations. In this paper, we propose a new algorithm, namely PETRELS-ADMM, to improve performance of subspace tracking in such scenarios. Outliers residing in the observed data are first detected in an efficient way and removed by the alternating direction method of multipliers (ADMM) solver. The underlying subspace is then updated by the algorithm of parallel estimation and tracking by recursive least squares (PETRELS) in which each row of the subspace matrix was estimated in parallel. Based on PETRELS-ADMM, we also derive an efficient way for robust matrix completion. Performance studies show the superiority of PETRELS-ADMM as compared to the state-of-the-art algorithms. We also illustrate its effectiveness for the application of background-foreground separation.},\n  keywords = {data handling;Gaussian processes;matrix algebra;parallel processing;robust subspace tracking;missing data;missing observations;PETRELS-ADMM;parallel estimation;subspace matrix;robust matrix completion;parallel estimation and tracking by recursive least squares;background-foreground separation;alternating direction method of multipliers;Signal processing algorithms;Principal component analysis;Convergence;Convex functions;Estimation;Cost function;Europe;Robust subspace tracking;robust PCA;robust matrix completion;missing data;outliers;alternating direction method of multipliers (ADMM).},\n  doi = {10.23919/EUSIPCO.2019.8903031},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529929.pdf},\n}\n\n
\n
\n\n\n
\n Robust subspace tracking is crucial when dealing with data in the presence of both outliers and missing observations. In this paper, we propose a new algorithm, namely PETRELS-ADMM, to improve the performance of subspace tracking in such scenarios. Outliers residing in the observed data are first detected efficiently and removed by an alternating direction method of multipliers (ADMM) solver. The underlying subspace is then updated by the algorithm of parallel estimation and tracking by recursive least squares (PETRELS), in which each row of the subspace matrix is estimated in parallel. Based on PETRELS-ADMM, we also derive an efficient method for robust matrix completion. Performance studies show the superiority of PETRELS-ADMM compared to state-of-the-art algorithms. We also illustrate its effectiveness for the application of background-foreground separation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spatio-Temporal Waveform Design in Active Sensing Systems with Multilayer Targets.\n \n \n \n \n\n\n \n Kariminezhad, A.; and Sezgin, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Spatio-TemporalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903033,\n  author = {A. Kariminezhad and A. Sezgin},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Spatio-Temporal Waveform Design in Active Sensing Systems with Multilayer Targets},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we study the optimal spatio-temporal waveform design for active sensing applications. For this purpose a multi-antenna radar is exploited. The targets in the radar vision are naturally composed of multiple layers of different materials. Therefore, the interaction of these layers with the incident wave effects targets detection and classification. In order to enhance the quality of detection, we propose to exploit space-time waveforms which adapt with the targets multilayer response. We consider the backscattered signal power as the utility function to be maximized. The backscattered signal power maximization under transmit signal power constraint is formulated as a semidefinite program (SDP). First, we assume a single-target scenario, where the resulting SDP yields an analytical solution. Second, we study the optimal waveform which considers the angle uncertainties of a target in the presence of a clutter. Third, having multiple targets and multiple clutters, the weighted sum of the backscattered signals power from the targets is maximized to deliver the backscattered power region outermost boundary. We observe that, when the targets material is given, the backscattered signal power can be significantly increased by optimal spatio-temporal waveform design. 
Moreover, we observe that by utilizing multiple temporal dimensions in the waveform design process, the number of exploited antennas can be significantly decreased.},\n  keywords = {antenna arrays;mathematical programming;object detection;radar antennas;radar clutter;radar receivers;sensors;active sensing systems;active sensing applications;multiantenna radar;radar vision;space-time waveforms;backscattered signal power maximization;single-target scenario;multiple temporal dimensions;waveform design process;signal power backscattering;signal power constraint transmission;optimal spatiotemporal waveform design;multilayer target response;incident wave effect target detection;incident wave effect target classification;SDP;semidefinite programming;Nonhomogeneous media;Radar;Uncertainty;Clutter;Surface impedance;Covariance matrices;Radar antennas;Spatio-temporal waveform;target material response;semidefinite program;Pareto boundary;uncertainty region},\n  doi = {10.23919/EUSIPCO.2019.8903033},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530375.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we study optimal spatio-temporal waveform design for active sensing applications. For this purpose, a multi-antenna radar is exploited. The targets in the radar's field of view are naturally composed of multiple layers of different materials. Therefore, the interaction of these layers with the incident wave affects target detection and classification. In order to enhance the quality of detection, we propose to exploit space-time waveforms which adapt to the targets' multilayer response. We consider the backscattered signal power as the utility function to be maximized. The backscattered signal power maximization under a transmit signal power constraint is formulated as a semidefinite program (SDP). First, we assume a single-target scenario, where the resulting SDP yields an analytical solution. Second, we study the optimal waveform which accounts for the angle uncertainties of a target in the presence of clutter. Third, with multiple targets and multiple clutter sources, the weighted sum of the backscattered signal powers from the targets is maximized to deliver the outermost boundary of the backscattered power region. We observe that, when the target material is given, the backscattered signal power can be significantly increased by optimal spatio-temporal waveform design. Moreover, we observe that by utilizing multiple temporal dimensions in the waveform design process, the number of exploited antennas can be significantly decreased.\n
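The single-target analytical solution mentioned above is an instance of a classical fact: maximizing the backscattered power x^H G x under a power budget ||x||^2 <= P is achieved by scaling the dominant eigenvector of the Hermitian Gram matrix G, with optimum value P times its largest eigenvalue. A generic numerical check follows; G is a random stand-in here, not the paper's multilayer target response model.

```python
import numpy as np

rng = np.random.default_rng(3)

m, P = 8, 2.0                         # space-time waveform length, transmit power budget
H = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
G = H.conj().T @ H                    # Hermitian PSD "target response" Gram matrix

evals, evecs = np.linalg.eigh(G)      # ascending eigenvalues
x_opt = np.sqrt(P) * evecs[:, -1]     # dominant eigenvector scaled to the power budget
p_opt = np.real(x_opt.conj() @ G @ x_opt)

# any other feasible waveform backscatters less power
x_rand = rng.standard_normal(m) + 1j * rng.standard_normal(m)
x_rand *= np.sqrt(P) / np.linalg.norm(x_rand)
p_rand = np.real(x_rand.conj() @ G @ x_rand)
```

The rank-one SDP relaxation being tight in this case is precisely why the paper's first scenario admits a closed form.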
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Scale-discretised ridgelet transform on the sphere.\n \n \n \n \n\n\n \n McEwen, J. D.; and Price, M. A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Scale-discretisedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903034,\n  author = {J. D. McEwen and M. A. Price},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Scale-discretised ridgelet transform on the sphere},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We revisit the spherical Radon transform, also called the Funk-Radon transform, viewing it as an axisymmetric convolution on the sphere. Viewing the spherical Radon transform in this manner leads to a straightforward derivation of its spherical harmonic representation, from which we show the spherical Radon transform can be inverted exactly for signals exhibiting antipodal symmetry. We then construct a spherical ridgelet transform by composing the spherical Radon and scale-discretised wavelet transforms on the sphere. The resulting spherical ridgelet transform also admits exact inversion for antipodal signals. The restriction to antipodal signals is expected since the spherical Radon and ridgelet transforms themselves result in signals that exhibit antipodal symmetry. Our ridgelet transform is defined natively on the sphere, probes signal content globally along great circles, does not exhibit blocking artefacts, supports spin signals and exhibits an exact and explicit inverse transform. No alternative ridgelet construction on the sphere satisfies all of these properties. Our implementation of the spherical Radon and ridgelet transforms is made publicly available. 
Finally, we illustrate the effectiveness of spherical ridgelets for diffusion magnetic resonance imaging of white matter fibers in the brain.},\n  keywords = {biodiffusion;biomedical MRI;brain;mathematics computing;medical image processing;Radon transforms;wavelet transforms;spherical Radon transform;scale-discretised wavelet transforms;resulting spherical ridgelet;antipodal signals;ridgelet transforms;scale-discretised ridgelet;Funk-Radon;spherical harmonic representation;antipodal symmetry;white matter fibers;brain;diffusion magnetic resonance imaging;Radon;Harmonic analysis;Wavelet transforms;Convolution;Magnetic resonance imaging;Laplace equations;Harmonic analysis;spheres;spherical Radon transform;Funk Radon transform;spherical wavelets;spherical ridgelets},\n  doi = {10.23919/EUSIPCO.2019.8903034},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570526731.pdf},\n}\n\n
\n
\n\n\n
\n We revisit the spherical Radon transform, also called the Funk-Radon transform, viewing it as an axisymmetric convolution on the sphere. Viewing the spherical Radon transform in this manner leads to a straightforward derivation of its spherical harmonic representation, from which we show the spherical Radon transform can be inverted exactly for signals exhibiting antipodal symmetry. We then construct a spherical ridgelet transform by composing the spherical Radon and scale-discretised wavelet transforms on the sphere. The resulting spherical ridgelet transform also admits exact inversion for antipodal signals. The restriction to antipodal signals is expected since the spherical Radon and ridgelet transforms themselves result in signals that exhibit antipodal symmetry. Our ridgelet transform is defined natively on the sphere, probes signal content globally along great circles, does not exhibit blocking artefacts, supports spin signals and exhibits an exact and explicit inverse transform. No alternative ridgelet construction on the sphere satisfies all of these properties. Our implementation of the spherical Radon and ridgelet transforms is made publicly available. Finally, we illustrate the effectiveness of spherical ridgelets for diffusion magnetic resonance imaging of white matter fibers in the brain.\n
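The exact-inversion claim can be checked in harmonic space: by the Funk-Hecke theorem, the spherical (Funk-)Radon transform multiplies each degree-l harmonic coefficient by 2πP_l(0), which vanishes for odd l — exactly the antipodal restriction described above. A small sketch of the degree multipliers follows (the wavelet part of the ridgelet construction is not shown); SciPy is assumed.

```python
import numpy as np
from scipy.special import eval_legendre

# Funk-Radon transform acts diagonally across harmonic degrees l with
# multiplier 2*pi*P_l(0); P_l(0) = 0 for odd l (no odd/antisymmetric content survives)
L = 8
mult = np.array([2 * np.pi * eval_legendre(l, 0.0) for l in range(L)])

rng = np.random.default_rng(5)
f = rng.standard_normal(L)           # one coefficient per degree (m index suppressed)
g = mult * f                         # forward transform in harmonic space

even = np.arange(0, L, 2)
f_rec = np.zeros(L)
f_rec[even] = g[even] / mult[even]   # exact inversion on the even (antipodal) part
```

Dividing by the nonzero even-degree multipliers is the "exact and explicit inverse" available for antipodal signals.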
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Acoustic Source Position Estimation Based On Multi-Feature Gaussian Processes.\n \n \n \n \n\n\n \n Brendel, A.; Altmann, I.; and Kellermann, W.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AcousticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903035,\n  author = {A. Brendel and I. Altmann and W. Kellermann},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Acoustic Source Position Estimation Based On Multi-Feature Gaussian Processes},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Gaussian Processes, representing a Bayesian frame-work for regression, were already previously shown to allow effective range estimation in highly reverberant and noisy scenarios from a single pair of microphones when using the Coherent-to-Diffuse Power Ratio as a feature. In this work we investigate how Gaussian Process regression can jointly estimate range and Direction of Arrival by using the Coherent-to-Diffuse Power Ratio and an additional Direction of Arrival estimation feature (e.g., MUSIC) to achieve an estimate of the source position, based on a single concentrated array requiring only two sensors as a minimum.},\n  keywords = {acoustic signal processing;Bayes methods;direction-of-arrival estimation;Gaussian processes;regression analysis;reverberation;acoustic source position estimation;multifeature Gaussian Processes;Bayesian frame-work;effective range estimation;highly reverberant scenarios;noisy scenarios;Coherent-to-Diffuse Power Ratio;Gaussian Process regression;estimate range;additional Direction;Arrival estimation feature;single concentrated array;Estimation;Direction-of-arrival estimation;Microphones;Acoustics;Training;Multiple signal classification;Gaussian process regression;acoustic source localization},\n  doi = {10.23919/EUSIPCO.2019.8903035},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528006.pdf},\n}\n\n
\n
\n\n\n
\n Gaussian Processes, representing a Bayesian framework for regression, have previously been shown to allow effective range estimation in highly reverberant and noisy scenarios from a single pair of microphones when using the Coherent-to-Diffuse Power Ratio as a feature. In this work we investigate how Gaussian Process regression can jointly estimate range and Direction of Arrival by using the Coherent-to-Diffuse Power Ratio and an additional Direction of Arrival estimation feature (e.g., MUSIC) to achieve an estimate of the source position, based on a single concentrated array requiring a minimum of only two sensors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n How to Apply Random Projections to Nonnegative Matrix Factorization with Missing Entries?.\n \n \n \n \n\n\n \n Yahaya, F.; Puigt, M.; Delmaire, G.; and Roussel, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"HowPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903036,\n  author = {F. Yahaya and M. Puigt and G. Delmaire and G. Roussel},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {How to Apply Random Projections to Nonnegative Matrix Factorization with Missing Entries?},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Random projections belong to the major techniques to process big data and have been successfully applied to Nonnegative Matrix Factorization (NMF). However, they cannot be applied in the case of missing entries in the matrix to factorize, which occurs in many actual problems with large data matrices. In this paper, we thus aim to solve this issue and we propose a novel framework to apply random projections in weighted NMF, where the weight models the confidence in the data (or the absence of confidence in the case of missing data). We experimentally show the proposed framework to significantly speed-up state-of-the-art NMF methods under some mild conditions. In particular, the proposed strategy is particularly efficient when combined with Nesterov gradient or alternating least squares.},\n  keywords = {Big Data;gradient methods;least squares approximations;matrix decomposition;NMF methods;apply random projections;Nonnegative Matrix Factorization;missing entries;big data;data matrices;weighted NMF;missing data;Economic indicators;Signal processing algorithms;Estimation;Matrix decomposition;Europe;Signal processing;Sparse matrices;Nonnegative matrix factorization;missing data;random projections;low-rank matrix completion;blind source separation;big data},\n  doi = {10.23919/EUSIPCO.2019.8903036},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533937.pdf},\n}\n\n
\n
\n\n\n
\n Random projections are among the major techniques for processing big data and have been successfully applied to Nonnegative Matrix Factorization (NMF). However, they cannot be applied when there are missing entries in the matrix to factorize, which occurs in many practical problems with large data matrices. In this paper, we aim to solve this issue and propose a novel framework to apply random projections in weighted NMF, where the weight models the confidence in the data (or the absence of confidence in the case of missing data). We experimentally show that the proposed framework significantly speeds up state-of-the-art NMF methods under some mild conditions. In particular, the proposed strategy is particularly efficient when combined with Nesterov gradient or alternating least squares.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Matrix Cofactorization for Joint Unmixing and Classification of Hyperspectral Images.\n \n \n \n \n\n\n \n Lagrange, A.; Fauvel, M.; May, S.; Bioucas-Dias, J. M.; and Dobigeon, N.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 01-05, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"MatrixPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903037,\n  author = {A. Lagrange and M. Fauvel and S. May and J. M. Bioucas-Dias and N. Dobigeon},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Matrix Cofactorization for Joint Unmixing and Classification of Hyperspectral Images},\n  year = {2019},\n  pages = {01-05},\n  abstract = {This paper introduces a matrix cofactorization approach to perform spectral unmixing and classification jointly. After formulating the unmixing and classification tasks as matrix factorization problems, a link is introduced between the two coding matrices, namely the abundance matrix and the feature matrix. This coupling term can be interpreted as a clustering term where the abundance vectors are clustered and the resulting attribution vectors are then used as feature vectors. The overall non-smooth, non-convex optimization problem is solved using a proximal alternating linearized minimization algorithm (PALM) ensuring convergence to a critical point. 
The quality of the obtained results is finally assessed by comparison to other conventional algorithms on semi-synthetic yet realistic dataset.},\n  keywords = {convergence of numerical methods;convex programming;feature extraction;image classification;iterative methods;least squares approximations;matrix decomposition;minimisation;joint unmixing;hyperspectral images;matrix cofactorization approach;unmixing classification;matrix factorization problems;abundance matrix;feature matrix;coupling term;clustering term;abundance vectors;feature vectors;nonconvex optimization problem;attribution vectors;proximal alternating linearized minimization algorithm;PALM algorithm;Optimization;Couplings;Task analysis;Indexes;Hyperspectral imaging;Clustering algorithms;supervised learning;spectral unmixing;cofactorization;hyperspectral images},\n  doi = {10.23919/EUSIPCO.2019.8903037},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531255.pdf},\n}\n\n
\n
\n\n\n
\n This paper introduces a matrix cofactorization approach to perform spectral unmixing and classification jointly. After formulating the unmixing and classification tasks as matrix factorization problems, a link is introduced between the two coding matrices, namely the abundance matrix and the feature matrix. This coupling term can be interpreted as a clustering term where the abundance vectors are clustered and the resulting attribution vectors are then used as feature vectors. The overall non-smooth, non-convex optimization problem is solved using a proximal alternating linearized minimization algorithm (PALM) ensuring convergence to a critical point. The quality of the obtained results is finally assessed by comparison to other conventional algorithms on a semi-synthetic yet realistic dataset.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Quantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation.\n \n \n \n \n\n\n \n Cai, X.; Pereyra, M.; and McEwen, J. D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"QuantifyingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903038,\n  author = {X. Cai and M. Pereyra and J. D. McEwen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Quantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Inverse problems play a key role in modern image/signal processing methods. However, since they are generally ill-conditioned or ill-posed due to lack of observations, their solutions may have significant intrinsic uncertainty. Analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems and problems with non-smooth objective functionals (e.g. sparsity-promoting priors). In this article, a series of strategies to visualise this uncertainty are presented, e.g. highest posterior density credible regions, and local credible intervals (cf. error bars) for individual pixels and superpixels. Our methods support non-smooth priors for inverse problems and can be scaled to high-dimensional settings. Moreover, we present strategies to automatically set regularisation parameters so that the proposed uncertainty quantification (UQ) strategies become much easier to use. 
Also, different kinds of dictionaries (complete and over-complete) are used to represent the image/signal and their performance in the proposed UQ methodology is investigated.},\n  keywords = {Bayes methods;convex programming;inverse problems;sampling methods;signal processing;high-dimensional settings;uncertainty quantification strategies;high dimensional inverse problems;convex optimisation;high-dimensional problems;nonsmooth objective functionals;sparsity-promoting priors;highest posterior density credible regions;local credible intervals;Estimation;Uncertainty;Inverse problems;Bayes methods;Dictionaries;Optimization;Europe;Uncertainty quantification;image/signal processing;inverse problem;Bayesian inference;convex optimisation},\n  doi = {10.23919/EUSIPCO.2019.8903038},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570526876.pdf},\n}\n\n
\n
\n\n\n
\n Inverse problems play a key role in modern image/signal processing methods. However, since they are generally ill-conditioned or ill-posed due to lack of observations, their solutions may have significant intrinsic uncertainty. Analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems and problems with non-smooth objective functionals (e.g. sparsity-promoting priors). In this article, a series of strategies to visualise this uncertainty are presented, e.g. highest posterior density credible regions, and local credible intervals (cf. error bars) for individual pixels and superpixels. Our methods support non-smooth priors for inverse problems and can be scaled to high-dimensional settings. Moreover, we present strategies to automatically set regularisation parameters so that the proposed uncertainty quantification (UQ) strategies become much easier to use. Also, different kinds of dictionaries (complete and over-complete) are used to represent the image/signal and their performance in the proposed UQ methodology is investigated.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fully Proximal Splitting Algorithms In Image Recovery.\n \n \n \n \n\n\n \n Combettes, P. L.; and Glaudin, L. E.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FullyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903039,\n  author = {P. L. Combettes and L. E. Glaudin},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Fully Proximal Splitting Algorithms In Image Recovery},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Structured convex optimization problems in image recovery typically involve a mix of smooth and nonsmooth functions. The common practice is to activate the smooth functions via their gradient and the nonsmooth ones via their proximity operator. We show that, although intuitively natural, this approach is not necessarily the most efficient numerically and that, in particular, activating all the functions proximally may be advantageous. To make this viewpoint viable computationally, we derive a number of new examples of proximity operators of smooth convex functions arising in applications.},\n  keywords = {convex programming;image processing;nonsmooth functions;smooth functions;proximity operator;smooth convex functions;image recovery;structured convex optimization problems;fully proximal splitting algorithms;Signal processing algorithms;Convex functions;Signal processing;Convergence;Europe;Gradient methods;convex optimization;image recovery;nonsmooth optimization;proximal splitting algorithm;proximity operator},\n  doi = {10.23919/EUSIPCO.2019.8903039},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533861.pdf},\n}\n\n
\n
\n\n\n
\n Structured convex optimization problems in image recovery typically involve a mix of smooth and nonsmooth functions. The common practice is to activate the smooth functions via their gradient and the nonsmooth ones via their proximity operator. We show that, although intuitively natural, this approach is not necessarily the most efficient numerically and that, in particular, activating all the functions proximally may be advantageous. To make this viewpoint viable computationally, we derive a number of new examples of proximity operators of smooth convex functions arising in applications.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n CNN-based virtual microphone signal estimation for MPDR beamforming in underdetermined situations.\n \n \n \n \n\n\n \n Yamaoka, K.; Li, L.; Ono, N.; Makino, S.; and Yamada, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CNN-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903040,\n  author = {K. Yamaoka and L. Li and N. Ono and S. Makino and T. Yamada},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {CNN-based virtual microphone signal estimation for MPDR beamforming in underdetermined situations},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose a novel approach to virtually increasing the number of microphone elements between two real microphones to improve speech enhancement performance in underdetermined situations. The virtual microphone technique, with which virtual signals in the audio signal domain are estimated by linearly interpolating the phase and nonlinearly interpolating the amplitude independently on the basis of β-divergence, has been recently proposed and experimentally shown to be effective in improving speech enhancement performance. Furthermore, it has been reported that the performance tends to improve as the nonlinearity is improved. However, one drawback of this method is that the interpolation is employed in each time-frequency bin independently, in which the spectral and temporal structures of speech signals are ignored. To address this problem and improve the nonlinearity, motivated by the high capability of neural networks to model nonlinear functions and speech spectrograms, in this paper, we propose an alternative method of amplitude interpolation. In this method, we employ a convolutional neural network as an amplitude estimator that minimizes the mean squared error between the outputs of a minimum power distortionless response (MPDR) beamformer and the target speech signals. 
The experimental results revealed that the proposed method showed high potential for improving speech enhancement performance, which was not only superior to that of the conventional virtual microphone technique but also the performance in the corresponding determined situation.},\n  keywords = {convolutional neural nets;interpolation;microphones;speech enhancement;nonlinear functions;speech spectrograms;amplitude interpolation;amplitude estimator;minimum power distortionless response beamformer;MPDR;target speech signals;speech enhancement performance;virtual microphone technique;CNN-based virtual microphone signal estimation;microphone elements;microphones;virtual signals;audio signal domain;linearly interpolating;nonlinearity;Microphones;Interpolation;Speech enhancement;Time-frequency analysis;Spectrogram;Logic gates},\n  doi = {10.23919/EUSIPCO.2019.8903040},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533075.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a novel approach to virtually increasing the number of microphone elements between two real microphones to improve speech enhancement performance in underdetermined situations. The virtual microphone technique, with which virtual signals in the audio signal domain are estimated by linearly interpolating the phase and nonlinearly interpolating the amplitude independently on the basis of β-divergence, has recently been proposed and experimentally shown to be effective in improving speech enhancement performance. Furthermore, it has been reported that the performance tends to improve as the nonlinearity is improved. However, one drawback of this method is that the interpolation is performed in each time-frequency bin independently, ignoring the spectral and temporal structures of speech signals. To address this problem and improve the nonlinearity, motivated by the high capability of neural networks to model nonlinear functions and speech spectrograms, we propose an alternative method of amplitude interpolation. In this method, we employ a convolutional neural network as an amplitude estimator that minimizes the mean squared error between the outputs of a minimum power distortionless response (MPDR) beamformer and the target speech signals. The experimental results revealed that the proposed method shows high potential for improving speech enhancement performance, superior not only to that of the conventional virtual microphone technique but also to the performance in the corresponding determined situation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast Alignment of Limited Angle Tomograms by projected Cross Correlation.\n \n \n \n \n\n\n \n Sánchez, R. M.; Mester, R.; and Kudryashev, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FastPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903041,\n  author = {R. M. Sánchez and R. Mester and M. Kudryashev},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Fast Alignment of Limited Angle Tomograms by projected Cross Correlation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Volume alignment is a computationally intensive task. In Subtomogram Averaging (StA) from electron cryotomograms (CryoET), thousands of subtomograms are aligned to a reference, which may take hours until days of computational time. CryoET datasets contain a limited number of noisy projections, with very low signal-to-until ratio (SNR). The noisy subtomograms are aligned to a reference using cross-correlation, an operation that can be optimized when working with limited angle tomograms (LAT), as they are sparse in Fourier space. We propose a projected cross-correlation (pCC) algorithm, a faster approach to computing the cross-correlation between a limited angle (sub)-tomogram and a given reference, and we use pCC to design a new procedure for volume alignment. pCC employs the projections to calculate the cross-correlation with lower computational complexity, as it works with a set 2D projections instead of volumes. With this, we propose the Substacks Averaging (SsA) method as an alternative to the conventional Subtomogram Averaging (StA). Our results on test data shows that SsA is considerably faster than the reference StA implementation: for 41 projections (k= 41) and N=200, the SsA is 35 times faster, and for N=320, is 150 times faster. 
Furthermore, SsA results in higher precision of alignment of subtomograms at different noise levels.},\n  keywords = {computerised tomography;fast Fourier transforms;image reconstruction;iterative methods;medical image processing;limited angle tomograms;projected cross correlation;volume alignment;computationally intensive task;computational time;CryoET datasets;noisy projections;noisy subtomograms;projected cross-correlation algorithm;pCC;angle-tomogram;subtomogram averaging;signal-to-until ratio;Signal processing algorithms;Two dimensional displays;Mathematical model;Signal to noise ratio;Computational complexity;Three-dimensional displays;Europe},\n  doi = {10.23919/EUSIPCO.2019.8903041},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533917.pdf},\n}\n\n
\n
\n\n\n
\n Volume alignment is a computationally intensive task. In Subtomogram Averaging (StA) from electron cryotomograms (CryoET), thousands of subtomograms are aligned to a reference, which may take hours to days of computational time. CryoET datasets contain a limited number of noisy projections with very low signal-to-noise ratio (SNR). The noisy subtomograms are aligned to a reference using cross-correlation, an operation that can be optimized when working with limited angle tomograms (LAT), as they are sparse in Fourier space. We propose a projected cross-correlation (pCC) algorithm, a faster approach to computing the cross-correlation between a limited angle (sub)tomogram and a given reference, and we use pCC to design a new procedure for volume alignment. pCC uses the projections to calculate the cross-correlation with lower computational complexity, as it works with a set of 2D projections instead of volumes. With this, we propose the Substacks Averaging (SsA) method as an alternative to conventional Subtomogram Averaging (StA). Our results on test data show that SsA is considerably faster than the reference StA implementation: for 41 projections (k=41) and N=200, SsA is 35 times faster, and for N=320, 150 times faster. Furthermore, SsA yields higher alignment precision of subtomograms at different noise levels.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Understanding Support Vector Machines with Polynomial Kernels.\n \n \n \n \n\n\n \n Vinge, R.; and McKelvey, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"UnderstandingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903042,\n  author = {R. Vinge and T. McKelvey},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Understanding Support Vector Machines with Polynomial Kernels},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Interpreting models learned by a support vector machine (SVM) is often difficult, if not impossible, due to working in high-dimensional spaces. In this paper, we present an investigation into polynomial kernels for the SVM. We show that the models learned by these machines are constructed from terms related to the statistical moments of the support vectors. This allows us to deepen our understanding of the internal workings of these models and, for example, gauge the importance of combinations of features. We also discuss how the SVM with a quadratic kernel is related to the likelihood-ratio test for normally distributed populations.},\n  keywords = {learning (artificial intelligence);normal distribution;polynomials;support vector machines;statistical moments;support vectors;SVM;quadratic kernel;polynomial kernels;support vector machine;likelihood-ratio test;Support vector machines;Kernel;Correlation;Machine learning algorithms;Europe;Signal processing;Covariance matrices;Interpretation;Support Vector Machine;Polynomial Kernel;Statistical Moments;Likelihood Ratio Test;Quadratic Discrimination},\n  doi = {10.23919/EUSIPCO.2019.8903042},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533801.pdf},\n}\n\n
\n
\n\n\n
\n Interpreting models learned by a support vector machine (SVM) is often difficult, if not impossible, due to working in high-dimensional spaces. In this paper, we present an investigation into polynomial kernels for the SVM. We show that the models learned by these machines are constructed from terms related to the statistical moments of the support vectors. This allows us to deepen our understanding of the internal workings of these models and, for example, gauge the importance of combinations of features. We also discuss how the SVM with a quadratic kernel is related to the likelihood-ratio test for normally distributed populations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Malware Identification with Dictionary Learning.\n \n \n \n \n\n\n \n Irofti, P.; and Băltoiu, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"MalwarePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903043,\n  author = {P. Irofti and A. Băltoiu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Malware Identification with Dictionary Learning},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Malware identification is a difficult task that has been recently approached by training classifiers through machine learning. We present here a low complexity semi-supervised dictionary learning framework that begins with training an initial dictionary on a small labeled data set, and then continues with online learning on incoming unlabeled data, making use of every sample that it is exposed to, with the scope of adapting to new and unknown malware types. Our main contribution is a new online algorithm that makes use of regularization techniques that balance the capability of the dictionary to express both fresh and well established patterns.},\n  keywords = {dictionaries;invasive software;learning (artificial intelligence);pattern classification;malware identification;dictionary learning;training classifiers;machine learning;low complexity semisupervised dictionary;initial dictionary;labeled data set;incoming unlabeled data;unknown malware types;online algorithm;Dictionaries;Malware;Machine learning;Training;Task analysis;Signal processing algorithms;Performance evaluation;malware identification;online semisupervised learning;dictionary learning;sparse representations},\n  doi = {10.23919/EUSIPCO.2019.8903043},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533803.pdf},\n}\n\n
\n
\n\n\n
\n Malware identification is a difficult task that has recently been approached by training classifiers through machine learning. We present a low-complexity semi-supervised dictionary learning framework that begins by training an initial dictionary on a small labeled data set, and then continues with online learning on incoming unlabeled data, making use of every sample it is exposed to, with the aim of adapting to new and unknown malware types. Our main contribution is a new online algorithm that uses regularization techniques to balance the capability of the dictionary to express both fresh and well-established patterns.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n GestureKeeper: Gesture Recognition for Controlling Devices in IoT Environments.\n \n \n \n \n\n\n \n Sideridis, V.; Zacharakis, A.; Tzagkarakis, G.; and Papadopouli, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"GestureKeeper:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903044,\n  author = {V. Sideridis and A. Zacharakis and G. Tzagkarakis and M. Papadopouli},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {GestureKeeper: Gesture Recognition for Controlling Devices in IoT Environments},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper introduces and evaluates the Gesture-Keeper, a robust hand-gesture recognition system based on a wearable inertial measurements unit (IMU). The identification of the time windows where the gestures occur, without relying on an explicit user action or a special gesture marker, is a very challenging task. To address this problem, GestureKeeper identifies the start of a gesture by exploiting the underlying dynamics of the associated time series using a recurrence quantification analysis (RQA). RQA is a powerful method for nonlinear time-series analysis, which enables the detection of critical transitions in the system's dynamical behavior. Most importantly, it does not make any assumption about the underlying distribution or model that governs the data. Having estimated the gesture window, a support vector machine is employed to recognize the specific gesture. Our proposed method is evaluated by means of a small-scale pilot study at FORTH and demonstrated that GestureKeeper can identify correctly the start of a gesture with a 87% mean balanced accuracy and classify correctly the specific hand-gesture with a mean accuracy of over 96%. To the best of our knowledge, GestureKeeper is the first automatic hand-gesture identification system based only on accelerometer. The performance analysis reveals the predictive power of the features and the system's robustness in the presence of additive noise. We also performed a sensitivity analysis to examine the impact of various parameters and a comparative analysis of different classifiers (SVM, random forests). 
Most importantly, the system can be extended to incorporate a large dictionary of gestures and operate without further calibration for a new user.},\n  keywords = {accelerometers;gesture recognition;sensitivity analysis;support vector machines;time series;automatic hand-gesture identification system;robust hand-gesture recognition system;wearable inertial measurements unit;special gesture marker;time series;recurrence quantification analysis;nonlinear time-series analysis;gesture window;IoT environment;gesture-keeper;critical transitions detection;support vector machine;sensitivity analysis;Support vector machines;Trajectory;Time series analysis;Sensors;Acceleration;Dictionaries;Microsoft Windows;Hand-gesture identification and recognition;inertial measurement unit;support vector machine;recurrence quantification analysis},\n  doi = {10.23919/EUSIPCO.2019.8903044},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533743.pdf},\n}\n\n
\n
\n\n\n
\n This paper introduces and evaluates GestureKeeper, a robust hand-gesture recognition system based on a wearable inertial measurement unit (IMU). Identifying the time windows in which gestures occur, without relying on an explicit user action or a special gesture marker, is a very challenging task. To address this problem, GestureKeeper identifies the start of a gesture by exploiting the underlying dynamics of the associated time series using recurrence quantification analysis (RQA). RQA is a powerful method for nonlinear time-series analysis, which enables the detection of critical transitions in the system's dynamical behavior. Most importantly, it does not make any assumption about the underlying distribution or model that governs the data. Having estimated the gesture window, a support vector machine is employed to recognize the specific gesture. The proposed method was evaluated in a small-scale pilot study at FORTH, which demonstrated that GestureKeeper can correctly identify the start of a gesture with an 87% mean balanced accuracy and correctly classify the specific hand-gesture with a mean accuracy of over 96%. To the best of our knowledge, GestureKeeper is the first automatic hand-gesture identification system based only on an accelerometer. The performance analysis reveals the predictive power of the features and the system's robustness in the presence of additive noise. We also performed a sensitivity analysis to examine the impact of various parameters and a comparative analysis of different classifiers (SVM, random forests). Most importantly, the system can be extended to incorporate a large dictionary of gestures and operate without further calibration for a new user.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Complete Framework of Radar Pulse Detection and Modulation Classification for Cognitive EW.\n \n \n \n \n\n\n \n Yar, E.; Kocamis, M. B.; Orduyilmaz, A.; Serin, M.; and Efe, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903045,\n  author = {E. Yar and M. B. Kocamis and A. Orduyilmaz and M. Serin and M. Efe},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Complete Framework of Radar Pulse Detection and Modulation Classification for Cognitive EW},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we consider automatic radar pulse detection and intra-pulse modulation classification for cognitive electronic warfare applications. In this manner, we introduce an end-to-end framework for detection and classification of radar pulses. Our approach is complete, i.e., we provide raw radar signal at the input side and produce categorical output at the output. We use short time Fourier transform to obtain time-frequency image of the signal. Hough transform is used to detect pulses in time-frequency images and pulses are represented with a single line. Then, convolutional neural networks are used for pulse classification. In experiments, we provide classification results at different SNR levels.},\n  keywords = {convolutional neural nets;electronic warfare;Fourier transforms;Hough transforms;military radar;pulse modulation;radar computing;radar detection;radar signal processing;signal classification;time-frequency analysis;cognitive EW;automatic radar pulse detection;intra-pulse modulation classification;cognitive electronic warfare applications;end-to-end framework;categorical output;short time Fourier transform;time-frequency image;radar pulse detection;radar pulse classification;raw radar signal classification;convolutional neural networks;Hough transform;Radar detection;Signal to noise ratio;Feature extraction;Radar imaging;Frequency modulation;Cognitive EW;pulse detection;intra-pulse modulation classification;convolutional neural networks},\n  doi = {10.23919/EUSIPCO.2019.8903045},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = 
{https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533677.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we consider automatic radar pulse detection and intra-pulse modulation classification for cognitive electronic warfare applications. To this end, we introduce an end-to-end framework for the detection and classification of radar pulses. Our approach is complete, i.e., the raw radar signal is provided at the input side and a categorical output is produced at the output. We use the short-time Fourier transform to obtain a time-frequency image of the signal. The Hough transform is used to detect pulses in the time-frequency images, and each pulse is represented by a single line. Then, convolutional neural networks are used for pulse classification. In the experiments, we provide classification results at different SNR levels.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Learning of Image Dehazing Models for Segmentation Tasks.\n \n \n \n \n\n\n \n d. Blois, S.; Hedhli, I.; and Gagné, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"LearningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903046,\n  author = {S. d. Blois and I. Hedhli and C. Gagné},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Learning of Image Dehazing Models for Segmentation Tasks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {To evaluate their performance, existing dehazing approaches generally rely on distance measures between the generated image and its corresponding ground truth. Despite its ability to produce visually good images, using pixel-based or even perceptual metrics does not guarantee, in general, that the produced image is fit for being used as input for low-level computer vision tasks such as segmentation. To overcome this weakness, we are proposing a novel end-to-end approach for image dehazing, fit for being used as input to an image segmentation procedure, while maintaining the visual quality of the generated images. Inspired by the success of Generative Adversarial Networks (GAN), we propose to optimize the generator by introducing a discriminator network and a loss function that evaluates segmentation quality of dehazed images. In addition, we make use of a supplementary loss function that verifies that the visual and the perceptual quality of the generated image are preserved in hazy conditions. 
Results obtained using the proposed technique are appealing, with a favorable comparison to state-of-the-art approaches when considering the performance of segmentation algorithms on the hazy images.},\n  keywords = {computer vision;image colour analysis;image enhancement;image restoration;image segmentation;neural nets;image segmentation procedure;visual quality;generative adversarial networks;hazy images;image dehazing models;low-level computer vision tasks;end-to-end approach;discriminator network;supplementary loss function;Image segmentation;Training;Computational modeling;Generators;Task analysis;Measurement;Testing;Dehazing;Image segmentation;Deep neural network;Generative models},\n  doi = {10.23919/EUSIPCO.2019.8903046},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533742.pdf},\n}\n\n
\n
\n\n\n
\n To evaluate their performance, existing dehazing approaches generally rely on distance measures between the generated image and its corresponding ground truth. Despite their ability to produce visually good images, pixel-based or even perceptual metrics do not guarantee, in general, that the produced image is fit for use as input to low-level computer vision tasks such as segmentation. To overcome this weakness, we propose a novel end-to-end approach for image dehazing whose output is fit for use as input to an image segmentation procedure, while maintaining the visual quality of the generated images. Inspired by the success of Generative Adversarial Networks (GAN), we propose to optimize the generator by introducing a discriminator network and a loss function that evaluates the segmentation quality of dehazed images. In addition, we make use of a supplementary loss function that verifies that the visual and perceptual quality of the generated image are preserved in hazy conditions. Results obtained using the proposed technique are appealing, comparing favorably to state-of-the-art approaches when considering the performance of segmentation algorithms on the hazy images.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Generalized Gamma Distribution SAR Sea Clutter Modelling for Oil Spill Candidates Detection.\n \n \n \n \n\n\n \n Benito-Ortiz, M. -.; Mata-Moya, D.; Jarabo-Amores, M. -.; d. Rey-Maestre, N.; and Gomez-del-Hoyo, P. -.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"GeneralizedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903047,\n  author = {M. -C. Benito-Ortiz and D. Mata-Moya and M. -P. Jarabo-Amores and N. d. Rey-Maestre and P. -J. Gomez-del-Hoyo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Generalized Gamma Distribution SAR Sea Clutter Modelling for Oil Spill Candidates Detection},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper tackles the oil spills offshore monitoring using satellite Earth Observation tools based on Synthetic Aperture Radar (SAR) sensors. The proposed processing scheme is based on modelling SAR sea backscattering assuming a Generalized Gamma Distribution clutter. The signal processing scheme includes a first stage to define the non-homogeneous area due to the presence of dark spots in function of the multi-scale estimations of a textural parameter defined as the inverse of the product between shape and scale sea clutter parameters. After an statistical study of this parameter, a robust value can be defined for comparison purposes. In the resulted search area, an adaptive thresholding is performed to obtain a segmented image with the oil slicks candidates contouring at pixel level. 
Results obtained with SAR images acquired by Sentinel-1 over Corsica, confirm the suitability of the proposed methodology.},\n  keywords = {filtering theory;gamma distribution;geophysical signal processing;image segmentation;marine pollution;oceanographic techniques;radar clutter;radar imaging;remote sensing by radar;synthetic aperture radar;signal processing scheme;nonhomogeneous area;multiscale estimations;textural parameter;scale sea clutter parameters;oil slicks candidates;SAR images;Generalized gamma distribution SAR sea clutter;oil spill candidates detection;offshore monitoring;Synthetic Aperture Radar sensors;generalized gamma distribution clutter;SAR sea backscattering;satellite Earth Observation tools;Oils;Clutter;Radar polarimetry;Estimation;Feature extraction;Synthetic aperture radar;Shape;SAR;Oil Spill;Generalized Gamma Distribution;Radar Detection},\n  doi = {10.23919/EUSIPCO.2019.8903047},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533581.pdf},\n}\n\n
\n
\n\n\n
\n This paper tackles offshore oil spill monitoring using satellite Earth Observation tools based on Synthetic Aperture Radar (SAR) sensors. The proposed processing scheme is based on modelling SAR sea backscattering assuming Generalized Gamma Distributed clutter. The signal processing scheme includes a first stage that defines the non-homogeneous area due to the presence of dark spots as a function of multi-scale estimations of a textural parameter, defined as the inverse of the product of the shape and scale sea clutter parameters. After a statistical study of this parameter, a robust value can be defined for comparison purposes. In the resulting search area, adaptive thresholding is performed to obtain a segmented image with the oil slick candidates contoured at pixel level. Results obtained with SAR images acquired by Sentinel-1 over Corsica confirm the suitability of the proposed methodology.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Imaging Experiment of Multi-Pinhole Based X-Ray Fluorescence Computed Tomography Using Rat Head Phantoms.\n \n \n \n \n\n\n \n Sasaya, T.; Oouchi, T.; Yuasa, T.; Seo, S. -.; Jeon, J. -.; Kim, J. -.; Sunaguchi, N.; Hyodo, K.; and Zeniya, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-4, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ImagingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903048,\n  author = {T. Sasaya and T. Oouchi and T. Yuasa and S. -J. Seo and J. -G. Jeon and J. -K. Kim and N. Sunaguchi and K. Hyodo and T. Zeniya},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Imaging Experiment of Multi-Pinhole Based X-Ray Fluorescence Computed Tomography Using Rat Head Phantoms},\n  year = {2019},\n  pages = {1-4},\n  abstract = {Multi-pinhole based x-ray fluorescence computed tomography (mp-XFCT) delineates the spatial distribution of the non-radioactive agent in a living body by using fluorescent x-ray photons, which are emitted from the agent on de-excitation soon after extrinsic excitation and acquired with a multi-pinhole collimator and a 2-D detector. One of the potential applications is to image brain of small animals for development of treatment techniques and new drugs of brain disease in preclinical study. However, the measured photons are limited because a brain is covered with a skull which is a highly absorbing object. 
In this research, in order to investigate the applicability to brain imaging, we performed imaging experiments with phantoms to simulate a rat head using an actual mp-XFCT system, constructed at beamline AR-NE7A in KEK.},\n  keywords = {biomedical equipment;brain;collimators;computerised tomography;diseases;image reconstruction;medical image processing;phantoms;X-ray fluorescence analysis;brain disease;brain imaging;actual mp-XFCT system;X-ray fluorescence computed tomography;rat head phantoms;nonradioactive agent;extrinsic excitation;fluorescent X-ray photons;multipinhole collimator;image brain;2D detector;Photonics;Fluorescence;Collimators;Image reconstruction;Phantoms;Rats;X-ray fluorescence computed tomography;pinhole;brain imaging;non-radioactive agent;image reconstruction},\n  doi = {10.23919/EUSIPCO.2019.8903048},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570526389.pdf},\n}\n\n
\n
\n\n\n
\n Multi-pinhole based x-ray fluorescence computed tomography (mp-XFCT) delineates the spatial distribution of a non-radioactive agent in a living body using fluorescent x-ray photons, which are emitted from the agent upon de-excitation soon after extrinsic excitation and acquired with a multi-pinhole collimator and a 2-D detector. One potential application is imaging the brain of small animals for the development of treatment techniques and new drugs for brain disease in preclinical studies. However, the measured photons are limited because the brain is covered by the skull, which is a highly absorbing object. In this research, in order to investigate the applicability to brain imaging, we performed imaging experiments with phantoms simulating a rat head, using an actual mp-XFCT system constructed at beamline AR-NE7A in KEK.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Robust Roll Angle Estimation Algorithm Based on Gradient Descent.\n \n \n \n \n\n\n \n Fan, R.; Wang, L.; Liu, M.; and Pitas, I.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903049,\n  author = {R. Fan and L. Wang and M. Liu and I. Pitas},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Robust Roll Angle Estimation Algorithm Based on Gradient Descent},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper introduces a robust roll angle estimation algorithm, which is developed from our previously published work, where the roll angle was estimated from a dense subpixel disparity map by minimizing a global energy using golden section search algorithm. In this paper, to achieve greater computational efficiency, we utilize gradient descent to optimize the aforementioned global energy. The experimental results illustrate that the presented roll angle estimation method takes fewer iterations to achieve the same precision as the previous method.},\n  keywords = {gradient methods;optimisation;search problems;stereo image processing;robust roll angle estimation algorithm;gradient descent;dense subpixel disparity map;golden section search algorithm;global energy;Signal processing algorithms;Roads;Estimation;Minimization;Europe;Real-time systems;Signal processing},\n  doi = {10.23919/EUSIPCO.2019.8903049},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533276.pdf},\n}\n\n
\n
\n\n\n
\n This paper introduces a robust roll angle estimation algorithm, developed from our previously published work, where the roll angle was estimated from a dense subpixel disparity map by minimizing a global energy using a golden section search algorithm. In this paper, to achieve greater computational efficiency, we utilize gradient descent to optimize the aforementioned global energy. The experimental results illustrate that the presented roll angle estimation method takes fewer iterations to achieve the same precision as the previous method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Joint Low-Rank Factorizations with Shared and Unshared Components: Identifiability and Algorithms.\n \n \n \n \n\n\n \n Sorensen, M.; and Sidiropoulos, N. D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903050,\n  author = {M. Sorensen and N. D. Sidiropoulos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Joint Low-Rank Factorizations with Shared and Unshared Components: Identifiability and Algorithms},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We study the joint low-rank factorization of the matrices X=[A B]G and Y=[A C]H, in which the columns of the shared factor matrix A correspond to vectorized rank-one matrices, the unshared factors B and C have full column rank, and the matrices G and H have full row rank. The objective is to find the shared factor A, given only X and Y. We first explain that if the matrix [A B C] has full column rank, then a basis for the column space of the shared factor matrix A can be obtained from the null space of the matrix [X Y]. This in turn implies that the problem of finding the shared factor matrix A boils down to a basic Canonical Polyadic Decomposition (CPD) problem that in many cases can directly be solved by means of an eigenvalue decomposition. Next, we explain that by taking the rank-one constraint of the columns of the shared factor matrix A into account when computing the null space of the matrix [X Y], more relaxed identifiability conditions can be obtained that do not require that [A B C] has full column rank. The benefit of the unconstrained null space approach is that it leads to simple algorithms while the benefit of the rank-one constrained null space approach is that it leads to relaxed identifiability conditions. 
Finally, a joint unbalanced orthogonal Procrustes and CPD fitting approach for computing the shared factor matrix A from noisy observation matrices X and Y will briefly be discussed.},\n  keywords = {eigenvalues and eigenfunctions;matrix decomposition;joint low-rank factorizations;shared factor matrix;rank-one matrices;rank-one constrained null space approach;canonical polyadic decomposition;Tensors;Matrix decomposition;Null space;Eigenvalues and eigenfunctions;Signal processing algorithms;Indexes;Signal processing;Coupled decompositions;canonical polyadic decomposition (CPD);joint low-rank tensor factorizations;joint unbalanced orthogonal Procrustes and CPD fitting;joint dimensionality reduction and CPD fitting.},\n  doi = {10.23919/EUSIPCO.2019.8903050},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533371.pdf},\n}\n\n
\n
\n\n\n
\n We study the joint low-rank factorization of the matrices X=[A B]G and Y=[A C]H, in which the columns of the shared factor matrix A correspond to vectorized rank-one matrices, the unshared factors B and C have full column rank, and the matrices G and H have full row rank. The objective is to find the shared factor A, given only X and Y. We first explain that if the matrix [A B C] has full column rank, then a basis for the column space of the shared factor matrix A can be obtained from the null space of the matrix [X Y]. This in turn implies that the problem of finding the shared factor matrix A boils down to a basic Canonical Polyadic Decomposition (CPD) problem that in many cases can be solved directly by means of an eigenvalue decomposition. Next, we explain that by taking the rank-one constraint on the columns of the shared factor matrix A into account when computing the null space of the matrix [X Y], more relaxed identifiability conditions can be obtained that do not require [A B C] to have full column rank. The benefit of the unconstrained null space approach is that it leads to simple algorithms, while the benefit of the rank-one constrained null space approach is that it leads to relaxed identifiability conditions. Finally, a joint unbalanced orthogonal Procrustes and CPD fitting approach for computing the shared factor matrix A from noisy observation matrices X and Y is briefly discussed.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A New Algorithm for Dictionary Learning Based on Convex Approximation.\n \n \n \n \n\n\n \n Parsa, J.; Sadeghi, M.; Babaie-Zadeh, M.; and Jutten, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903051,\n  author = {J. Parsa and M. Sadeghi and M. Babaie-Zadeh and C. Jutten},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A New Algorithm for Dictionary Learning Based on Convex Approximation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The purpose of dictionary learning problem is to learn a dictionary D from a training data matrix Y such that Y ≈ DX and the coefficient matrix X is sparse. Many algorithms have been introduced to this aim, which minimize the representation error subject to a sparseness constraint on X. However, the dictionary learning problem is non-convex with respect to the pair (D,X). In a previous work [Sadeghi et at., 2013], a convex approximation to the non-convex term DX has been introduced which makes the whole DL problem convex. This approach can be almost applied to any existing DL algorithm and obtain better algorithms. In the current paper, it is shown that a simple modification on that approach significantly improves its performance, in terms of both accuracy and speed. Simulation results on synthetic dictionary recovery are provided to confirm this claim.},\n  keywords = {approximation theory;signal processing;sparse matrices;training data matrix;coefficient matrix;sparseness constraint;dictionary learning problem;convex approximation;nonconvex term DX;DL algorithm;synthetic dictionary recovery;Signal processing algorithms;Dictionaries;Machine learning;Root mean square;Convergence;Europe;Signal processing;Compressed sensing;sparse coding;convex approximation;convergence rate;dictionary learning},\n  doi = {10.23919/EUSIPCO.2019.8903051},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531928.pdf},\n}\n\n
\n
\n\n\n
\n The purpose of the dictionary learning problem is to learn a dictionary D from a training data matrix Y such that Y ≈ DX and the coefficient matrix X is sparse. Many algorithms have been introduced to this end, which minimize the representation error subject to a sparseness constraint on X. However, the dictionary learning problem is non-convex with respect to the pair (D,X). In a previous work [Sadeghi et al., 2013], a convex approximation to the non-convex term DX was introduced, which makes the whole DL problem convex. This approach can be applied to almost any existing DL algorithm to obtain better algorithms. In the current paper, it is shown that a simple modification of that approach significantly improves its performance, in terms of both accuracy and speed. Simulation results on synthetic dictionary recovery are provided to confirm this claim.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Deep Complex Neural Network Learning for High-Voltage Insulation Fault Classification from Complex Bispectrum Representation.\n \n \n \n \n\n\n \n Mitiche, I.; Jenkins, M. D.; Boreham, P.; Nesbitt, A.; and Morison, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903052,\n  author = {I. Mitiche and M. D. Jenkins and P. Boreham and A. Nesbitt and G. Morison},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep Complex Neural Network Learning for High-Voltage Insulation Fault Classification from Complex Bispectrum Representation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Bispectrum representations previously achieved a successful classification of insulation fault signals in High-Voltage (HV) power plant. The magnitude information of the Bispectrum was implemented as a feature for a Deep Neural Network. This preliminary research brought interest in evaluating the performance of Bispectrum as complex input features that are implemented into a Deep Complex Valued Convolutional Neural Network (CV-CNN). This paper presents the application of this novel method to condition monitoring of High Voltage (HV) power plant equipment. Discharge signals related to HV insulation faults are measured in a real-world power plant using the Electromagnetic Interference (EMI) method and processed using third order Higher-Order Statistics (HOS) to obtain a Bispectrum representation. By mapping the time-domain signal to Bispectrum representations the problem can be approached as a complex-valued classification task. This allows for the novel combination of complex Bispectrum and CV-CNN applied to the classification of HV discharge signals. 
The network is trained on signals from 9 classes and achieves high classification accuracy in each category, improving upon the performance of a Real Valued CNN (RV-CNN).},\n  keywords = {condition monitoring;convolutional neural nets;electromagnetic interference;fault diagnosis;feature extraction;filtering theory;insulation;learning (artificial intelligence);power engineering computing;signal classification;signal representation;Deep Complex Neural Network learning;High-Voltage insulation fault classification;Complex Bispectrum representation;successful classification;insulation fault signals;High-Voltage power plant;Deep Neural Network;complex input features;Deep Complex Valued Convolutional Neural Network;High Voltage power plant equipment;HV insulation faults;real-world power plant;Electromagnetic Interference method;order Higher-Order Statistics;time-domain signal;complex-valued classification task;HV discharge signals;high classification accuracy;Electromagnetic interference;Partial discharges;Two dimensional displays;Neural networks;Insulation;Convolution;Mathematical model},\n  doi = {10.23919/EUSIPCO.2019.8903052},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534094.pdf},\n}\n\n
\n
\n\n\n
\n Bispectrum representations have previously achieved successful classification of insulation fault signals in High-Voltage (HV) power plants. The magnitude information of the Bispectrum was used as a feature for a Deep Neural Network. This preliminary research motivated evaluating the performance of the Bispectrum as complex input features fed into a Deep Complex Valued Convolutional Neural Network (CV-CNN). This paper presents the application of this novel method to condition monitoring of High Voltage (HV) power plant equipment. Discharge signals related to HV insulation faults are measured in a real-world power plant using the Electromagnetic Interference (EMI) method and processed using third-order Higher-Order Statistics (HOS) to obtain a Bispectrum representation. By mapping the time-domain signal to Bispectrum representations, the problem can be approached as a complex-valued classification task. This allows for the novel combination of the complex Bispectrum and a CV-CNN applied to the classification of HV discharge signals. The network is trained on signals from 9 classes and achieves high classification accuracy in each category, improving upon the performance of a Real Valued CNN (RV-CNN).\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A novel resynchronization procedure for hand-lips fusion applied to continuous French Cued Speech recognition.\n \n \n \n \n\n\n \n Liu, L.; Feng, G.; Beautemps, D.; and Zhang, X. -.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903053,\n  author = {L. Liu and G. Feng and D. Beautemps and X. -P. Zhang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A novel resynchronization procedure for hand-lips fusion applied to continuous French Cued Speech recognition},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Cued Speech (CS) is an augmented lip reading with the help of hand coding. Due to lips and hand movements are asynchronous and a direct fusion of these asynchronous features may reduce the efficiency of the recognition, the fusion of them in automatic CS recognition is a challenging problem. In our previous work, we built a hand preceding model for hand positions (vowels) by investigating the temporal organization of hand movements in French CS. In this work, we investigate a suitable value of the hand preceding time for consonants by analyzing the temporal movements of hand shapes in French CS. Then, based on these two results, we propose an efficient resynchronization procedure for the fusion of multi-stream features in CS. This procedure is applied to the continuous CS phoneme recognition based on the multi-stream CNN-HMMs architecture. 
The result shows that using this procedure brings an improvement of about 4.6% in the phoneme recognition correctness, compared with the state-of-the-art, which does not take into account the asynchrony of multi-modalities.},\n  keywords = {convolutional neural nets;gesture recognition;hidden Markov models;speech recognition;synchronisation;resynchronization procedure;hand-lips fusion;augmented lip reading;hand coding;hand movements;asynchronous features;automatic CS recognition;hand preceding model;temporal organization;French CS;continuous CS phoneme recognition;continuous French cued speech recognition;multistream CNN-HMMs architecture;Lips;Shape;Feature extraction;Phonetics;Europe;Signal processing;Task analysis;Cued Speech;multi-modal fusion;hand preceding time;resynchronization procedure;CNN-HMMs},\n  doi = {10.23919/EUSIPCO.2019.8903053},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533997.pdf},\n}\n\n
\n
\n\n\n
\n Cued Speech (CS) is an augmented form of lip reading aided by hand coding. Because lip and hand movements are asynchronous, and a direct fusion of these asynchronous features may reduce recognition efficiency, their fusion in automatic CS recognition is a challenging problem. In our previous work, we built a hand preceding model for hand positions (vowels) by investigating the temporal organization of hand movements in French CS. In this work, we investigate a suitable value of the hand preceding time for consonants by analyzing the temporal movements of hand shapes in French CS. Then, based on these two results, we propose an efficient resynchronization procedure for the fusion of multi-stream features in CS. This procedure is applied to continuous CS phoneme recognition based on a multi-stream CNN-HMM architecture. The results show that this procedure brings an improvement of about 4.6% in phoneme recognition correctness, compared with the state of the art, which does not take the asynchrony of multiple modalities into account.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Generalized Multichannel Variational Autoencoder for Underdetermined Source Separation.\n \n \n \n \n\n\n \n Seki, S.; Kameoka, H.; Li, L.; Toda, T.; and Takeda, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"GeneralizedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903054,\n  author = {S. Seki and H. Kameoka and L. Li and T. Toda and K. Takeda},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Generalized Multichannel Variational Autoencoder for Underdetermined Source Separation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper deals with a multichannel audio source separation problem under underdetermined conditions. Multi-channel Non-negative Matrix Factorization (MNMF) is one of the powerful approaches, which adopts the NMF concept for source power spectrogram modeling. It works reasonably well for particular types of sound sources, however, one limitation is that it can fail to work for sources with spectrograms that do not comply with the NMF model. To address this limitation, a novel technique called the Multichannel Variational Autoencoder (MVAE) method was recently proposed, where a Conditional VAE (CVAE) is used instead of the NMF model for source power spectrogram modeling. This approach has shown to perform impressively in determined source separation tasks thanks to the representation power of DNNs. This paper generalizes MVAE originally formulated under determined mixing conditions so that it can also deal with underdetermined cases. The proposed method was evaluated on an underdetermined source separation task of separating out three sources from two microphone inputs. 
Experimental results revealed that the generalized MVAE method achieved better performance than the conventional MNMF method.},\n  keywords = {audio signal processing;blind source separation;matrix decomposition;microphones;neural nets;source separation;NMF model;Multichannel Variational Autoencoder method;source power spectrogram modeling;determined source separation tasks thanks;determined mixing conditions;underdetermined cases;underdetermined source separation task;generalized MVAE method;generalized Multichannel Variational Autoencoder;multichannel audio source separation problem;underdetermined conditions;Multichannel Nonnegative Matrix Factorization;powerful approaches;NMF concept;sound sources;Spectrogram;Source separation;Decoding;Task analysis;Microphones;Time-frequency analysis;Mathematical model;Underdetermined source separation;Variational audoencoder;Non-negative matrix factorization},\n  doi = {10.23919/EUSIPCO.2019.8903054},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533422.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the multichannel audio source separation problem under underdetermined conditions. Multichannel Non-negative Matrix Factorization (MNMF) is one powerful approach, which adopts the NMF concept for source power spectrogram modeling. It works reasonably well for particular types of sound sources; however, one limitation is that it can fail for sources with spectrograms that do not comply with the NMF model. To address this limitation, a novel technique called the Multichannel Variational Autoencoder (MVAE) method was recently proposed, in which a Conditional VAE (CVAE) is used instead of the NMF model for source power spectrogram modeling. This approach has been shown to perform impressively in determined source separation tasks thanks to the representation power of DNNs. This paper generalizes the MVAE, originally formulated under determined mixing conditions, so that it can also deal with underdetermined cases. The proposed method was evaluated on an underdetermined source separation task of separating three sources from two microphone inputs. Experimental results revealed that the generalized MVAE method achieved better performance than the conventional MNMF method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectral Visibility Graphs: Application to Similarity of Harmonic Signals.\n \n \n \n \n\n\n \n Yela, D. F.; Stowell, D.; and Sandler, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SpectralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903055,\n  author = {D. F. Yela and D. Stowell and M. Sandler},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Spectral Visibility Graphs: Application to Similarity of Harmonic Signals},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Graph theory is emerging as a new source of tools for time series analysis. One promising method is to transform a signal into its visibility graph, a representation which captures many interesting aspects of the signal. Here we introduce the visibility graph for audio spectra and propose a novel representation for audio analysis: the spectral visibility graph degree. Such representation inherently captures the harmonic content of the signal whilst being resilient to broadband noise. We present experiments demonstrating its utility to measure robust similarity between harmonic signals in real and synthesised audio data. The source code is available online.},\n  keywords = {acoustic signal processing;audio signal processing;graph theory;musical acoustics;musical instruments;time series;spectral visibility graphs;harmonic signals;graph theory;time series analysis;audio spectra;audio analysis;Harmonic analysis;Time series analysis;Broadband communication;Spectrogram;Task analysis;Tools;Measurement},\n  doi = {10.23919/EUSIPCO.2019.8903055},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533774.pdf},\n}\n\n
\n
\n\n\n
\n Graph theory is emerging as a new source of tools for time series analysis. One promising method is to transform a signal into its visibility graph, a representation which captures many interesting aspects of the signal. Here we introduce the visibility graph for audio spectra and propose a novel representation for audio analysis: the spectral visibility graph degree. This representation inherently captures the harmonic content of the signal whilst being resilient to broadband noise. We present experiments demonstrating its utility for measuring robust similarity between harmonic signals in real and synthesised audio data. The source code is available online.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reversible Privacy Preservation using Multi-level Encryption and Compressive Sensing.\n \n \n \n \n\n\n \n Yamaç, M.; Ahishali, M.; Passalis, N.; Raitoharju, J.; Sankur, B.; and Gabbouj, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ReversiblePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903056,\n  author = {M. Yamaç and M. Ahishali and N. Passalis and J. Raitoharju and B. Sankur and M. Gabbouj},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Reversible Privacy Preservation using Multi-level Encryption and Compressive Sensing},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Security monitoring via ubiquitous cameras and their more extended in intelligent buildings stand to gain from advances in signal processing and machine learning. While these innovative and ground-breaking applications can be considered as a boon, at the same time they raise significant privacy concerns. In fact, recent GDPR (General Data Protection Regulation) legislation has highlighted and become an incentive for privacy-preserving solutions. Typical privacy-preserving video monitoring schemes address these concerns by either anonymizing the sensitive data. However, these approaches suffer from some limitations, since they are usually non-reversible, do not provide multiple levels of decryption and computationally costly. In this paper, we provide a novel privacy-preserving method, which is reversible, supports de-identification at multiple privacy levels, and can efficiently perform data acquisition, encryption and data hiding by combining multi-level encryption with compressive sensing. 
The effectiveness of the proposed approach in protecting the identity of the users has been validated using the goodness of reconstruction quality and strong anonymization of the faces.},\n  keywords = {compressed sensing;cryptography;data acquisition;data encapsulation;data privacy;data protection;legislation;data hiding;multilevel encryption;compressive sensing;reversible privacy preservation;security monitoring;ubiquitous cameras;signal processing;machine learning;General Data Protection Regulation;privacy-preserving solutions;privacy-preserving video monitoring schemes;privacy-preserving method;Encryption;Privacy;Compressed sensing;Monitoring;Image reconstruction;Sparse matrices;Reversible Privacy Preservation;Multi-level Encryption;Compressive Sensing;Video Monitoring},\n  doi = {10.23919/EUSIPCO.2019.8903056},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533702.pdf},\n}\n\n
\n
\n\n\n
\n Security monitoring via ubiquitous cameras, and its growing use in intelligent buildings, stands to gain from advances in signal processing and machine learning. While these innovative and ground-breaking applications can be considered a boon, they also raise significant privacy concerns. In fact, recent GDPR (General Data Protection Regulation) legislation has highlighted, and become an incentive for, privacy-preserving solutions. Typical privacy-preserving video monitoring schemes address these concerns by anonymizing the sensitive data. However, these approaches suffer from some limitations, since they are usually non-reversible, do not provide multiple levels of decryption, and are computationally costly. In this paper, we provide a novel privacy-preserving method which is reversible, supports de-identification at multiple privacy levels, and can efficiently perform data acquisition, encryption and data hiding by combining multi-level encryption with compressive sensing. The effectiveness of the proposed approach in protecting the identity of users has been validated using the goodness of reconstruction quality and strong anonymization of the faces.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sequential Peak Detection for Flow Cytometry.\n \n \n \n \n\n\n \n Gül, G.; Alebrand, S.; Baßler, M.; and Wittek, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SequentialPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903057,\n  author = {G. Gül and S. Alebrand and M. Baßler and J. Wittek},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Sequential Peak Detection for Flow Cytometry},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Circulating tumor cells in blood are identified by means of sequential peak detection taking into account the memory and real time applicability constraints. Three different spatial domain algorithms: derivative approach, energy detector and baseline method are compared with three different peak detection algorithms based on machine learning: linear and nonlinear support vector machines and artificial neural networks. Performance of the peak detection algorithms are tested on both synthetic and real data. Experimental results indicate superiority of machine learning algorithms over the other three algorithms which are widely used in practice. Due to Gaussianity assumption in the signal model, a linear support vector machine is found to be as good as other machine learning schemes.},\n  keywords = {blood;learning (artificial intelligence);medical computing;neural nets;support vector machines;tumours;flow cytometry;tumor cells;sequential peak detection;energy detector;nonlinear support vector machines;machine learning schemes;spatial domain algorithms;peak detection algorithms;artificial neural networks;Real-time systems;Support vector machines;Detection algorithms;Machine learning algorithms;Detectors;Training;Testing;Peak detection;flow cytometry;machine learning;classification;filtering;field programmable gate array},\n  doi = {10.23919/EUSIPCO.2019.8903057},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533138.pdf},\n}\n\n
\n
\n\n\n
\n Circulating tumor cells in blood are identified by means of sequential peak detection, taking into account memory and real-time applicability constraints. Three spatial-domain algorithms (a derivative approach, an energy detector, and a baseline method) are compared with three peak detection algorithms based on machine learning: linear and nonlinear support vector machines and artificial neural networks. The performance of the peak detection algorithms is tested on both synthetic and real data. Experimental results indicate the superiority of the machine learning algorithms over the other three algorithms, which are widely used in practice. Due to the Gaussianity assumption in the signal model, a linear support vector machine is found to be as good as the other machine learning schemes.\n
\n\n\n
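The derivative approach named in the abstract reduces to a sign-change test on the first difference of the trace. A minimal sketch, where the synthetic two-pulse trace, pulse shapes, and amplitude threshold are illustrative assumptions rather than the paper's data:

```python
import numpy as np

def detect_peaks(signal, threshold):
    """Derivative approach: a peak is a sample where the first difference
    changes sign from positive to non-positive and the amplitude clears
    a noise threshold."""
    d = np.diff(signal)
    return [i + 1 for i in range(len(d) - 1)
            if d[i] > 0 and d[i + 1] <= 0 and signal[i + 1] > threshold]

# Synthetic fluorescence trace: two Gaussian pulses on a clean baseline.
t = np.arange(200)
trace = np.exp(-(t - 50.0) ** 2 / 20.0) + 0.5 * np.exp(-(t - 140.0) ** 2 / 30.0)
print(detect_peaks(trace, threshold=0.2))  # -> [50, 140]
```

A sequential implementation would apply the same test sample-by-sample on the stream; the machine-learning detectors compared in the paper replace the hand-set threshold with a trained decision rule.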
\n\n\n
\n \n\n \n \n \n \n \n \n Improved Regularized Reconstruction for Simultaneous Multi-Slice Cardiac MRI T1 Mapping.\n \n \n \n \n\n\n \n Demirel, Ö. B.; Weingärtner, S.; Moeller, S.; and Akçakaya, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903058,\n  author = {Ö. B. Demirel and S. Weingärtner and S. Moeller and M. Akçakaya},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Improved Regularized Reconstruction for Simultaneous Multi-Slice Cardiac MRI T1 Mapping},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Myocardial T1 mapping is a quantitative MRI technique that has found great clinical utility in the detection of various heart disease. These acquisitions typically require three breath-holds, leading to long scan durations and patient discomfort. Simultaneous multi-slice (SMS) imaging has been shown to reduce the scan time of myocardial T1 mapping to a single breath-hold without sacrificing coverage, albeit at reduced precision. In this work, we propose a new reconstruction strategy for SMS imaging that combines the advantages of two different k-space interpolation strategies, while allowing for regularization, in order to improve the precision of accelerated myocardial T1 mapping.},\n  keywords = {biomedical MRI;cardiology;diseases;image reconstruction;interpolation;medical image processing;quantitative MRI technique;great clinical utility;heart disease;breath-holds;long scan durations;patient discomfort;simultaneous multislice imaging;single breath-hold;reduced precision;k-space interpolation strategies;simultaneous multislice cardiac MRI T1 Mapping;accelerated myocardial T1 mapping;Image reconstruction;Kernel;Myocardium;Magnetic resonance imaging;Acceleration;Linear programming;magnetic resonance imaging;parallel imaging;accelerated MRI},\n  doi = {10.23919/EUSIPCO.2019.8903058},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533888.pdf},\n}\n\n
\n
\n\n\n
\n Myocardial T1 mapping is a quantitative MRI technique that has found great clinical utility in the detection of various heart diseases. These acquisitions typically require three breath-holds, leading to long scan durations and patient discomfort. Simultaneous multi-slice (SMS) imaging has been shown to reduce the scan time of myocardial T1 mapping to a single breath-hold without sacrificing coverage, albeit at reduced precision. In this work, we propose a new reconstruction strategy for SMS imaging that combines the advantages of two different k-space interpolation strategies, while allowing for regularization, in order to improve the precision of accelerated myocardial T1 mapping.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Semidefinite Programming for MIMO Radar Target Localization using Bistatic Range Measurements.\n \n \n \n \n\n\n \n Wang, H.; Zhang, B.; Zheng, L.; and Wu, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SemidefinitePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903059,\n  author = {H. Wang and B. Zhang and L. Zheng and J. Wu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Semidefinite Programming for MIMO Radar Target Localization using Bistatic Range Measurements},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we investigate the target localization problem based on bistatic range measurements in a multiple-input multiple-output (MIMO) radar system with widely separated antennas. Under the assumption of uncorrelated Gaussian distributed measurement noise, the maximum likelihood estimator (MLE) is derived for this problem, which is highly nonconvex and difficult to solve. Weighted least squares (WLS) and semidefinite programming (SDP) are two research directions for solving this problem. However, existing studies cannot provide a high-quality solution over a large range of measurement noise. In this work, we propose to add a penalty term to improve the performance of the original SDP method. We further address the issue of robust localization in the case of non-accurate transmitter/receiver positions. The corresponding Cramér-Rao lower bound (CRLB) is also derived. Simulation results show the superiority of our proposed methods in comparison with other existing algorithms and the CRLB.},\n  keywords = {Gaussian distribution;least squares approximations;maximum likelihood estimation;MIMO radar;optimisation;penalty term;non-accurate transmitter/receiver position;uncorrelated Gaussian distributed measurement noises;maximum likelihood estimator;multiple input multiple output radar system;target localization problem;bistatic range measurements;MIMO radar target localization;Cramér Rao lower bound;robust localization;original SDP method;high-quality solution;Semidefinite programming;weighted least squares;Manganese;Radar antennas;Antenna measurements;MIMO radar;Receiving antennas;Multiple-inputmultiple-output (MIMO) radar;Target localization;Range measurements;Semidefinite programming (SDP);Cramér-Rao Lower Bound (CRLB)},\n  doi = {10.23919/EUSIPCO.2019.8903059},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529293.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we investigate the target localization problem based on bistatic range measurements in a multiple-input multiple-output (MIMO) radar system with widely separated antennas. Under the assumption of uncorrelated Gaussian distributed measurement noise, the maximum likelihood estimator (MLE) is derived for this problem, which is highly nonconvex and difficult to solve. Weighted least squares (WLS) and semidefinite programming (SDP) are two research directions for solving this problem. However, existing studies cannot provide a high-quality solution over a large range of measurement noise. In this work, we propose to add a penalty term to improve the performance of the original SDP method. We further address the issue of robust localization in the case of non-accurate transmitter/receiver positions. The corresponding Cramér-Rao lower bound (CRLB) is also derived. Simulation results show the superiority of our proposed methods in comparison with other existing algorithms and the CRLB.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Segmentation of Piecewise ARX Processes by Exploiting Sparsity in Tight-Dimensional Spaces.\n \n \n \n \n\n\n \n Kuroda, H.; Yamagishi, M.; and Yamada, I.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SegmentationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903060,\n  author = {H. Kuroda and M. Yamagishi and I. Yamada},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Segmentation of Piecewise ARX Processes by Exploiting Sparsity in Tight-Dimensional Spaces},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Segmentation of piecewise Auto-Regressive eXogenous (ARX) processes has been a major challenge in time-series segmentation and change detection. In this paper, for piecewise ARX process segmentation, we exploit hidden sparsity in tight-dimensional representation spaces. More precisely, we strategically design a tight-dimensional linear transformation which reveals sparsity hidden in samples following piecewise ARX processes. Experiments on synthetic and real-world data demonstrate the effectiveness of the proposed method.},\n  keywords = {autoregressive processes;regression analysis;time series;time-series segmentation;piecewise ARX process segmentation;tight-dimensional representation spaces;tight-dimensional linear transformation;piecewise auto-regressive exogenous processes;change detection;Estimation;Europe;Signal processing;Indexes;Brain modeling;Computational efficiency;Convex functions;Time-series segmentation;change detection;piecewise ARX process;sparse representation},\n  doi = {10.23919/EUSIPCO.2019.8903060},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534030.pdf},\n}\n\n
\n
\n\n\n
\n Segmentation of piecewise Auto-Regressive eXogenous (ARX) processes has been a major challenge in time-series segmentation and change detection. In this paper, for piecewise ARX process segmentation, we exploit hidden sparsity in tight-dimensional representation spaces. More precisely, we strategically design a tight-dimensional linear transformation which reveals sparsity hidden in samples following piecewise ARX processes. Experiments on synthetic and real-world data demonstrate the effectiveness of the proposed method.\n
\n\n\n
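For intuition about the task (not the paper's tight-dimensional sparse transformation), a naive baseline fits AR(1) coefficients on windows before and after each sample and flags large jumps; the window length, jump threshold, and simulated process are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_coef(x):
    """Least-squares AR(1) coefficient of one segment."""
    return float(x[:-1] @ x[1:] / (x[:-1] @ x[:-1]))

def change_indices(x, win=50, jump=0.7):
    """Flag indices where AR(1) fits on the windows before and after
    the index differ by more than `jump`."""
    return [i for i in range(win, len(x) - win)
            if abs(ar1_coef(x[i - win:i]) - ar1_coef(x[i:i + win])) > jump]

# Piecewise AR(1): coefficient 0.9 for 200 samples, then -0.5.
x = np.zeros(400)
for n in range(1, 400):
    a = 0.9 if n < 200 else -0.5
    x[n] = a * x[n - 1] + rng.standard_normal()

hits = change_indices(x)
print(hits[0], hits[-1])  # flagged indices cluster around the change at n = 200
```

The sliding-window fit trades off localization against estimation noise; the paper's sparse formulation instead recovers the segmentation jointly over the whole record.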
\n\n\n
\n \n\n \n \n \n \n \n \n Outlier Detection from Non-Smooth Sensor Data.\n \n \n \n \n\n\n \n Huuhtanen, T.; Ambos, H.; and Jung, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OutlierPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903061,\n  author = {T. Huuhtanen and H. Ambos and A. Jung},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Outlier Detection from Non-Smooth Sensor Data},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Outlier detection is usually based on smooth assumption of the data. Most existing approaches for outlier detection from spatial sensor data assume the data to be a smooth function of the location. Spatial discontinuities in the data, such as arising from shadows in photovoltaic (PV) systems, may cause outlier detection methods based on the spatial smoothness assumption to fail. In this paper, we propose novel approaches for outlier detection of non-smooth spatial data. The methods are evaluated by numerical experiments involving PV panel measurements as well as synthetic data.},\n  keywords = {data mining;photovoltaic power systems;power engineering computing;smoothing methods;nonsmooth sensor data;spatial sensor data;smooth function;outlier detection methods;spatial smoothness assumption;nonsmooth spatial data;synthetic data;photovoltaic systems;PV systems;PV panel measurements;Image edge detection;Anomaly detection;Spatial databases;Signal processing algorithms;Power measurement;Maximum likelihood estimation;Prediction algorithms;outlier detection;spatial signals},\n  doi = {10.23919/EUSIPCO.2019.8903061},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529535.pdf},\n}\n\n
\n
\n\n\n
\n Outlier detection is usually based on a smoothness assumption about the data. Most existing approaches for outlier detection from spatial sensor data assume the data to be a smooth function of the location. Spatial discontinuities in the data, such as those arising from shadows in photovoltaic (PV) systems, may cause outlier detection methods based on the spatial smoothness assumption to fail. In this paper, we propose novel approaches for outlier detection in non-smooth spatial data. The methods are evaluated by numerical experiments involving PV panel measurements as well as synthetic data.\n
\n\n\n
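A minimal smoothness-based baseline of the kind the paper argues breaks down at discontinuities: each reading is compared with the median of its spatial neighbourhood, using a robust (MAD) scale. The grid, neighbourhood size, and threshold below are illustrative assumptions:

```python
import numpy as np

def neighbour_median_outliers(grid, k=1, thresh=3.0):
    """Flag cells deviating from their (2k+1)x(2k+1) neighbourhood median
    by more than `thresh` robust scale units."""
    rows, cols = grid.shape
    resid = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = grid[max(r - k, 0):r + k + 1, max(c - k, 0):c + k + 1]
            resid[r, c] = grid[r, c] - np.median(block)
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # MAD scale
    return np.abs(resid) > thresh * max(scale, 1e-12)

field = np.ones((5, 5))   # flat PV power map
field[2, 2] = 10.0        # one faulty panel reading
mask = neighbour_median_outliers(field)
print(np.argwhere(mask))  # -> [[2 2]]
```

On a map with a genuine shadow edge, every cell along the edge deviates from its neighbourhood median, which is exactly the failure mode the paper addresses.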
\n\n\n
\n \n\n \n \n \n \n \n \n Fast Surface Detection in Single-Photon Lidar Waveforms.\n \n \n \n \n\n\n \n Tachella, J.; Altmann, Y.; McLaughlin, S.; and Tourneret, J. . -.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FastPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903062,\n  author = {J. Tachella and Y. Altmann and S. McLaughlin and J. . -Y. Tourneret},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Fast Surface Detection in Single-Photon Lidar Waveforms},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Single-photon light detection and ranging (Lidar) devices can be used to obtain range and reflectivity information from 3D scenes. However, reconstructing the 3D surfaces from the raw waveforms can be very challenging, in particular when the number of spurious background detections is large compared to the number of signal detections. This paper introduces a new and fast detection algorithm, which can be used to assess the presence of objects/surfaces in each waveform, allowing only the histograms where the imaged surfaces are present to be further processed. The method is compared to state-of-the-art 3D reconstruction methods using synthetic and real single-photon data and the results illustrate its benefits for fast and robust target detection using single-photon data.},\n  keywords = {image reconstruction;object detection;optical radar;radar detection;radar imaging;fast surface detection;single-photon data detection;3D surface reconstruction methods;single-photon light detection and ranging devices;single-photon LIDAR waveforms;robust target detection algorithm;signal detections;spurious background detections;reflectivity information;Photonics;Detectors;Laser radar;Histograms;Object detection;Bayes methods;Imaging;Bayesian statistics;inverse problems;Lidar;detection;low-photon imaging and sensing},\n  doi = {10.23919/EUSIPCO.2019.8903062},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533978.pdf},\n}\n\n
\n
\n\n\n
\n Single-photon light detection and ranging (Lidar) devices can be used to obtain range and reflectivity information from 3D scenes. However, reconstructing the 3D surfaces from the raw waveforms can be very challenging, in particular when the number of spurious background detections is large compared to the number of signal detections. This paper introduces a new and fast detection algorithm, which can be used to assess the presence of objects/surfaces in each waveform, allowing only the histograms where the imaged surfaces are present to be further processed. The method is compared to state-of-the-art 3D reconstruction methods using synthetic and real single-photon data and the results illustrate its benefits for fast and robust target detection using single-photon data.\n
\n\n\n
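The per-pixel screening idea can be sketched as a test of the largest histogram bin against a flat Poisson background; the background level, bin count, and the conservative threshold constant are illustrative assumptions, not the detector derived in the paper:

```python
import numpy as np

def has_surface(hist, z=5.0):
    """Screen one waveform: estimate a flat background rate from the
    median bin count, then ask whether any bin exceeds a conservative
    Gaussian approximation to the Poisson tail, lam + z * sqrt(lam)."""
    lam = max(float(np.median(hist)), 1.0)
    return bool(hist.max() > lam + z * np.sqrt(lam))

flat = np.full(300, 2)   # idealized background-only histogram
spiked = flat.copy()
spiked[150] = 40         # photons returned from a surface
print(has_surface(flat), has_surface(spiked))  # -> False True
```

Only waveforms passing such a cheap test would be forwarded to the full (and far more expensive) 3D reconstruction.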
\n\n\n
\n \n\n \n \n \n \n \n \n Spectral Coexistence of 5G Networks and Satellite Communication Systems Enabled by Coordinated Caching and QoS-Aware Resource Allocation.\n \n \n \n \n\n\n \n Ntougias, K.; Papadias, C. B.; Papageorgiou, G. K.; and Hasslinger, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SpectralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903063,\n  author = {K. Ntougias and C. B. Papadias and G. K. Papageorgiou and G. Hasslinger},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Spectral Coexistence of 5G Networks and Satellite Communication Systems Enabled by Coordinated Caching and QoS-Aware Resource Allocation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The combination of underlay spectrum sharing and coordinated multi-point technologies promises substantial spectral efficiency (SE) gains for 5G cellular networks. In this work, we present a family of simple cooperative mobile edge caching strategies that create joint transmission (JT) opportunities and make use of the non-computational-demanding score-gated least recently used (SG-LRU) caching scheme to achieve high cache hit rate, thus increasing the sum-SE and reducing both the backhaul traffic and the content access latency. In addition, we derive a low-complexity coordinated quality-of-service (QoS) aware resource allocation scheme that maximizes the sum-SE of coordinated beamforming (CBF) under given transmission power per base station, inter-system interference power, and per-user QoS constraints as well as simple alternatives for both CBF and JT. We consider a use case where a cellular network coexists with a fixed satellite service earth station in the 3.7-4.2 GHz C-band. Numerical simulations illustrate the performance gains of the proposed coordinated caching and resource allocation strategies and shed light on the impact of various parameters on their efficiency.},\n  keywords = {5G mobile communication;array signal processing;cache storage;cellular radio;mobile satellite communication;quality of service;radio spectrum management;radiofrequency interference;resource allocation;telecommunication traffic;spectral coexistence;satellite communication systems enabled;coordinated caching;QoS-aware resource allocation;underlay spectrum sharing;multipoint technologies;substantial spectral efficiency;5G cellular networks;simple cooperative mobile edge caching strategies;joint transmission opportunities;JT;score-gated least recently used caching scheme;SG-LRU;high cache hit rate;sum-SE;quality-of-service aware resource allocation scheme;coordinated beamforming;CBF;given transmission power;inter-system interference power;per-user QoS constraints;simple alternatives;cellular network coexists;fixed satellite service earth station;resource allocation strategies;frequency 3.7 GHz to 4.2 GHz;Quality of service;Interference;Resource management;Cellular networks;Precoding;5G mobile communication;Europe;Coordinated Multi-Point (CoMP);cooperative content caching redundancy enhancement (C3RE);score-gated least-recently used (SG-LRU);coordinated QoS-aware interference-constrained PA (CQA-ICPA);interference-constrained equal power allocation (ICEPA)},\n  doi = {10.23919/EUSIPCO.2019.8903063},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533947.pdf},\n}\n\n
\n
\n\n\n
\n The combination of underlay spectrum sharing and coordinated multi-point technologies promises substantial spectral efficiency (SE) gains for 5G cellular networks. In this work, we present a family of simple cooperative mobile edge caching strategies that create joint transmission (JT) opportunities and make use of the computationally undemanding score-gated least recently used (SG-LRU) caching scheme to achieve a high cache hit rate, thus increasing the sum-SE and reducing both the backhaul traffic and the content access latency. In addition, we derive a low-complexity coordinated quality-of-service (QoS) aware resource allocation scheme that maximizes the sum-SE of coordinated beamforming (CBF) under given transmission power per base station, inter-system interference power, and per-user QoS constraints, as well as simple alternatives for both CBF and JT. We consider a use case where a cellular network coexists with a fixed satellite service earth station in the 3.7-4.2 GHz C-band. Numerical simulations illustrate the performance gains of the proposed coordinated caching and resource allocation strategies and shed light on the impact of various parameters on their efficiency.\n
\n\n\n
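The score-gating idea behind SG-LRU can be sketched in a few lines: a requested object is admitted only if its request count exceeds that of the LRU entry it would evict, so one-hit wonders never displace popular content. The class and method names below are this sketch's own, not from the paper:

```python
from collections import OrderedDict, defaultdict

class ScoreGatedLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()     # keys kept in LRU -> MRU order
        self.score = defaultdict(int)  # per-object request counter

    def request(self, key):
        """Return True on a cache hit; update scores and gate admission."""
        self.score[key] += 1
        if key in self.cache:
            self.cache.move_to_end(key)            # refresh recency
            return True
        if len(self.cache) < self.capacity:
            self.cache[key] = None
        else:
            lru = next(iter(self.cache))
            if self.score[key] > self.score[lru]:  # the score gate
                self.cache.popitem(last=False)
                self.cache[key] = None
        return False

cache = ScoreGatedLRU(capacity=2)
print([cache.request(k) for k in "aabab"])  # -> [False, True, False, True, True]
print(cache.request("c"))  # -> False: score 1 does not beat the LRU entry
print(cache.request("a"))  # -> True: "a" survived the one-hit wonder "c"
```

Plain LRU would have evicted "a" to admit "c"; the gate keeps the popular object cached, which is how the scheme raises the hit rate at negligible computational cost.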
\n\n\n
\n \n\n \n \n \n \n \n \n Source Enumeration in Non-White Noise and Small Sample Size via Subspace Averaging.\n \n \n \n \n\n\n \n Garg, V.; and Santamaria, I.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SourcePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903064,\n  author = {V. Garg and I. Santamaria},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Source Enumeration in Non-White Noise and Small Sample Size via Subspace Averaging},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper addresses the problem of source enumeration by an array of sensors in the challenging conditions of: i) large uniform arrays with few snapshots, and ii) non-white or spatially correlated noises with arbitrary correlation. To solve this problem, we combine a subspace averaging (SA) technique, recently proposed for the case of independent and identically distributed (i.i.d.) noises, with a majority vote approach. The number of sources is detected for increasing dimensions of the SA technique and then a majority vote is applied to determine the final estimate. As illustrated by some simulation examples, this simple modification makes SA a very robust method of enumerating sources in these challenging scenarios.},\n  keywords = {array signal processing;correlation methods;statistical analysis;white noise;source enumeration;spatially correlated noises;subspace averaging technique;majority vote approach;nonwhite noise;small sample size;independent and identically distributed noises;SA technique;Covariance matrices;Array signal processing;Sensor arrays;Correlation;Estimation;Eigenvalues and eigenfunctions;Array processing;model order estimation;source enumeration;subspace averaging},\n  doi = {10.23919/EUSIPCO.2019.8903064},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570524628.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of source enumeration by an array of sensors in the challenging conditions of: i) large uniform arrays with few snapshots, and ii) non-white or spatially correlated noises with arbitrary correlation. To solve this problem, we combine a subspace averaging (SA) technique, recently proposed for the case of independent and identically distributed (i.i.d.) noises, with a majority vote approach. The number of sources is detected for increasing dimensions of the SA technique and then a majority vote is applied to determine the final estimate. As illustrated by some simulation examples, this simple modification makes SA a very robust method of enumerating sources in these challenging scenarios.\n
\n\n\n
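The majority-vote wrapper is easy to prototype. In this sketch a toy eigenvalue-threshold detector on leading subarrays stands in for the subspace-averaging stage; the array response, noise level, subarray sizes, and threshold are all illustrative assumptions:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def order_estimate(R, noise_floor=1.0):
    """Toy detector: count covariance eigenvalues above a noise floor."""
    return int(np.sum(np.linalg.eigvalsh(R) > noise_floor))

def enumerate_sources(snapshots, dims):
    """Estimate the order at several subarray sizes, then majority-vote."""
    votes = []
    for m in dims:
        sub = snapshots[:m]                       # leading m sensors
        R = sub @ sub.conj().T / snapshots.shape[1]
        votes.append(order_estimate(R))
    return Counter(votes).most_common(1)[0][0]

# Two unit-power sources on an 8-sensor uniform array, 30 snapshots.
m, n_snap = 8, 30
A = np.exp(1j * np.outer(np.arange(m), [0.5, 1.7]))  # steering matrix
S = (rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))) / np.sqrt(2)
N = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap))) / np.sqrt(2)
X = A @ S + N
print(enumerate_sources(X, dims=[4, 5, 6, 7, 8]))  # -> 2
```

Voting across dimensions is what gives the robustness: a single dimension may over- or under-count under correlated noise, but the mode across dimensions is far more stable.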
\n\n\n
\n \n\n \n \n \n \n \n \n Robust ToA-Based Localization in a Mixed LOS/NLOS Environment Using Hybrid Mapping Technique.\n \n \n \n \n\n\n \n Al-Samahi, S. S. A.; Ho, K. C.; and Islam, N. E.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903065,\n  author = {S. S. A. Al-Samahi and K. C. Ho and N. E. Islam},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Robust ToA-Based Localization in a Mixed LOS/NLOS Environment Using Hybrid Mapping Technique},\n  year = {2019},\n  pages = {1-5},\n  abstract = {A two-stage hybrid method based on the machine learning approach is proposed for source localization using time of arrival (ToA) measurements in a mixed line of sight (LOS) and non-line of sight (NLOS) environment. The first stage applies an artificial neural network (NN) to detect the NLOS measurements that are outliers and the second stage passes the identified LOS measurements to an inverse weighted self-organizing network (IWSON) for determining the source location. The NN NLOS detector is able to take care of a variable number of NLOS measurements while the IWSON handles naturally a variable number of inputs and yields a solution without explicitly solving the nonlinear estimation problem. Simulations validate the good performance of the system with a different number of NLOS measurements. It provides a solution in reaching the Cramer-Rao lower bound (CRLB) accuracy under a harsh multipath noisy environment, except over the small error region where it can act as an initialization for the iterative MLE to refine accuracy if necessary.},\n  keywords = {iterative methods;learning (artificial intelligence);maximum likelihood estimation;neural nets;nonlinear estimation;radiocommunication;telecommunication computing;time-of-arrival estimation;artificial neural network;NLOS measurements;identified LOS measurements;inverse weighted self-organizing network;IWSON;NN NLOS detector;harsh multipath noisy environment;robust ToA-based localization;hybrid mapping technique;two-stage hybrid method;machine learning approach;source localization;arrival measurements;mixed line;sight environment;Artificial neural networks;Nonlinear optics;Position measurement;Signal processing algorithms;Training;Feature extraction;Anomaly detection;ToA;localization;neural network;outlier;correct detection;false alarm},\n  doi = {10.23919/EUSIPCO.2019.8903065},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528857.pdf},\n}\n\n
\n
\n\n\n
\n A two-stage hybrid method based on the machine learning approach is proposed for source localization using time of arrival (ToA) measurements in a mixed line-of-sight (LOS) and non-line-of-sight (NLOS) environment. The first stage applies an artificial neural network (NN) to detect the NLOS measurements that are outliers, and the second stage passes the identified LOS measurements to an inverse weighted self-organizing network (IWSON) for determining the source location. The NN NLOS detector is able to handle a variable number of NLOS measurements, while the IWSON naturally handles a variable number of inputs and yields a solution without explicitly solving the nonlinear estimation problem. Simulations validate the good performance of the system with varying numbers of NLOS measurements. The solution reaches the Cramér-Rao lower bound (CRLB) accuracy in a harsh multipath noisy environment, except over the small-error region, where it can act as an initialization for the iterative MLE to refine accuracy if necessary.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Concatenated Identical DNN (CI-DNN) to Reduce Noise-Type Dependence in DNN-Based Speech Enhancement.\n \n \n \n \n\n\n \n Xu, Z.; Strake, M.; and Fingscheidt, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ConcatenatedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903066,\n  author = {Z. Xu and M. Strake and T. Fingscheidt},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Concatenated Identical DNN (CI-DNN) to Reduce Noise-Type Dependence in DNN-Based Speech Enhancement},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Estimating time-frequency domain masks for speech enhancement using deep learning approaches has recently become a popular field of research. In this paper, we propose a mask-based speech enhancement framework by using concatenated identical deep neural networks (CI-DNNs). The idea is that a single DNN is trained under multiple input and output signal-to-noise power ratio (SNR) conditions, using targets that provide a moderate SNR gain with respect to the input and therefore achieve a balance between speech component quality and noise suppression. We concatenate this single DNN several times without any retraining to provide enough noise attenuation. Simulation results show that our proposed CI-DNN outperforms enhancement methods using classical spectral weighting rules w.r.t. total speech quality and speech intelligibility. Moreover, our approach shows similar or even a little bit better performance with much fewer trainable parameters compared with a noisy-target single DNN approach of the same size. A comparison to the conventional clean-target single DNN approach shows that our proposed CI-DNN is better in speech component quality and much better in residual noise component quality. Most importantly, our new CI-DNN generalized best to an unseen noise type, if compared to the other tested deep learning approaches.},\n  keywords = {convolutional neural nets;learning (artificial intelligence);speech enhancement;speech intelligibility;noisy-target single DNN approach;clean-target single DNN approach;CI-DNN;speech component quality;residual noise component quality;deep learning approaches;noise-type dependence;DNN-based speech enhancement;mask-based speech enhancement framework;concatenated identical deep neural networks;output signal-to-noise power ratio;noise suppression;noise attenuation;total speech quality;speech intelligibility;time-frequency domain masks;Speech enhancement;Signal to noise ratio;Training;Noise measurement;Task analysis;Discrete Fourier transforms;Neural networks;Speech enhancement;noise reduction;DNN;noisy speech target},\n  doi = {10.23919/EUSIPCO.2019.8903066},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528366.pdf},\n}\n\n
\n
\n\n\n
\n Estimating time-frequency domain masks for speech enhancement using deep learning approaches has recently become a popular field of research. In this paper, we propose a mask-based speech enhancement framework using concatenated identical deep neural networks (CI-DNNs). The idea is that a single DNN is trained under multiple input and output signal-to-noise power ratio (SNR) conditions, using targets that provide a moderate SNR gain with respect to the input and therefore achieve a balance between speech component quality and noise suppression. We concatenate this single DNN several times without any retraining to provide sufficient noise attenuation. Simulation results show that our proposed CI-DNN outperforms enhancement methods using classical spectral weighting rules w.r.t. total speech quality and speech intelligibility. Moreover, our approach shows similar or even slightly better performance with far fewer trainable parameters compared with a noisy-target single DNN approach of the same size. A comparison to the conventional clean-target single DNN approach shows that our proposed CI-DNN is better in speech component quality and much better in residual noise component quality. Most importantly, our new CI-DNN generalizes best to an unseen noise type compared with the other tested deep learning approaches.\n
\n\n\n
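The concatenation idea is independent of the network itself: any enhancement stage with a moderate per-pass SNR gain can be applied repeatedly. Below, a simple floored spectral-gain function stands in for the trained DNN; the gain rule, floor, and toy magnitude spectrum are assumptions of this sketch:

```python
import numpy as np

def stage(noisy_mag, noise_est, gain_floor=0.3):
    """Stand-in for one identical DNN pass: a floored spectral gain that
    removes only a moderate amount of the estimated noise."""
    gain = np.maximum(1.0 - noise_est / np.maximum(noisy_mag, 1e-12), gain_floor)
    return gain * noisy_mag

def concatenated_enhance(noisy_mag, noise_est, stages=3):
    """Run the identical stage several times, without any retraining,
    to accumulate noise attenuation."""
    x = noisy_mag
    for _ in range(stages):
        x = stage(x, noise_est)
    return x

spec = np.array([10.0, 0.5, 8.0, 0.4])  # two speech bins, two noise bins
out = concatenated_enhance(spec, noise_est=0.5)
print(out)  # speech bins barely touched, noise bins attenuated by ~0.3**3
```

Each pass leaves speech-dominated bins nearly intact while noise-only bins are pushed down by the floor, so the cascade accumulates attenuation where it is wanted, mirroring the moderate per-stage SNR-gain targets used to train the CI-DNN.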
\n\n\n
\n \n\n \n \n \n \n \n \n Decentralized Multi-Agent Deep Reinforcement Learning in Swarms of Drones for Flood Monitoring.\n \n \n \n \n\n\n \n Baldazo, D.; Parras, J.; and Zazo, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DecentralizedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903067,\n  author = {D. Baldazo and J. Parras and S. Zazo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Decentralized Multi-Agent Deep Reinforcement Learning in Swarms of Drones for Flood Monitoring},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Multi-Agent Deep Reinforcement Learning is becoming a promising approach to the problem of coordination of swarms of drones in dynamic systems. In particular, the use of autonomous aircraft for flood monitoring is now regarded as an economically viable option and it can benefit from this kind of automation: swarms of unmanned aerial vehicles could autonomously generate nearly real-time inundation maps that could improve relief work planning. In this work, we study the use of Deep Q-Networks (DQN) as the optimization strategy for the trajectory planning that is required for monitoring floods, we train agents over simulated floods in procedurally generated terrain and demonstrate good performance with two different reward schemes.},\n  keywords = {autonomous aerial vehicles;control engineering computing;emergency management;floods;learning (artificial intelligence);multi-agent systems;optimisation;trajectory control;relief work planning;Deep Q-Networks;flood monitoring;dynamic systems;autonomous aircraft;real-time inundation maps;decentralized multiagent deep reinforcement learning;drones swarms coordination problem;unmanned aerial vehicles swarms;optimization strategy;trajectory planning;Aircraft;Training;Floods;Reinforcement learning;Atmospheric modeling;Europe;Signal processing;navigation;reinforcement learning;swarms;decentralized control;floods},\n  doi = {10.23919/EUSIPCO.2019.8903067},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533953.pdf},\n}\n\n
\n
\n\n\n
\n Multi-Agent Deep Reinforcement Learning is becoming a promising approach to the problem of coordinating swarms of drones in dynamic systems. In particular, the use of autonomous aircraft for flood monitoring is now regarded as an economically viable option, and it can benefit from this kind of automation: swarms of unmanned aerial vehicles could autonomously generate nearly real-time inundation maps that could improve relief work planning. In this work, we study the use of Deep Q-Networks (DQN) as the optimization strategy for the trajectory planning required for monitoring floods, train agents over simulated floods in procedurally generated terrain, and demonstrate good performance with two different reward schemes.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Providing Spatial Control in Personal Sound Zones Using Graph Signal Processing.\n \n \n \n \n\n\n \n Molés-Cases, V.; Piũero, G.; Gonzalez, A.; and de Diego , M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ProvidingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903068,\n  author = {V. Molés-Cases and G. Piũero and A. Gonzalez and M. {de Diego}},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Providing Spatial Control in Personal Sound Zones Using Graph Signal Processing},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The following topics are dealt with: learning (artificial intelligence); feature extraction; convolutional neural nets; array signal processing; neural nets; image classification; optimisation; iterative methods; acoustic signal processing; direction-of-arrival estimation.},\n  keywords = {array signal processing;convolutional neural nets;feature extraction;learning (artificial intelligence);direction-of-arrival estimation;acoustic signal processing;iterative methods;optimisation;image classification;neural nets;array signal processing;convolutional neural nets;feature extraction;learning (artificial intelligence);Acoustics;Potential energy;Signal processing algorithms;Loudspeakers;TV;Cost function;Signal processing;Sound zones;pressure matching;spatial control;graph total variation},\n  doi = {10.23919/EUSIPCO.2019.8903068},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532798.pdf},\n}\n\n
\n
\n\n\n
\n The following topics are dealt with: learning (artificial intelligence); feature extraction; convolutional neural nets; array signal processing; neural nets; image classification; optimisation; iterative methods; acoustic signal processing; direction-of-arrival estimation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Indoor Sound Source Localization based on Sparse Bayesian Learning and Compressed Data.\n \n \n \n \n\n\n \n Bai, Z.; Sun, J.; Jensen, J. R.; and Christensen, M. G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"IndoorPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903069,\n  author = {Z. Bai and J. Sun and J. R. Jensen and M. G. Christensen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Indoor Sound Source Localization based on Sparse Bayesian Learning and Compressed Data},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, the problems of indoor sound source localization using a wireless acoustic sensor network are addressed and a new sparse Bayesian learning based algorithm is proposed. Using time delays for the direct paths from candidate source locations to microphone nodes, the proposed algorithm estimates the most likely source location. To reduce the amount of data that must be exchanged between microphone nodes, a Gaussian measurement matrix is multiplied on to each channel and the proposed method operates directly on the compressed data. This is achieved by exploiting sparsity in both the frequency and space domains. The performance is analysed in numerical simulations, where the performance as a function of the reverberation times in investigated, and the results show that the proposed algorithm is robust to reverberation.},\n  keywords = {acoustic generators;acoustic signal processing;Bayes methods;learning (artificial intelligence);microphone arrays;microphones;reverberation;telecommunication computing;wireless sensor networks;indoor sound source localization;compressed data;wireless acoustic sensor network;sparse Bayesian learning based algorithm;candidate source locations;microphone nodes;source location;time delays;direct paths;Gaussian measurement matrix;space domains;frequency domains;reverberation times;Microphones;Arrays;Sparse matrices;Delay effects;Position measurement;Frequency estimation;Bayes methods;Sound Source Localization;Sparse Bayesian Learning;Array Signal Processing;Reverberation Environment},\n  doi = {10.23919/EUSIPCO.2019.8903069},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = 
{https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529008.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, the problem of indoor sound source localization using a wireless acoustic sensor network is addressed and a new sparse Bayesian learning based algorithm is proposed. Using time delays for the direct paths from candidate source locations to microphone nodes, the proposed algorithm estimates the most likely source location. To reduce the amount of data that must be exchanged between microphone nodes, a Gaussian measurement matrix is multiplied onto each channel and the proposed method operates directly on the compressed data. This is achieved by exploiting sparsity in both the frequency and space domains. The performance is analysed in numerical simulations as a function of the reverberation time, and the results show that the proposed algorithm is robust to reverberation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Feasibility of LTE for Train Control in Subway Environments Based on Experimental Data.\n \n \n \n \n\n\n \n Carro-Lagoa, Á.; Domínguez-Bolaño, T.; Rodríguez-Piñeiro, J.; González-López, M.; and García-Naya, J. A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FeasibilityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903070,\n  author = {Á. Carro-Lagoa and T. Domínguez-Bolaño and J. Rodríguez-Piñeiro and M. González-López and J. A. García-Naya},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Feasibility of LTE for Train Control in Subway Environments Based on Experimental Data},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Modern railway transportation systems need a reliable communication infrastructure providing very high data rates and low latencies for applications such as communications-based train control (CBTC) or video surveillance. In this paper, the suitability of the Long Term Evolution (LTE) standard for subway environments is evaluated using a propagation model based on real channel measurements. An LTE deployment with several subway stations is simulated using the ns-3 discrete-event network simulator to evaluate both the system performance and the fulfillment of the quality of service (QoS) requirements of representative services such as CBTC, closed-circuit television (CCTV), voice over IP (VoIP) and file transfer. 
Several parameters and procedures of the LTE system are adapted to the subway environment: the LTE network is configured using a QoS-aware scheduler, hence critical services are prioritized; the handover procedure is tuned to avoid ping-pong effects; and inter-cell interference coordination techniques are also applied.},\n  keywords = {cellular radio;closed circuit television;Internet telephony;Long Term Evolution;mobility management (mobile radio);quality of service;radiofrequency interference;railway communication;railway safety;telecommunication scheduling;video surveillance;LTE network;train control;subway environment;modern railway transportation systems;reliable communication infrastructure;high data rates;low latencies;CBTC;video surveillance;Long Term Evolution standard;LTE deployment;subway stations;discrete-event network simulator;system performance;service requirements;representative services;LTE system;Long Term Evolution;Quality of service;Public transportation;Antenna measurements;Propagation losses;Interference;Channel models;LTE;CBTC;ns-3},\n  doi = {10.23919/EUSIPCO.2019.8903070},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529379.pdf},\n}\n\n
\n
\n\n\n
\n Modern railway transportation systems need a reliable communication infrastructure providing very high data rates and low latencies for applications such as communications-based train control (CBTC) or video surveillance. In this paper, the suitability of the Long Term Evolution (LTE) standard for subway environments is evaluated using a propagation model based on real channel measurements. An LTE deployment with several subway stations is simulated using the ns-3 discrete-event network simulator to evaluate both the system performance and the fulfillment of the quality of service (QoS) requirements of representative services such as CBTC, closed-circuit television (CCTV), voice over IP (VoIP) and file transfer. Several parameters and procedures of the LTE system are adapted to the subway environment: the LTE network is configured using a QoS-aware scheduler, hence critical services are prioritized; the handover procedure is tuned to avoid ping-pong effects; and inter-cell interference coordination techniques are also applied.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Precise Response Control of Transmit-Receive Two-Dimensional Beampattern in FDA-MIMO Radar.\n \n \n \n \n\n\n \n Lan, L.; Xu, J.; Liao, G.; Zhang, Y.; and Huang, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"PrecisePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903071,\n  author = {L. Lan and J. Xu and G. Liao and Y. Zhang and Y. Huang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Precise Response Control of Transmit-Receive Two-Dimensional Beampattern in FDA-MIMO Radar},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Frequency diverse array (FDA) has sparked intense interest in recent years because of its range-angle-dependent beampattern. By combining with the multiple-input and multiple-output (MIMO) technique, additional degrees of freedom (DOFs) are provided. However, dynamic environments require the design of a precisely controlled beampattern to enhance the robustness of the radar system. In this paper, the precise response control (PRC) algorithm is studied in the FDA-MIMO radar by imposing artificial interferences within rectangular regions in the joint transmit-receive spatial frequency domain. The algorithm is performed via two stages. In the first stage, artificial interferences are concurrently imposed to iteratively adjust the responses. In the second stage, extra artificial interferences are added to satisfy the predefined response requirement for all regions. The jammer-plus-noise covariance matrix is constructed accordingly and the weight vector is updated. Particularly, the jammer-to-noise ratio (JNR) of each artificial interference is figured out. 
Numerical results are provided to corroborate the performance of the transmit-receive two-dimensional beampattern in FDA-MIMO.},\n  keywords = {antenna phased arrays;array signal processing;covariance matrices;interference suppression;iterative methods;MIMO radar;radar antennas;radar interference;radar signal processing;FDA-MIMO radar;frequency diverse array;intense interest;range-angle-dependent beampattern;additional degrees;dynamic environments;precisely controlled beampattern;radar system;precise response control algorithm;artificial interference;joint transmit-receive spatial frequency domain;extra artificial interferences;predefined response requirement;transmit-receive two-dimensional beampattern;Radar;Interference;Jamming;Covariance matrices;Signal processing algorithms;Europe;Signal processing;FDA-MMO radar;artificial interference;two-dimensional beampattern;rectangular regions;jammer-plus-noise covariance matrix;joint transmit-receive spatial frequency domain},\n  doi = {10.23919/EUSIPCO.2019.8903071},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530094.pdf},\n}\n\n
\n
\n\n\n
\n The frequency diverse array (FDA) has sparked intense interest in recent years because of its range-angle-dependent beampattern. Combining it with the multiple-input multiple-output (MIMO) technique provides additional degrees of freedom (DOFs). However, dynamic environments require the design of a precisely controlled beampattern to enhance the robustness of the radar system. In this paper, the precise response control (PRC) algorithm is studied in the FDA-MIMO radar by imposing artificial interferences within rectangular regions in the joint transmit-receive spatial frequency domain. The algorithm is performed in two stages. In the first stage, artificial interferences are concurrently imposed to iteratively adjust the responses. In the second stage, extra artificial interferences are added to satisfy the predefined response requirement for all regions. The jammer-plus-noise covariance matrix is constructed accordingly and the weight vector is updated. In particular, the jammer-to-noise ratio (JNR) of each artificial interference is determined. Numerical results are provided to corroborate the performance of the transmit-receive two-dimensional beampattern in FDA-MIMO.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Composite Discriminator for Generative Adversarial Network based Video Super-Resolution.\n \n \n \n \n\n\n \n Wang, X.; Lucas, A.; Lopez-Tapia, S.; Wu, X.; Molina, R.; and Katsaggelos, A. K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903072,\n  author = {X. Wang and A. Lucas and S. Lopez-Tapia and X. Wu and R. Molina and A. K. Katsaggelos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Composite Discriminator for Generative Adversarial Network based Video Super-Resolution},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Generative Adversarial Networks (GANs) have been used for solving the video super-resolution problem. So far, video super-resolution GAN-based methods use the traditional GAN framework which consists of a single generator and a single discriminator that are trained against each other. In this work we propose a new framework which incorporates two collaborative discriminators whose aim is to jointly improve the quality of the reconstructed video sequence. While one discriminator concentrates on general properties of the images, the second one specializes on obtaining realistically reconstructed features, such as, edges. Experiments results demonstrate that the learned model outperforms current state of the art models and obtains super-resolved frames, with fine details, sharp edges, and fewer artifacts.},\n  keywords = {image reconstruction;image resolution;image sequences;learning (artificial intelligence);video signal processing;composite discriminator;generative adversarial networks;video super-resolution problem;video super-resolution GAN-based methods;reconstructed video sequence;super-resolved frames;collaborative discriminators;GAN framework;Generators;Training;Gallium nitride;Image edge detection;Generative adversarial networks;Video Super-Resolution;Spatially Adaptive;Generative Adversarial Networks;the Composite Discriminator},\n  doi = {10.23919/EUSIPCO.2019.8903072},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534004.pdf},\n}\n\n
\n
\n\n\n
\n Generative Adversarial Networks (GANs) have been used for solving the video super-resolution problem. So far, video super-resolution GAN-based methods use the traditional GAN framework, which consists of a single generator and a single discriminator that are trained against each other. In this work we propose a new framework which incorporates two collaborative discriminators whose aim is to jointly improve the quality of the reconstructed video sequence. While one discriminator concentrates on general properties of the images, the second one specializes in obtaining realistically reconstructed features, such as edges. Experimental results demonstrate that the learned model outperforms current state-of-the-art models and obtains super-resolved frames with fine details, sharp edges, and fewer artifacts.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Electromagnetic imaging of a dielectric micro-structure via convolutional neural networks.\n \n \n \n \n\n\n \n Ran, P.; Qin, Y.; and Lesselier, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ElectromagneticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903073,\n  author = {P. Ran and Y. Qin and D. Lesselier},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Electromagnetic imaging of a dielectric micro-structure via convolutional neural networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Convolutional neural networks (CNN) are applied to the time-harmonic electromagnetic diagnostic of a dielectric micro-structure. The latter consists of a finite number of circular cylinders (rods) with a fraction of wavelength radius that are set parallel to and at sub-wavelength distance from one another. Discrete scattered fields are made available around it in a free-space multisource-multireceiver configuration. The aim is to characterize this micro-structure, like positions of rods or their absence, and in effect to map their dielectric contrasts w.r.t. the embedding space. A computationally efficient field representation based on a method of moments (MoM) is available to model the field. Iterative, sparsity-constrained solutions work well to find missing rods, but may lack generality and need strong priors. As for time-reversal and like noniterative solutions, they may fail to capture the scattering complexity. These limitations can be alleviated by relying on deep learning concepts, here via convolutional neural networks. How to construct the inverse solver is focused onto. Representative numerical tests illustrate the performance of the approach in typical situations. Comparisons with results from a contrast-source inversion (CSI) introduced in parallel are performed. 
Emphasis is on potential super-resolution in harmony with subwavelength features of the micro-structure.},\n  keywords = {convolutional neural nets;electromagnetic wave scattering;inverse problems;iterative methods;learning (artificial intelligence);method of moments;electromagnetic imaging;dielectric microstructure;convolutional neural networks;time-harmonic;wavelength radius;subwavelength distance;discrete scattered fields;free-space multisource-multireceiver configuration;dielectric contrasts;computationally efficient field representation;missing rods;CNN;Permittivity;Inverse problems;Nickel;Dielectrics;Mathematical model;Europe;Signal processing;convolutional neural networks;micro-structure;super-resolution imaging;inverse scattering problem},\n  doi = {10.23919/EUSIPCO.2019.8903073},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531220.pdf},\n}\n\n
\n
\n\n\n
\n Convolutional neural networks (CNN) are applied to the time-harmonic electromagnetic diagnostic of a dielectric micro-structure. The latter consists of a finite number of circular cylinders (rods), each with a radius of a fraction of a wavelength, set parallel to and at sub-wavelength distance from one another. Discrete scattered fields are made available around it in a free-space multisource-multireceiver configuration. The aim is to characterize this micro-structure, such as the positions of rods or their absence, and in effect to map their dielectric contrasts w.r.t. the embedding space. A computationally efficient field representation based on a method of moments (MoM) is available to model the field. Iterative, sparsity-constrained solutions work well to find missing rods, but may lack generality and need strong priors. Time-reversal and similar noniterative solutions may fail to capture the scattering complexity. These limitations can be alleviated by relying on deep learning concepts, here via convolutional neural networks. The focus is on how to construct the inverse solver. Representative numerical tests illustrate the performance of the approach in typical situations. Comparisons are made with results from a contrast-source inversion (CSI) introduced in parallel. Emphasis is on potential super-resolution in harmony with the subwavelength features of the micro-structure.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Laplace Nonnegative Matrix Factorization with Application to Semi-supervised Audio Denoising.\n \n \n \n \n\n\n \n Tanji, H.; Murakami, T.; and Kamata, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"LaplacePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903074,\n  author = {H. Tanji and T. Murakami and H. Kamata},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Laplace Nonnegative Matrix Factorization with Application to Semi-supervised Audio Denoising},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper proposes two statistical models for the nonnegative matrix factorization (NMF) based on heavy-tailed distributions. In the NMF for acoustic signals, previous works justify the additivity of an observed spectrogram using the reproductive property of a probability density function. However, the effectiveness of these properties is not clear. Consequently, to construct a model robust to noise, statistical models based on heavy-tailed distributions are recently growing up. In this paper, as heavy-tailed models for the NMF, we introduce statistical models based on the complex Laplace distributions, and call them Laplace-NMF. Moreover, we derive convergence-guaranteed optimization algorithms to estimate parameters. From our formulation, a statistical interpretation of the Itakura-Saito (IS) divergence-based NMF is newly revealed. 
We confirm the effectiveness of Laplace-NMF in semi-supervised audio denoising.},\n  keywords = {audio signal processing;matrix decomposition;probability;signal denoising;source separation;statistical analysis;Laplace nonnegative matrix factorization;semisupervised audio denoising;statistical models;heavy-tailed distributions;acoustic signals;observed spectrogram;reproductive property;probability density function;heavy-tailed models;complex Laplace distributions;Laplace-NMF;statistical interpretation;divergence-based NMF;Manganese;Spectrogram;Cost function;Gaussian distribution;Noise reduction;Signal processing algorithms;Probability density function;complex Laplace distribution;nonnegative matrix factorization;majorization-minimization algorithm;source separation},\n  doi = {10.23919/EUSIPCO.2019.8903074},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530733.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes two statistical models for nonnegative matrix factorization (NMF) based on heavy-tailed distributions. In NMF for acoustic signals, previous works justify the additivity of an observed spectrogram using the reproductive property of a probability density function. However, the effectiveness of these properties is not clear. Consequently, to construct models robust to noise, statistical models based on heavy-tailed distributions have recently been gaining attention. In this paper, as heavy-tailed models for NMF, we introduce statistical models based on the complex Laplace distributions and call them Laplace-NMF. Moreover, we derive convergence-guaranteed optimization algorithms to estimate the parameters. Our formulation newly reveals a statistical interpretation of the Itakura-Saito (IS) divergence-based NMF. We confirm the effectiveness of Laplace-NMF in semi-supervised audio denoising.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Subspace-Based Method for Direction Estimation of Coherent Signals with Arbitrary Linear Array.\n \n \n \n \n\n\n \n Chen, X.; Xin, J.; Zuo, W.; Li, J.; Zheng, N.; and Sano, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Subspace-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903075,\n  author = {X. Chen and J. Xin and W. Zuo and J. Li and N. Zheng and A. Sano},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Subspace-Based Method for Direction Estimation of Coherent Signals with Arbitrary Linear Array},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose an interpolated computationally efficient subspace-based method without eigendecomposition (ISUMWE) for the direction-of-arrivals (DOA) estimation of narrowband coherent signals in arbitrary linear arrays. ISUMWE estimates DOA based on the outputs of the virtual array by using an interpolation transform technique. Therefore, it overcomes the common restriction of uniform linear array (ULA) geometry when estimating coherent signals and becomes suitable for more general array geometry than ordinary methods. Meanwhile, the coherency of incident signals is decorrelated through a linear operation of a matrix formed from the cross-correlations between some sensor data in a designed virtual array which can be computed from the linear transformation of sensor data in the real array, where the effect of additive noise is eliminated. Consequently, the DOA can be estimated without performing eigendecomposition, and the noise pre-whitening which is required in traditional interpolation procedure can be avoided. As a result, the ISUMWE extends the application of the original subspace-based method without eigendecomposition (SUMWE) into arbitrary linear array with high accuracy and low computational complexity. 
The numerical results demonstrate the validity of the proposed method.},\n  keywords = {array signal processing;computational complexity;direction-of-arrival estimation;interpolation;ISUMWE;direction-of-arrivals estimation;DOA;narrowband coherent signals;uniform linear array geometry;general array geometry;sensor data;virtual array;linear transformation;interpolated computationally efficient subspace-based method without eigendecomposition;interpolation transform technique;computational complexity;Estimation;Direction-of-arrival estimation;Interpolation;Sensor arrays;Correlation;Array signal processing;Direction-of-arrival;coherent signals;computationally efficient;array interpolation},\n  doi = {10.23919/EUSIPCO.2019.8903075},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532433.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose an interpolated computationally efficient subspace-based method without eigendecomposition (ISUMWE) for the direction-of-arrival (DOA) estimation of narrowband coherent signals in arbitrary linear arrays. ISUMWE estimates the DOA based on the outputs of a virtual array obtained by using an interpolation transform technique. Therefore, it overcomes the common restriction to uniform linear array (ULA) geometry when estimating coherent signals and becomes suitable for more general array geometries than ordinary methods. Meanwhile, the coherency of the incident signals is decorrelated through a linear operation on a matrix formed from the cross-correlations between some sensor data in the designed virtual array, which can be computed from the linear transformation of sensor data in the real array, where the effect of additive noise is eliminated. Consequently, the DOA can be estimated without performing eigendecomposition, and the noise pre-whitening required in the traditional interpolation procedure can be avoided. As a result, the ISUMWE extends the application of the original subspace-based method without eigendecomposition (SUMWE) to arbitrary linear arrays with high accuracy and low computational complexity. The numerical results demonstrate the validity of the proposed method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Performance Analysis of Diffusion Filtered-X Algorithms in Multitask ANC Systems.\n \n \n \n \n\n\n \n Chu, Y. J.; Chan, S. C.; Zhao, Y.; and Wu, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"PerformancePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903076,\n  author = {Y. J. Chu and S. C. Chan and Y. Zhao and M. Wu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Performance Analysis of Diffusion Filtered-X Algorithms in Multitask ANC Systems},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The centralized control for multi-channel active noise control (ANC) systems usually cost considerable processing power due to the transfer functions between a large number of loudspeakers and error microphones; while the decentralized control has the increased risk of global instability. Distribution of the controller network could save computational burden while maintaining satisfactory performance. A lot of ANC algorithms employing diffusion control have been proposed recently. The communication within diffusion networks introduces the estimation bias of each controller and makes it difficult to analyze the performance of the entire system. This paper analyzes the performance of the diffusion ANC network based on a family of diffusion filtered-x (Fx) algorithms employing either the single- or multiple-measurement. Difference equations describing the mean and mean squares convergence behaviors of these ANC systems are derived to characterize its optimal solution, estimation bias and variance. 
Simulations have been conducted to compare the performance of diffusion Fx algorithms and the effectiveness of the theoretical analysis.},\n  keywords = {active noise control;decentralised control;difference equations;loudspeakers;transfer functions;global instability;controller network;computational burden;ANC algorithms;diffusion control;estimation bias;diffusion ANC network;single-measurement;multiple-measurement;mean squares convergence;diffusion Fx algorithms;theoretical analysis;performance analysis;diffusion filtered-x algorithms;multitask ANC systems;centralized control;multichannel active noise control;transfer functions;error microphones;decentralized control;Microphones;Loudspeakers;Convergence;Performance analysis;Network topology;Covariance matrices;Filtering algorithms;active noise control (ANC);diffusion control;multitask problem;and performance analysis},\n  doi = {10.23919/EUSIPCO.2019.8903076},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528108.pdf},\n}\n\n
\n
\n\n\n
\n Centralized control for multi-channel active noise control (ANC) systems usually costs considerable processing power due to the transfer functions between a large number of loudspeakers and error microphones, while decentralized control carries an increased risk of global instability. Distributing the controller network can save computational burden while maintaining satisfactory performance, and many ANC algorithms employing diffusion control have recently been proposed. However, the communication within diffusion networks introduces an estimation bias at each controller and makes it difficult to analyze the performance of the entire system. This paper analyzes the performance of the diffusion ANC network based on a family of diffusion filtered-x (Fx) algorithms employing either single or multiple measurements. Difference equations describing the mean and mean-square convergence behaviors of these ANC systems are derived to characterize their optimal solutions, estimation bias and variance. Simulations have been conducted to compare the performance of the diffusion Fx algorithms and to verify the effectiveness of the theoretical analysis.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust Self-Localization of Microphone Arrays Using a Minimum Number of Acoustic Sources.\n \n \n \n \n\n\n \n Schrammen, M.; Hamad, A.; and Jax, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903077,\n  author = {M. Schrammen and A. Hamad and P. Jax},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Robust Self-Localization of Microphone Arrays Using a Minimum Number of Acoustic Sources},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Multi-microphone signal processing is becoming increasingly popular in applications such as distant speech recognition or communication in adverse environments. To deploy source localization or signal enhancement algorithms like beamforming the locations of the microphones must be known. One well-studied approach to retrieve the relative positions of the microphones is based on time-difference-of-arrival (TDoA) measurements. However, current approaches are restricted to scenarios with a large number of sources or specific coherence assumptions. In this paper a non-iterative approach based on orthogonal geometric projection (OGP), which is able to perform a blind self-localization of the array in 2D with only two sources at arbitrary positions, is presented and extended to estimate a 3D array shape with only three sources. 
Furthermore, an efficient method for outlier correction in the pairwise distance (PD) estimates is proposed, that significantly reduces the position error.},\n  keywords = {acoustic signal processing;array signal processing;direction-of-arrival estimation;microphone arrays;microphones;speech recognition;time-of-arrival estimation;robust self-localization;microphone arrays;acoustic sources;multimicrophone signal processing;distant speech recognition;source localization;time-difference-of-arrival measurements;orthogonal geometric projection;blind self-localization;Shape;Estimation;Microphone arrays;Two dimensional displays;Three-dimensional displays;Signal processing;array shape estimation;self-localization;geometry calibration;time-difference-of-arrival;acoustic sensor networks},\n  doi = {10.23919/EUSIPCO.2019.8903077},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534131.pdf},\n}\n\n
\n
\n\n\n
\n Multi-microphone signal processing is becoming increasingly popular in applications such as distant speech recognition or communication in adverse environments. To deploy source localization or signal enhancement algorithms like beamforming, the locations of the microphones must be known. One well-studied approach to retrieving the relative positions of the microphones is based on time-difference-of-arrival (TDoA) measurements. However, current approaches are restricted to scenarios with a large number of sources or specific coherence assumptions. In this paper, a non-iterative approach based on orthogonal geometric projection (OGP) is presented that performs blind self-localization of the array in 2D with only two sources at arbitrary positions; it is then extended to estimate a 3D array shape with only three sources. Furthermore, an efficient method for outlier correction in the pairwise distance (PD) estimates is proposed that significantly reduces the position error.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Dyadic Particle Filter for Price Prediction.\n \n \n \n \n\n\n \n Ntemi, M.; and Kotropoulos, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903078,\n  author = {M. Ntemi and C. Kotropoulos},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Dyadic Particle Filter for Price Prediction},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The most difficult task in financial forecasting is the accurate price prediction based on previous values. Two cases are studied: stock price prediction and flight price prediction. A dyadic particle filter is proposed that is based on sequential importance resampling. This dyadic particle filter captures the dynamic evolution of a pair of latent vectors. In stock price prediction, one latent vector is defined for each stock. This latent vector is paired with a market segment latent vector introduced for each group of companies of the same category. Both latent vectors capture the hidden information of the stock market and reinforce the state estimation procedure. This hidden information influences strongly the performance of the particle filter, yielding more accurate prediction of stock prices than the state-of-the-art techniques. For flight price prediction, the pair of latent vectors corresponds to route and destination, respectively. Given the price range of each flight, promising results are disclosed.},\n  keywords = {particle filtering (numerical methods);pricing;sampling methods;state estimation;stock markets;dyadic particle filter;stock price prediction;flight price prediction;market segment latent vector;hidden information;stock market;latent vectors corresponds;price range;sequential importance resampling;latent vector;state estimation procedure;Covariance matrices;Europe;Signal processing;Signal processing algorithms;Random variables;Monte Carlo methods;Estimation},\n  doi = {10.23919/EUSIPCO.2019.8903078},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533051.pdf},\n}\n\n
\n
\n\n\n
\n The most difficult task in financial forecasting is accurate price prediction based on previous values. Two cases are studied: stock price prediction and flight price prediction. A dyadic particle filter based on sequential importance resampling is proposed, which captures the dynamic evolution of a pair of latent vectors. In stock price prediction, one latent vector is defined for each stock and is paired with a market-segment latent vector introduced for each group of companies in the same category. Both latent vectors capture the hidden information of the stock market and reinforce the state estimation procedure. This hidden information strongly influences the performance of the particle filter, yielding more accurate stock price predictions than state-of-the-art techniques. For flight price prediction, the pair of latent vectors corresponds to route and destination, respectively. Given the price range of each flight, promising results are reported.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Neuro-Inspired Compression of RGB Images.\n \n \n \n \n\n\n \n Doutsi, E.; Tzagkarakis, G.; and Tsakalides, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Neuro-InspiredPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903079,\n  author = {E. Doutsi and G. Tzagkarakis and P. Tsakalides},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Neuro-Inspired Compression of RGB Images},\n  year = {2019},\n  pages = {1-5},\n  abstract = {During the last decade, there is an ever increasing interest about the decryption and analysis of the human visual system, which offers an intelligent mechanism for capturing and transforming the visual stimulus into a very dense and informative code of spikes. The compression capacity of the visual system is beyond the latest image and video compression standards, motivating the image processing community to investigate whether a neuro-inspired system, that performs according to the visual system, could outperform the state-of the-art image compression methods. Inspired by neuroscience models, this paper proposes for a first time a neuro-inspired compression method for RGB images. Specifically, each color channel is processed by a retina-inspired filter combined with a compression scheme based on spikes. We demonstrate that, even for a very small number of bits per pixel (bpp), our proposed compression system is capable of extracting faithful and exact knowledge from the input scene, compared against the JPEG that generates strong artifacts. To evaluate the performance of the proposed algorithm we use Full-Reference (FR) and No-Reference (NR) Image Quality Assessments (IQA). 
We further validate the performance improvements by applying an edge detector on the decompressed images, illustrating that contour extraction is much more precise for the images compressed via our neuro-inspired algorithm.},\n  keywords = {cryptography;data compression;edge detection;feature extraction;image colour analysis;image filtering;video coding;color channel;human visual system analysis;contour extraction;edge detector;FR IQA;NR IQA;full-reference image quality assessments;JPEG;neuroscience models;neuro-inspired algorithm;decompressed images;No-Reference Image Quality Assessments;compression system;retina-inspired filter;neuro-inspired compression method;image compression methods;neuro-inspired system;image processing community;video compression standards;compression capacity;informative code;dense code;visual stimulus;intelligent mechanism;decryption;RGB images;Image coding;Transform coding;Visualization;Image color analysis;Signal processing algorithms;Image reconstruction;Visual systems;Retina-inspired filter;Leaky Integrate-and-Fire model;spikes;FR-IQA;NR-IQA;edge detection},\n  doi = {10.23919/EUSIPCO.2019.8903079},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533867.pdf},\n}\n\n
\n
\n\n\n
\n During the last decade, there has been ever-increasing interest in deciphering and analyzing the human visual system, which offers an intelligent mechanism for capturing and transforming visual stimuli into a very dense and informative code of spikes. The compression capacity of the visual system is beyond the latest image and video compression standards, motivating the image processing community to investigate whether a neuro-inspired system that operates like the visual system could outperform state-of-the-art image compression methods. Inspired by neuroscience models, this paper proposes, for the first time, a neuro-inspired compression method for RGB images. Specifically, each color channel is processed by a retina-inspired filter combined with a spike-based compression scheme. We demonstrate that, even for a very small number of bits per pixel (bpp), the proposed compression system is capable of extracting faithful and exact knowledge from the input scene, compared against JPEG, which generates strong artifacts. To evaluate the performance of the proposed algorithm we use Full-Reference (FR) and No-Reference (NR) Image Quality Assessment (IQA) measures. We further validate the performance improvements by applying an edge detector to the decompressed images, illustrating that contour extraction is much more precise for images compressed with our neuro-inspired algorithm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Feature Extraction for Person Re-Identification via Distillation.\n \n \n \n \n\n\n \n Salgado, F.; Mehta, R.; and Correia, P. L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903080,\n  author = {F. Salgado and R. Mehta and P. L. Correia},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Feature Extraction for Person Re-Identification via Distillation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Person re-identification has received increasing attention due to the high performance achieved by new methods based on deep learning. With larger networks of cameras being deployed, more surveillance videos need to be parsed, and extracting features for each frame remains a bottleneck. In addition, the feature extraction needs to be robust to images captured in a variety of scenarios. We propose using deep neural network distillation for training a feature extractor with a lower computational cost, while keeping track of its cross-domain ability. In the end, the proposed model is three times faster, without a decrease in accuracy. Results are validated on two popular person re-identification benchmark datasets and compared to a solution using ResNet.},\n  keywords = {feature extraction;neural nets;video surveillance;feature extraction;deep learning;surveillance videos;deep neural network distillation;cross-domain ability;person reidentification benchmark datasets;ResNet;Feature extraction;Training;Computational modeling;Computer architecture;Cameras;Europe;Signal processing;person re-identification;cross-domain;distillation;convolutional neural networks},\n  doi = {10.23919/EUSIPCO.2019.8903080},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528044.pdf},\n}\n\n
\n
\n\n\n
\n Person re-identification has received increasing attention due to the high performance achieved by new methods based on deep learning. With larger networks of cameras being deployed, more surveillance videos need to be parsed, and extracting features for each frame remains a bottleneck. In addition, the feature extraction needs to be robust to images captured in a variety of scenarios. We propose using deep neural network distillation to train a feature extractor with a lower computational cost, while keeping track of its cross-domain ability. The resulting model is three times faster, without a decrease in accuracy. Results are validated on two popular person re-identification benchmark datasets and compared to a solution using ResNet.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast CU Size Decisions for HEVC Inter-Prediction Using Support Vector Machines.\n \n \n \n \n\n\n \n Erabadda, B.; Mallikarachchi, T.; Kulupana, G.; and Fernando, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FastPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903081,\n  author = {B. Erabadda and T. Mallikarachchi and G. Kulupana and A. Fernando},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Fast CU Size Decisions for HEVC Inter-Prediction Using Support Vector Machines},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The brute force rate-distortion optimisation based approach used in the High Efficiency Video Coding(HEVC) encoders to determine the best block partitioning structure for a given content demands an excessive amount of computational resources. In this context, this paper proposes a novel algorithm to reduce the computational complexity of HEVC inter-prediction using Support Vector Machines. The proposed algorithm predicts the Coding Unit (CU) split decision of a particular block enabling the encoder to directly encode the selected block, avoiding the unnecessary evaluation of the remaining CU size combinations. Experimental results demonstrate encoding time reductions of ~58% ~50%2.27%1.89% Bjøntegaard Delta Bit Rate (BDBR) losses for Random Access and Low-Delay B configurations, respectively.},\n  keywords = {computational complexity;encoding;optimisation;rate distortion theory;support vector machines;video coding;fast CU size decisions;HEVC inter-prediction;support vector machines;brute force rate-distortion optimisation;block partitioning structure;computational complexity;coding unit split decision;high efficiency video coding encoders;Bjøntegaard delta bit rate losses;BDBR;Encoding;Prediction algorithms;Support vector machines;Predictive models;Streaming media;Training;Partitioning algorithms;High Efficiency Video Coding (HEVC);interprediction;Coding Unit (CU);Support Vector Machine (SVM);encoding complexity reduction},\n  doi = {10.23919/EUSIPCO.2019.8903081},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533413.pdf},\n}\n\n
\n
\n\n\n
\n The brute-force rate-distortion optimisation approach used in High Efficiency Video Coding (HEVC) encoders to determine the best block partitioning structure for a given content demands an excessive amount of computational resources. In this context, this paper proposes a novel algorithm to reduce the computational complexity of HEVC inter-prediction using Support Vector Machines. The proposed algorithm predicts the Coding Unit (CU) split decision of a particular block, enabling the encoder to directly encode the selected block and avoid the unnecessary evaluation of the remaining CU size combinations. Experimental results demonstrate encoding time reductions of ∼58% and ∼50%, with 2.27% and 1.89% Bjøntegaard Delta Bit Rate (BDBR) losses, for the Random Access and Low-Delay B configurations, respectively.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Auto-calibration of Uniform Linear Array Antennas.\n \n \n \n \n\n\n \n McKelvey, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Auto-calibrationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903082,\n  author = {T. McKelvey},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Auto-calibration of Uniform Linear Array Antennas},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Calibration is instrumental to realize the full performance of a measurement system. In this contribution we consider the calibration of a uniformly linear array antenna where we assume each antenna element has an unknown complex gain. We present an algorithm which can be used to calibrate the array without full knowledge of the environment. Particularly, if the number of signal sources are known we show that we can determine the individual unknown antenna gains up to an ambiguity parametrized by a single complex scalar. If the ratio of the complex gains between two consecutive elements is also known, this ambiguity is resolved. The method is based on determining the antenna calibration parameters such that the Hankel matrix of the array snapshots has a given rank. A numerical example illustrates the performance of the method. The numerical results suggest that the method is consistent in SNR.},\n  keywords = {array signal processing;calibration;direction-of-arrival estimation;linear antenna arrays;auto-calibration;uniform linear array antennas;measurement system;uniformly linear array;antenna element;unknown complex gain;signal sources;individual unknown antenna gains;single complex scalar;complex gains;consecutive elements;antenna calibration parameters;array snapshots;Linear antenna arrays;Calibration;Directive antennas;Direction-of-arrival estimation;Radar antennas;Estimation;Optimization;Calibration;Linear antenna arrays;Direction-of-arrival estimation},\n  doi = {10.23919/EUSIPCO.2019.8903082},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533794.pdf},\n}\n\n
\n
\n\n\n
\n Calibration is instrumental to realizing the full performance of a measurement system. In this contribution we consider the calibration of a uniform linear array antenna where we assume each antenna element has an unknown complex gain. We present an algorithm which can be used to calibrate the array without full knowledge of the environment. In particular, if the number of signal sources is known, we show that we can determine the individual unknown antenna gains up to an ambiguity parametrized by a single complex scalar. If the ratio of the complex gains between two consecutive elements is also known, this ambiguity is resolved. The method is based on determining the antenna calibration parameters such that the Hankel matrix of the array snapshots has a given rank. A numerical example illustrates the performance of the method, and the numerical results suggest that the method is consistent in SNR.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Computational Acceleration and Smart Initialization of Full-RANK Spatial Covariance Analysis.\n \n \n \n \n\n\n \n Sawada, H.; Ikeshita, R.; Ito, N.; and Nakatani, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ComputationalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903083,\n  author = {H. Sawada and R. Ikeshita and N. Ito and T. Nakatani},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Computational Acceleration and Smart Initialization of Full-RANK Spatial Covariance Analysis},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Full-rank spatial covariance analysis (FCA) is a method for blind source separation. It is based on a model for observation mixtures with flexible source-related parameters, and an EM algorithm is known to optimize the parameters. FCA has the potential to obtain high-quality separations. However, the algorithm for FCA is computationally demanding and sensitive to initializations. This paper proposes two practical techniques to make effective use of FCA. The first one is to accelerate the execution of the algorithm by using single-instruction-multiple-data (SIMD) instructions run on a GPU. The second one is to initialize the parameters appropriately by scanning the observation mixtures. 
Experimental results show that high-quality separations were achieved for 6-second real-room speech mixtures (4 sources and 3 microphones) with a computational time of less than 8 seconds.},\n  keywords = {blind source separation;covariance analysis;graphics processing units;parallel processing;computational acceleration;FCA;blind source separation;flexible source-related parameters;EM algorithm;single-instruction-multiple-data instructions;real-room speech mixtures;full-RANK spatial covariance analysis;SIMD instructions;GPU;microphones;Covariance matrices;Signal processing algorithms;Acceleration;Computational modeling;Blind source separation;Graphics processing units;blind source separation (BSS);full-rank spatial covariance analysis (FCA);expectation-maximization (EM) algorithm;matrix inversion;single instruction multiple data (SIMD)},\n  doi = {10.23919/EUSIPCO.2019.8903083},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529461.pdf},\n}\n\n
\n
\n\n\n
\n Full-rank spatial covariance analysis (FCA) is a method for blind source separation. It is based on a model for observation mixtures with flexible source-related parameters, and an EM algorithm is known to optimize the parameters. FCA has the potential to obtain high-quality separations. However, the algorithm for FCA is computationally demanding and sensitive to initializations. This paper proposes two practical techniques to make effective use of FCA. The first one is to accelerate the execution of the algorithm by using single-instruction-multiple-data (SIMD) instructions run on a GPU. The second one is to initialize the parameters appropriately by scanning the observation mixtures. Experimental results show that high-quality separations were achieved for 6-second real-room speech mixtures (4 sources and 3 microphones) with a computational time of less than 8 seconds.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Variational Bayesian GAN.\n \n \n \n \n\n\n \n Chien, J. -.; and Kuo, C. -.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"VariationalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903084,\n  author = {J. -T. Chien and C. -L. Kuo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Variational Bayesian GAN},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Generative adversarial network (GAN) has been successfully developing as a generative model where the artificial data drawn from the generator are misrecognized as real samples by a discriminator. Although GAN achieves the desirable performance, the challenge is that the mode collapse easily happens in the joint optimization of generator and discriminator. This study copes with this challenge by improving the model regularization by means of representing the weight uncertainty in GAN. A new Bayesian GAN is formulated and implemented to learn a regularized model from diverse data where the strong modes are flattened via the marginalization and the issues of model collapse and gradient vanishing are alleviated. In particular, we present a variational GAN (VGAN) where the encoder, generator and discriminator are jointly estimated according to the variational Bayesian inference. The experiments on image generation over two tasks (MNIST and CeleA) demonstrate the superiority of the proposed VGAN to the variational autoencoder, the standard GAN and the Bayesian GAN based on the sampling method. 
The learning efficiency and generation performance are evaluated.},\n  keywords = {Bayes methods;belief networks;computer vision;inference mechanisms;learning (artificial intelligence);neural nets;sampling methods;variational Bayesian GAN;generative adversarial network;joint optimization;model regularization;gradient vanishing;variational Bayesian inference;image generation;variational autoencoder;learning efficiency;weight uncertainty;sampling method;VGAN;Gallium nitride;Generative adversarial networks;Generators;Bayes methods;Data models;Uncertainty;Training;generative adversarial networks;Bayesian learning;variational autoencoder;computer vision},\n  doi = {10.23919/EUSIPCO.2019.8903084},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533144.pdf},\n}\n\n
\n
\n\n\n
\n The generative adversarial network (GAN) has been successfully developed as a generative model in which artificial data drawn from the generator are misrecognized as real samples by a discriminator. Although the GAN achieves desirable performance, a key challenge is that mode collapse easily happens in the joint optimization of the generator and discriminator. This study copes with this challenge by improving model regularization through representing the weight uncertainty in the GAN. A new Bayesian GAN is formulated and implemented to learn a regularized model from diverse data, where strong modes are flattened via marginalization and the issues of mode collapse and gradient vanishing are alleviated. In particular, we present a variational GAN (VGAN) where the encoder, generator and discriminator are jointly estimated according to variational Bayesian inference. Experiments on image generation over two tasks (MNIST and CelebA) demonstrate the superiority of the proposed VGAN over the variational autoencoder, the standard GAN and the sampling-based Bayesian GAN. The learning efficiency and generation performance are evaluated.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Investigation of Reverse Mode Loudspeaker Performance in Urban Sound Classification.\n \n \n \n \n\n\n \n Kalmar, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"InvestigationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903085,\n  author = {G. Kalmar},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Investigation of Reverse Mode Loudspeaker Performance in Urban Sound Classification},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The loudspeaker is a transducer that converts electrical signals to sound. However, it is well-known that in reverse mode, it can convert sound to an electrical signal. In this paper, the reverse mode behavior is investigated through the analysis of its influence on urban sound classification accuracy by comparing the results of deep learning based classifiers. As no audio datasets recorded by loudspeakers are available, a popular traditional dataset was used and transformed into forms as they would have been recorded by reverse mode speakers. These transformations simulated the loudspeakers' electrical responses to acoustical excitation signals based on their reverse mode transfer functions, which were derived from equivalent mechanical circuits. The details of this reverse mode modeling are also included. The transformed datasets were used during the trainings of the classifiers, and the effects of different speaker parameters and noise levels were examined and compared. The results showed that smaller, full-range speakers performed better than bigger woofers. 
The types of well-classified events revealed that loud, impulsive events could be classified more accurately.},\n  keywords = {acoustic signal processing;audio signal processing;equivalent circuits;learning (artificial intelligence);loudspeakers;signal classification;transfer functions;acoustical excitation signals;reverse mode transfer functions;reverse mode modeling;transformed datasets;speaker parameters;reverse mode loudspeaker performance;electrical signal;reverse mode behavior;urban sound classification accuracy;deep learning based classifiers;reverse mode speakers;loudspeakers;transducer;loudspeaker electrical responses;noise levels;equivalent mechanical circuits;Loudspeakers;Impedance;Transfer functions;Integrated circuit modeling;Equivalent circuits;Microphones;Force;loudspeaker;reverse mode;sound classification},\n  doi = {10.23919/EUSIPCO.2019.8903085},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533648.pdf},\n}\n\n
\n
\n\n\n
\n The loudspeaker is a transducer that converts electrical signals to sound. However, it is well-known that in reverse mode, it can convert sound to an electrical signal. In this paper, the reverse mode behavior is investigated through the analysis of its influence on urban sound classification accuracy by comparing the results of deep learning based classifiers. As no audio datasets recorded by loudspeakers are available, a popular traditional dataset was used and transformed into forms as they would have been recorded by reverse mode speakers. These transformations simulated the loudspeakers' electrical responses to acoustical excitation signals based on their reverse mode transfer functions, which were derived from equivalent mechanical circuits. The details of this reverse mode modeling are also included. The transformed datasets were used during the training of the classifiers, and the effects of different speaker parameters and noise levels were examined and compared. The results showed that smaller, full-range speakers performed better than bigger woofers. The types of well-classified events revealed that loud, impulsive events could be classified more accurately.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Continuum Model for Route Optimization in Large-Scale Inhomogeneous Multi-Hop Wireless Networks.\n \n \n \n \n\n\n \n Hedges, D. A.; Coon, J. P.; and Chen, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903086,\n  author = {D. A. Hedges and J. P. Coon and G. Chen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Continuum Model for Route Optimization in Large-Scale Inhomogeneous Multi-Hop Wireless Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Multi-hop route optimization in large-scale inhomogeneous networks typically requires the use of constrained optimization tools, yielding processing complexity that scales like O(N3), N being the number of relays employed. Here, we propose an alternative approach to route optimization by considering the limit of infinite relay node density to develop a continuum model, which yields an optimized equivalent continuous relay path. The model is carefully constructed to maintain a constant connection density even though the node density scales without bound. This leads to a formulation for minimizing the end-to-end outage probability that can be solved using methods from the calculus of variations. 
We demonstrate the effectiveness of this new approach and its potential for reducing processing complexity by considering a network subjected to a point source of interference.},\n  keywords = {communication complexity;minimisation;probability;radio networks;radiofrequency interference;telecommunication network routing;continuum model;large-scale inhomogeneous multihop wireless networks;multihop route optimization;constrained optimization tools;infinite relay node density;constant connection density;node density scales;equivalent continuous relay path optimization;end-to-end outage probability minimization;interference;Power system reliability;Probability;Relays;Interference;Optimization;Nonhomogeneous media;Wireless networks;Multi-hop relaying;calculus of variations;outage;continuum modeling},\n  doi = {10.23919/EUSIPCO.2019.8903086},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532621.pdf},\n}\n\n
\n
\n\n\n
\n Multi-hop route optimization in large-scale inhomogeneous networks typically requires the use of constrained optimization tools, yielding processing complexity that scales like O(N³), N being the number of relays employed. Here, we propose an alternative approach to route optimization by considering the limit of infinite relay node density to develop a continuum model, which yields an optimized equivalent continuous relay path. The model is carefully constructed to maintain a constant connection density even though the node density scales without bound. This leads to a formulation for minimizing the end-to-end outage probability that can be solved using methods from the calculus of variations. We demonstrate the effectiveness of this new approach and its potential for reducing processing complexity by considering a network subjected to a point source of interference.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Unifying Framework for Blind Source Separation Based on A Joint Diagonalizability Constraint.\n \n \n \n \n\n\n \n Ikeshita, R.; Ito, N.; Nakatani, T.; and Sawada, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903087,\n  author = {R. Ikeshita and N. Ito and T. Nakatani and H. Sawada},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Unifying Framework for Blind Source Separation Based on A Joint Diagonalizability Constraint},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We present a unifying framework for dealing with convolutive blind source separation (BSS), which fully models inter-channel, inter-frequency, and inter-frame correlation of sources by latent covariance matrices subject to a joint diagonalizability constraint. The framework is shown to encompass as its specific realizations a variety of standard BSS and dereverberation methods that have been developed independently, including frequency-domain independent component analysis (FDICA), fast full-rank spatial covariance analysis (FastFCA), and weighted prediction error (WPE). This gives a unified view of conventional methods and a systematic way of deriving new BSS methods. 
A BSS experiment on speech mixtures showed improved separation performance of a proposed method compared to the state-of-the-art independent low-rank matrix analysis.},\n  keywords = {blind source separation;convolution;covariance analysis;covariance matrices;frequency-domain analysis;independent component analysis;speech processing;models inter-channel;inter-frequency;latent covariance matrices subject;joint diagonalizability constraint;specific realizations;standard BSS;dereverberation methods;frequency-domain independent component analysis;full-rank spatial covariance analysis;BSS methods;BSS experiment;separation performance;low-rank matrix analysis;unifying framework;convolutive blind source separation;Covariance matrices;Correlation;Blind source separation;Optimization;Analytical models;Tensors;Decorrelation;Blind source separation;joint diagonalization;independent component analysis;dereverberation},\n  doi = {10.23919/EUSIPCO.2019.8903087},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533452.pdf},\n}\n\n
\n
\n\n\n
\n We present a unifying framework for dealing with convolutive blind source separation (BSS), which fully models inter-channel, inter-frequency, and inter-frame correlation of sources by latent covariance matrices subject to a joint diagonalizability constraint. The framework is shown to encompass as its specific realizations a variety of standard BSS and dereverberation methods that have been developed independently, including frequency-domain independent component analysis (FDICA), fast full-rank spatial covariance analysis (FastFCA), and weighted prediction error (WPE). This gives a unified view of conventional methods and a systematic way of deriving new BSS methods. A BSS experiment on speech mixtures showed improved separation performance of the proposed method compared to the state-of-the-art independent low-rank matrix analysis.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Analysis of Parkinson’s Disease Dysgraphia Based on Optimized Fractional Order Derivative Features.\n \n \n \n\n\n \n Mucha, J.; Faundez-Zanuy, M.; Mekyska, J.; Zvoncak, V.; Galaz, Z.; Kiska, T.; Smekal, Z.; Brabenec, L.; Rektorova, I.; and Lopez-de-Ipina, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903088,\n  author = {J. Mucha and M. Faundez-Zanuy and J. Mekyska and V. Zvoncak and Z. Galaz and T. Kiska and Z. Smekal and L. Brabenec and I. Rektorova and K. Lopez-de-Ipina},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Analysis of Parkinson’s Disease Dysgraphia Based on Optimized Fractional Order Derivative Features},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Parkinson's disease (PD) is a common neurodegenerative disorder with prevalence rate estimated to 1.5 % for people age over 65 years. The majority of PD patients is associated with handwriting abnormalities called PD dysgraphia, which is linked with rigidity and bradykinesia of muscles involved in the handwriting process. One of the effective approaches of quantitative PD dysgraphia analysis is based on online handwriting processing. In the frame of this study we aim to deeply evaluate and optimize advanced PD handwriting quantification based on fractional order derivatives (FD). For this purpose, we used 37 PD patients and 38 healthy controls from the PaHaW (PD handwriting database). The FD based features were employed in classification and regression analysis (using gradient boosted trees), and evaluated in terms of their discrimination power and abilities to assess severity of PD. The results suggest that the most discriminative and descriptive information provide FD based features extracted from a repetitive loop task or a sentence copy task (maximum sensitivity/specificity = 76 %, error in severity assessment = 14 %, error in PD duration estimation = 22 %). Next, we identified two optimal ranges for the order of fractional derivative, α = 0.05-0.45 and α = 0.65-0.80. 
Finally, we observed that inclusion of pressure, azimuth, and tilt together with kinematic features into mathematical modeling has no influence (positive or negative) on classification performance, however, there was a notable improvement in the estimation of PD duration.},\n  keywords = {biomechanics;diseases;feature extraction;medical disorders;muscle;neurophysiology;patient diagnosis;physiological models;regression analysis;shear modulus;trees (mathematics);classification performance;regression analysis;discriminative information;descriptive information;PD duration estimation;kinematic features;optimized fractional order derivative features;neurodegenerative disorder;prevalence rate;people age;handwriting abnormalities;muscle rigidity;handwriting process;quantitative PD dysgraphia analysis;online handwriting processing;advanced PD handwriting quantification;PD patients;healthy controls;PD handwriting database;Parkinson's disease dysgraphia;bradykinesia;FD based feature extraction;gradient boosted trees;mathematical modeling;Task analysis;Feature extraction;Kinematics;Parkinson's disease;Databases;Regression analysis;Europe;online handwriting;Parkinson’s disease;dysgraphia;fractal calculus;fractional derivatives;classification;regression},\n  doi = {10.23919/EUSIPCO.2019.8903088},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Parkinson's disease (PD) is a common neurodegenerative disorder with a prevalence rate estimated at 1.5% among people aged over 65 years. The majority of PD patients exhibit handwriting abnormalities called PD dysgraphia, which is linked with rigidity and bradykinesia of muscles involved in the handwriting process. One effective approach to quantitative PD dysgraphia analysis is based on online handwriting processing. In this study we aim to deeply evaluate and optimize advanced PD handwriting quantification based on fractional order derivatives (FD). For this purpose, we used 37 PD patients and 38 healthy controls from the PaHaW (PD handwriting database). The FD based features were employed in classification and regression analysis (using gradient boosted trees), and evaluated in terms of their discrimination power and abilities to assess severity of PD. The results suggest that the most discriminative and descriptive information is provided by FD-based features extracted from a repetitive loop task or a sentence copy task (maximum sensitivity/specificity = 76 %, error in severity assessment = 14 %, error in PD duration estimation = 22 %). Next, we identified two optimal ranges for the order of fractional derivative, α = 0.05-0.45 and α = 0.65-0.80. Finally, we observed that inclusion of pressure, azimuth, and tilt together with kinematic features into mathematical modeling has no influence (positive or negative) on classification performance; however, there was a notable improvement in the estimation of PD duration.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Ad-hoc mobile array based audio segmentation using latent variable stochastic model.\n \n \n \n \n\n\n \n Chetupalli, S. R.; Bhowmick, A.; and Sreenivas, T. V.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Ad-hocPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903089,\n  author = {S. R. Chetupalli and A. Bhowmick and T. V. Sreenivas},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Ad-hoc mobile array based audio segmentation using latent variable stochastic model},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Segmentation/diarization of audio recordings using a network of ad-hoc mobile arrays and the spatial information gathered is a part of acoustic scene analysis. In this ad-hoc mobile array network, we assume fine (sample level) synchronization of the signals only at each mobile node and a gross synchronization (frame level) across different nodes is sufficient. We compute spatial features at each node in a distributed manner without the overhead of signal data aggregation between mobile devices. The spatial features are then modeled jointly using a Dirichlet mixture model, and the posterior probabilities of the mixture components are used to derive the segmentation information. 
Experiments on real life recordings in a reverberant room using a network of randomly placed mobile phones has shown a diarization error rate of less than 14% even with overlapped talkers.},\n  keywords = {array signal processing;audio recording;audio signal processing;data aggregation;Gaussian processes;mixture models;mobile ad hoc networks;mobile handsets;probability;reverberation;stochastic processes;synchronisation;audio segmentation;latent variable stochastic model;spatial information;acoustic scene analysis;ad-hoc mobile array network;signal synchronization;sample level;mobile node;gross synchronization;frame level;spatial features;mobile devices;Dirichlet mixture model;segmentation information;mobile phones;audio recording segmentation;audio recording diarization;posterior probabilities;real life recordings;reverberant room;diarization error rate;signal data aggregation;Microphones;Computational modeling;Stochastic processes;Mobile handsets;Ad hoc networks;Audio recording;Mixture models;Diarization;Dirichlet distribution;steered response power;acoustic sensor network;mobile devices},\n  doi = {10.23919/EUSIPCO.2019.8903089},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533439.pdf},\n}\n\n
\n
\n\n\n
\n Segmentation/diarization of audio recordings using a network of ad-hoc mobile arrays and the spatial information gathered is a part of acoustic scene analysis. In this ad-hoc mobile array network, we assume fine (sample level) synchronization of the signals only at each mobile node and a gross synchronization (frame level) across different nodes is sufficient. We compute spatial features at each node in a distributed manner without the overhead of signal data aggregation between mobile devices. The spatial features are then modeled jointly using a Dirichlet mixture model, and the posterior probabilities of the mixture components are used to derive the segmentation information. Experiments on real life recordings in a reverberant room using a network of randomly placed mobile phones have shown a diarization error rate of less than 14% even with overlapped talkers.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Single Image Ear Recognition Using Wavelet-Based Multi-Band PCA.\n \n \n \n \n\n\n \n Zarachoff, M.; Sheikh-Akbari, A.; and Monekosso, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-4, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SinglePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903090,\n  author = {M. Zarachoff and A. Sheikh-Akbari and D. Monekosso},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Single Image Ear Recognition Using Wavelet-Based Multi-Band PCA},\n  year = {2019},\n  pages = {1-4},\n  abstract = {Principal Component Analysis (PCA) has been successfully used for many applications, including ear recognition. This paper presents a 2D Wavelet based Multi-Band PCA (2DWMBPCA) method, inspired by PCA based techniques for multispectral and hyperspectral images, which have shown a significantly higher performance to that of standard PCA. The proposed method performs 2D non-decimated wavelet transform on the input image dividing the image into its subbands. It then splits each resulting subband into a number of bands evenly based on the coefficient values. Standard PCA is then applied on each resulting set of bands to extract the subbands eigenvectors, which are used as features for matching. Experimental results on images of two benchmark ear image datasets show that the proposed 2D-WMBPCA significantly outperforms both the standard PCA method and the eigenfaces method.},\n  keywords = {ear;eigenvalues and eigenfunctions;face recognition;feature extraction;image matching;principal component analysis;wavelet transforms;PCA based techniques;multispectral images;hyperspectral images;input image;benchmark ear image datasets;2D-WMBPCA;standard PCA method;single image ear recognition;Principal Component Analysis;2D Wavelet based MultiBand PCA method;Principal component analysis;Ear;Two dimensional displays;Standards;Feature extraction;Image recognition;Wavelet analysis;Ear recognition;principal component analysis;multi-band image creation;non-decimated wavelet transform},\n  doi = {10.23919/EUSIPCO.2019.8903090},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533816.pdf},\n}\n\n
\n
\n\n\n
\n Principal Component Analysis (PCA) has been successfully used for many applications, including ear recognition. This paper presents a 2D Wavelet based Multi-Band PCA (2DWMBPCA) method, inspired by PCA based techniques for multispectral and hyperspectral images, which have shown significantly higher performance than standard PCA. The proposed method performs 2D non-decimated wavelet transform on the input image, dividing the image into its subbands. It then splits each resulting subband into a number of bands evenly based on the coefficient values. Standard PCA is then applied on each resulting set of bands to extract the subband eigenvectors, which are used as features for matching. Experimental results on images of two benchmark ear image datasets show that the proposed 2D-WMBPCA significantly outperforms both the standard PCA method and the eigenfaces method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cauchy Multichannel Speech Enhancement with a Deep Speech Prior.\n \n \n \n \n\n\n \n Fontaine, M.; Nugraha, A. A.; Badeau, R.; Yoshii, K.; and Liutkus, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CauchyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903091,\n  author = {M. Fontaine and A. A. Nugraha and R. Badeau and K. Yoshii and A. Liutkus},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Cauchy Multichannel Speech Enhancement with a Deep Speech Prior},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We propose a semi-supervised multichannel speech enhancement system based on a probabilistic model which assumes that both speech and noise follow the heavy-tailed multi-variate complex Cauchy distribution. As we advocate, this allows handling strong and adverse noisy conditions. Consequently, the model is parameterized by the source magnitude spectrograms and the source spatial scatter matrices. To deal with the non-additivity of scatter matrices, our first contribution is to perform the enhancement on a projected space. Then, our second contribution is to combine a latent variable model for speech, which is trained by following the variational autoencoder framework, with a low-rank model for the noise source. At test time, an iterative inference algorithm is applied, which produces estimated parameters to use for separation. The speech latent variables are estimated first from the noisy speech and then updated by a gradient descent method, while a majoriation-equalization strategy is used to update both the noise and the spatial parameters of both sources. Our experimental results show that the Cauchy model outperforms the state-of-art methods. 
The standard deviation scores also reveal that the proposed method is more robust against non-stationary noise.},\n  keywords = {gradient methods;inference mechanisms;learning (artificial intelligence);matrix algebra;speech enhancement;statistical distributions;latent variable model;variational autoencoder framework;low-rank model;noise source;iterative inference algorithm;speech latent variables;noisy speech;nonstationary noise;Cauchy multichannel speech enhancement;semisupervised multichannel speech enhancement;probabilistic model;source magnitude spectrograms;source spatial scatter matrices;heavy-tailed multivariate complex Cauchy distribution;gradient descent method;majoriation-equalization strategy;standard deviation scores;Speech enhancement;Computational modeling;Spectrogram;Decoding;Probabilistic logic;Noise measurement;Training;Multichannel speech enhancement;multivariate complex Cauchy distribution;variational autoencoder;nonnegative matrix factorization},\n  doi = {10.23919/EUSIPCO.2019.8903091},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533890.pdf},\n}\n\n
\n
\n\n\n
\n We propose a semi-supervised multichannel speech enhancement system based on a probabilistic model which assumes that both speech and noise follow the heavy-tailed multi-variate complex Cauchy distribution. As we advocate, this allows handling strong and adverse noisy conditions. Consequently, the model is parameterized by the source magnitude spectrograms and the source spatial scatter matrices. To deal with the non-additivity of scatter matrices, our first contribution is to perform the enhancement on a projected space. Then, our second contribution is to combine a latent variable model for speech, which is trained by following the variational autoencoder framework, with a low-rank model for the noise source. At test time, an iterative inference algorithm is applied, which produces estimated parameters to use for separation. The speech latent variables are estimated first from the noisy speech and then updated by a gradient descent method, while a majorization-equalization strategy is used to update both the noise and the spatial parameters of both sources. Our experimental results show that the Cauchy model outperforms the state-of-the-art methods. The standard deviation scores also reveal that the proposed method is more robust against non-stationary noise.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Random Gabor Multipliers for Compressive Sensing: A Simulation Study.\n \n \n \n \n\n\n \n Rajbamshi, S.; Tauböck, G.; Balazs, P.; and Abreu, L. D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RandomPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903092,\n  author = {S. Rajbamshi and G. Tauböck and P. Balazs and L. D. Abreu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Random Gabor Multipliers for Compressive Sensing: A Simulation Study},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we analyze by means of simulations the applicability of random Gabor multipliers as compressive measurements. In particular, we consider signals that are sparse with respect to Fourier or Gabor dictionaries, i.e., signals that are sparse in frequency or time-frequency domains. This work is an extension of our earlier contribution, where we introduced random Gabor multipliers to compress signals that are sparse in time domain. As reconstruction technique we employ the well known ℓ1-minimization procedure. Finally, we evaluate the compression performance of random Gabor multipliers by applying them to a specific audio signal with inherent time-frequency sparsity. Our results highlight the strong potential of random Gabor multipliers for present and future real-world audio applications.},\n  keywords = {audio signal processing;compressed sensing;Fourier analysis;Gabor filters;minimisation;random processes;time-frequency analysis;random Gabor multipliers;signal compression;compressive sensing;simulation study;compressive measurements;Gabor dictionaries;Fourierdictionaries;ℓ1-minimization procedure;audio signal;time-frequency domains;frequency domains;time domain;time-frequency sparsity;Sparse matrices;Time-frequency analysis;Dictionaries;Stochastic processes;Standards;Gaussian distribution;Numerical models;Compressive Sensing;Gabor Multiplier;Random Matrix;Dictionary;Audio},\n  doi = {10.23919/EUSIPCO.2019.8903092},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533764.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we analyze by means of simulations the applicability of random Gabor multipliers as compressive measurements. In particular, we consider signals that are sparse with respect to Fourier or Gabor dictionaries, i.e., signals that are sparse in frequency or time-frequency domains. This work is an extension of our earlier contribution, where we introduced random Gabor multipliers to compress signals that are sparse in the time domain. As the reconstruction technique, we employ the well-known ℓ1-minimization procedure. Finally, we evaluate the compression performance of random Gabor multipliers by applying them to a specific audio signal with inherent time-frequency sparsity. Our results highlight the strong potential of random Gabor multipliers for present and future real-world audio applications.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Fast Local Mode Decision for the HEVC Intra Prediction Based on Direction Detection.\n \n \n \n \n\n\n \n Corrêa, M.; Zatt, B.; Palomino, D.; Corrêa, G.; and Agostini, L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903093,\n  author = {M. Corrêa and B. Zatt and D. Palomino and G. Corrêa and L. Agostini},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Fast Local Mode Decision for the HEVC Intra Prediction Based on Direction Detection},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This work presents a fast local mode decision for the HEVC intra prediction based on a direction detection algorithm first proposed in the Daala video format and currently used in the AV1 deringing filter. The main objective is to reduce the number of intra candidates entering the very expensive RDO loop by locally detecting the dominant edge direction of the original block, without the need of computing any of the intra prediction modes and associated RD costs. The proposed method was implemented on the latest HEVC reference encoder (HM 16.20), replacing the original local mode decision by the algorithm proposed in this paper. Experiments under the HEVC common test conditions, including UHD 4K (3840x2160 pixels) test sequences, showed, on average, 28.2% encoding time reduction, at a cost of 1.0% BD-BR (YUV) increase. Specifically for UHD 4K sequences, the experiments showed, on average, 30.0% encoding time reduction, at cost of 0.9% BD-BR (YUV) increase.},\n  keywords = {image filtering;image resolution;object detection;video coding;HEVC common test conditions;UHD 4K test sequences;fast local mode decision;HEVC intra prediction;direction detection algorithm;Daala video format;AV1 deringing filter;dominant edge direction;intra prediction modes;HEVC reference encoder;Prediction algorithms;Streaming media;Encoding;Image edge detection;Video coding;Detection algorithms;Filtering algorithms;video coding;HEVC;intra prediction;fast mode decision},\n  doi = {10.23919/EUSIPCO.2019.8903093},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530385.pdf},\n}\n\n
\n
\n\n\n
\n This work presents a fast local mode decision for the HEVC intra prediction based on a direction detection algorithm first proposed in the Daala video format and currently used in the AV1 deringing filter. The main objective is to reduce the number of intra candidates entering the very expensive RDO loop by locally detecting the dominant edge direction of the original block, without the need of computing any of the intra prediction modes and associated RD costs. The proposed method was implemented on the latest HEVC reference encoder (HM 16.20), replacing the original local mode decision by the algorithm proposed in this paper. Experiments under the HEVC common test conditions, including UHD 4K (3840x2160 pixels) test sequences, showed, on average, a 28.2% encoding time reduction at a cost of a 1.0% BD-BR (YUV) increase. Specifically for UHD 4K sequences, the experiments showed, on average, a 30.0% encoding time reduction at a cost of a 0.9% BD-BR (YUV) increase.\n
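The dominant-direction idea described above can be illustrated outside any codec. The sketch below is not the Daala/AV1 direction search itself (which scores directions on downsampled pixel sums); it is a simplified, hypothetical stand-in that picks a block's dominant edge orientation from a magnitude-weighted histogram of finite-difference gradient angles. Function name and bin count are illustrative.

```python
import numpy as np

def dominant_direction(block, n_dirs=8):
    """Estimate the dominant edge direction of a block from finite-difference
    gradients (simplified stand-in for a codec direction search)."""
    gy, gx = np.gradient(block.astype(float))   # gradients along rows, cols
    mag = np.hypot(gx, gy)
    # the gradient is perpendicular to the edge; rotate by 90 degrees
    ang = (np.arctan2(gy, gx) + np.pi / 2) % np.pi
    bins = np.floor(ang / (np.pi / n_dirs)).astype(int) % n_dirs
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_dirs)
    return int(hist.argmax())                   # index of the dominant direction

# a block with a vertical edge should yield the vertical direction bin
block = np.zeros((8, 8))
block[:, 4:] = 1.0
d = dominant_direction(block)
```

With 8 bins over [0, 180) degrees, bin 4 corresponds to a 90-degree (vertical) edge, which is what the synthetic block above produces.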
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Extending Architecture Modeling for Signal Processing towards GPUs.\n \n \n \n \n\n\n \n Payvar, S.; Boutellier, J.; Morvan, A.; Rubattu, C.; and Pelcat, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8903094,\n  author = {S. Payvar and J. Boutellier and A. Morvan and C. Rubattu and M. Pelcat},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Extending Architecture Modeling for Signal Processing towards GPUs},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Efficient usage of heterogeneous computing architectures requires distribution of the workload to available processing elements. Traditionally, this mapping is done based on information acquired from application profiling. To reduce the high amount of manual work related to mapping, statistical application and architecture modeling can be applied for automating mapping exploration. Application modeling has been studied extensively, whereas architecture modeling has received less attention. Originally developed for signal processing systems, Linear System Level Architecture (LSLA) is the first architecture modeling approach that clearly distinguishes the underlying computation hardware from software. Up to now, LSLA has covered the modeling of multicore CPUs. This work proposes extending the LSLA model with GPU support, by including the notion of parallelism. The proposed GPU modeling extension is evaluated by performance estimation of three signal processing applications with various workload distributions on a desktop GPU, and a mobile GPU. The measured average fidelity of the proposed model is 93%.},\n  keywords = {graphics processing units;multiprocessing systems;signal processing;heterogeneous computing architectures;application profiling;statistical application;automating mapping exploration;architecture modeling;LSLA model;GPU modeling extension;signal processing applications;linear system level architecture;Graphics processing units;Computational modeling;Mathematical model;Signal processing;Computer architecture;Cost function;Parallel processing;modeling;architecture;design space exploration;signal processing systems},\n  doi = {10.23919/EUSIPCO.2019.8903094},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570527728.pdf},\n}\n\n
\n
\n\n\n
\n Efficient usage of heterogeneous computing architectures requires distribution of the workload to available processing elements. Traditionally, this mapping is done based on information acquired from application profiling. To reduce the high amount of manual work related to mapping, statistical application and architecture modeling can be applied for automating mapping exploration. Application modeling has been studied extensively, whereas architecture modeling has received less attention. Originally developed for signal processing systems, Linear System Level Architecture (LSLA) is the first architecture modeling approach that clearly distinguishes the underlying computation hardware from software. Up to now, LSLA has covered the modeling of multicore CPUs. This work proposes extending the LSLA model with GPU support, by including the notion of parallelism. The proposed GPU modeling extension is evaluated by performance estimation of three signal processing applications with various workload distributions on a desktop GPU, and a mobile GPU. The measured average fidelity of the proposed model is 93%.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Compressive Independent Component Analysis.\n \n \n \n \n\n\n \n Sheehan, M. P.; Kotzagiannidis, M. S.; and Davies, M. E.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8903095,\n  author = {M. P. Sheehan and M. S. Kotzagiannidis and M. E. Davies},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Compressive Independent Component Analysis},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper we investigate the minimal dimension statistic necessary in order to solve the independent component analysis (ICA) problem. We create a compressive learning framework for ICA and show for the first time that the memory complexity scales only quadratically with respect to the number of independent sources n, resulting in a vast improvement over other ICA methods. This is made possible by demonstrating a low dimensional model set, that exists in the cumulant based ICA problem, can be stably embedded into a compressed space from a larger dimensional cumulant tensor space. We show that identifying independent source signals can be achieved with high probability when the compression size m is of the optimal order of the intrinsic dimension of the ICA parameters and propose a iterative projection gradient algorithm to achieve this.},\n  keywords = {gradient methods;independent component analysis;learning (artificial intelligence);signal processing;statistics;tensors;compressive independent component analysis;compressive learning framework;memory complexity scales;ICA methods;larger dimensional cumulant tensor space;independent source signals;minimal dimension statistic;Tensors;Complexity theory;Signal processing algorithms;Mathematical model;Analytical models;Covariance matrices;Principal component analysis;Compressive learning;random moments;compressive sensing;independent component analysis;statistical learning},\n  doi = {10.23919/EUSIPCO.2019.8903095},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534054.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we investigate the minimal dimension statistic necessary to solve the independent component analysis (ICA) problem. We create a compressive learning framework for ICA and show for the first time that the memory complexity scales only quadratically with respect to the number of independent sources n, resulting in a vast improvement over other ICA methods. This is made possible by demonstrating that a low-dimensional model set, which exists in the cumulant-based ICA problem, can be stably embedded into a compressed space from a larger-dimensional cumulant tensor space. We show that identifying independent source signals can be achieved with high probability when the compression size m is of the optimal order of the intrinsic dimension of the ICA parameters, and we propose an iterative projection gradient algorithm to achieve this.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n ESPRIT Angles-of-Arrival Estimation with Missing Sensor Data.\n \n \n \n \n\n\n \n White, L. B.; and Jackson, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8903096,\n  author = {L. B. White and T. Jackson},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {ESPRIT Angles-of-Arrival Estimation with Missing Sensor Data},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper describes a procedure for Angles-of-Arrival (AoA) estimation for a uniform linear array (ULA) with missing sensors. The novelty of the approach is that, rather than using AoA estimates obtained from contiguous subarrays, we use estimates for the corresponding signal subspaces. The report shows that the ESPRIT invariance equations for each contiguous subarray define an operator that “propagates” the signal subspace beyond the physical array. Care needs to be taken to ensure the same bases are used for each subarray. The estimates of the signal subspaces for the missing array elements are appropriately combined to yield an estimate of the signal subspace for the complete ULA. The paper only addressed the case of one missing sensor, but the approach can be readily generalised. Simulations show that the proposed method yields AoA estimates which are very close to those obtained if there was no missing sensor, in contrast to the case where the measurements from the missing sensor were zero. In order to appropriately combine the estimates for the missing signal subspace terms, the report assessed the accuracy of signal subspace estimation as a function of the number N of ULA elements. Simulations indicate that, in the examples considered, the variance of these estimates decreases only as N^(1/3) which is surprising given that the variance of AoA estimates (at least for one source) decrease as N^3. The paper suggests that further study of this empirical result is warranted.},\n  keywords = {array signal processing;direction-of-arrival estimation;ESPRIT invariance equations;AoA estimates;missing signal subspace terms;signal subspace estimation;ESPRIT Angles-of-Arrival estimation;missing sensor data;uniform linear array;signal subspaces;Sensor arrays;Covariance matrices;Array signal processing;Eigenvalues and eigenfunctions;Estimation;Sensor array signal processing;subspace methods.},\n  doi = {10.23919/EUSIPCO.2019.8903096},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529442.pdf},\n}\n\n
\n
\n\n\n
\n This paper describes a procedure for Angles-of-Arrival (AoA) estimation for a uniform linear array (ULA) with missing sensors. The novelty of the approach is that, rather than using AoA estimates obtained from contiguous subarrays, we use estimates of the corresponding signal subspaces. The paper shows that the ESPRIT invariance equations for each contiguous subarray define an operator that “propagates” the signal subspace beyond the physical array. Care needs to be taken to ensure the same bases are used for each subarray. The estimates of the signal subspaces for the missing array elements are appropriately combined to yield an estimate of the signal subspace for the complete ULA. The paper only addresses the case of one missing sensor, but the approach can be readily generalised. Simulations show that the proposed method yields AoA estimates which are very close to those obtained if there were no missing sensor, in contrast to the case where the measurements from the missing sensor are set to zero. In order to appropriately combine the estimates for the missing signal subspace terms, the paper assesses the accuracy of signal subspace estimation as a function of the number N of ULA elements. Simulations indicate that, in the examples considered, the variance of these estimates decreases only as N^(1/3), which is surprising given that the variance of AoA estimates (at least for one source) decreases as N^3. The paper suggests that further study of this empirical result is warranted.\n
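For context, the ESPRIT invariance relation the abstract builds on can be sketched for a complete ULA; the paper's subspace-propagation step for missing sensors is not reproduced here. Half-wavelength spacing, the array size, and the simulation parameters below are illustrative assumptions.

```python
import numpy as np

def esprit_aoa(X, n_sources):
    """Standard ESPRIT AoA estimation for a half-wavelength-spaced ULA.
    X: (n_sensors, n_snapshots) complex snapshot matrix."""
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
    Es = V[:, -n_sources:]                     # signal subspace
    # invariance between the two maximally overlapping subarrays
    Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.degrees(np.arcsin(phases / np.pi))

# one source at 20 degrees, 8-element ULA, 200 snapshots, light noise
rng = np.random.default_rng(0)
n, snapshots, theta = 8, 200, np.radians(20.0)
a = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))   # steering vector
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(a, s) + 0.01 * (rng.standard_normal((n, snapshots))
                             + 1j * rng.standard_normal((n, snapshots)))
est = esprit_aoa(X, 1)
```

The eigenvalues of the invariance operator carry the phase shift between subarrays, from which the arrival angle is read off.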
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive Localized Cayley Parametrization Technique for Smooth optimization over the Stiefel Manifold.\n \n \n \n \n\n\n \n Kume, K.; and Yamada, I.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8903097,\n  author = {K. Kume and I. Yamada},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive Localized Cayley Parametrization Technique for Smooth optimization over the Stiefel Manifold},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We propose a novel computational strategy, named the adaptive localized Cayley parametrization technique for acceleration of optimization over the Stiefel manifold. The proposed optimization algorithm is designed as a gradient descent type scheme for the composite of the original cost function and the inverse of the localized Cayley transform defined on the vector space of all skew-symmetric matrices. Thanks to the adaptive localized Cayley transform which is a computable diffeomorphism between the orthogonal group and the vector space of the skew-symmetric matrices, the proposed algorithm (i) is free from the singularity issue, which can cause performance degradation, observed in the dual Cayley parametrization technique [Yamada-Ezaki'03] as well as (ii) can enjoy powerful arts for acceleration on the vector space without suffering from the nonlinear nature of the Stiefel manifold. We also present a convergence analysis, for the prototype algorithm employing the Armijo's rule, that shows the gradient of the composite function at zero in the range space of the localized Cayley transform is guaranteed to converge to zero. Numerical experiments show excellent performance compared with major optimization algorithms designed essentially with retractions on the tangent space of the Stiefel manifold [Absil-Mahony-Sepulcher'08, Wen-Yin'13].},\n  keywords = {convergence of numerical methods;gradient methods;optimisation;transforms;vectors;vector space;skew-symmetric matrices;dual Cayley parametrization technique;Stiefel manifold;localized Cayley transform;adaptive localized Cayley parametrization technique;smooth optimization;gradient descent type scheme;original cost function;orthogonal group;performance degradation;convergence analysis;prototype algorithm;Armijo's rule;composite function;Optimization;Acceleration;Transforms;Manifolds;Signal processing algorithms;Newton method;Convergence;Stiefel manifold optimization;orthogonal group optimization;Riemannian manifold optimization;Cayley transform;Anderson acceleration},\n  doi = {10.23919/EUSIPCO.2019.8903097},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529270.pdf},\n}\n\n
\n
\n\n\n
\n We propose a novel computational strategy, named the adaptive localized Cayley parametrization technique, for acceleration of optimization over the Stiefel manifold. The proposed optimization algorithm is designed as a gradient descent type scheme for the composite of the original cost function and the inverse of the localized Cayley transform defined on the vector space of all skew-symmetric matrices. Thanks to the adaptive localized Cayley transform, which is a computable diffeomorphism between the orthogonal group and the vector space of skew-symmetric matrices, the proposed algorithm (i) is free from the singularity issue, which can cause the performance degradation observed in the dual Cayley parametrization technique [Yamada-Ezaki'03], and (ii) can enjoy powerful arts for acceleration on the vector space without suffering from the nonlinear nature of the Stiefel manifold. We also present a convergence analysis, for the prototype algorithm employing Armijo's rule, that shows the gradient of the composite function at zero in the range space of the localized Cayley transform is guaranteed to converge to zero. Numerical experiments show excellent performance compared with major optimization algorithms designed essentially with retractions on the tangent space of the Stiefel manifold [Absil-Mahony-Sepulchre'08, Wen-Yin'13].\n
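The parametrization above rests on the classical fact that the Cayley transform maps skew-symmetric matrices to orthogonal matrices. A minimal sketch of that mapping (not the authors' adaptive localization or acceleration scheme; the function name is illustrative):

```python
import numpy as np

def cayley(A):
    """Cayley transform: skew-symmetric A -> orthogonal (I + A)^{-1}(I - A)."""
    n = A.shape[0]
    I = np.eye(n)
    return np.linalg.solve(I + A, I - A)   # solves (I + A) Q = (I - A)

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M - M.T) / 2                          # skew-symmetric part of M
Q = cayley(A)
```

Because A is skew-symmetric, (I + A) is always invertible and Q is orthogonal with determinant 1, so gradient steps taken in the flat space of skew-symmetric matrices always land back on the orthogonal group.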
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Target Identification from Coded Diffraction Patterns via Template Matching.\n \n \n \n \n\n\n \n Jerez, A.; Pinilla, S.; Garcia, H.; and Arguello, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8903098,\n  author = {A. Jerez and S. Pinilla and H. Garcia and H. Arguello},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Target Identification from Coded Diffraction Patterns via Template Matching},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Traditional target detection techniques have been developed using measurements from the object acquired by optical systems that are only able to measure its intensity, losing its optical phase information. The optical phase, for instance, allows describing the shape and depth of an object. This work proposes a target identification methodology that operates over measurements acquired through an optical system that collects coded diffraction patterns (CDP). In contrast to traditional detection techniques, the proposed methodology is able to incorporate the optical phase information of an object as a discriminant in the target detection task. The proposed methodology consists of two steps: first, an estimation of the scene from the acquired CDP is accomplished, second, a scanning procedure with a reference pattern over the estimated scene is performed. Numerical results show that the phase information can be used as an identification discriminant for target detection. Also, simulations demonstrate that the proposed methodology is able to identify a target under highly noisy scenarios using one single snapshot with a success rate up of 84%. Furthermore, it is worth to mention that to the best of our knowledge this is the first methodology that uses the optical phase of an object as a target identification discriminant.},\n  keywords = {object detection;coded diffraction patterns;target detection techniques;optical system;optical phase information;acquired CDP;target identification discriminant;template matching;Optical diffraction;Optical variables measurement;Biomedical optical imaging;Correlation;Optical signal processing;Phase measurement;Optical imaging},\n  doi = {10.23919/EUSIPCO.2019.8903098},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533647.pdf},\n}\n\n
\n
\n\n\n
\n Traditional target detection techniques have been developed using measurements of the object acquired by optical systems that can only measure its intensity, losing its optical phase information. The optical phase, for instance, allows describing the shape and depth of an object. This work proposes a target identification methodology that operates over measurements acquired through an optical system that collects coded diffraction patterns (CDP). In contrast to traditional detection techniques, the proposed methodology is able to incorporate the optical phase information of an object as a discriminant in the target detection task. The proposed methodology consists of two steps: first, an estimation of the scene from the acquired CDP is accomplished; second, a scanning procedure with a reference pattern over the estimated scene is performed. Numerical results show that the phase information can be used as an identification discriminant for target detection. Also, simulations demonstrate that the proposed methodology is able to identify a target under highly noisy scenarios using one single snapshot, with a success rate of up to 84%. Furthermore, it is worth mentioning that, to the best of our knowledge, this is the first methodology that uses the optical phase of an object as a target identification discriminant.\n
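The second step, scanning a reference pattern over the estimated scene, resembles classical normalized cross-correlation template matching. A sketch on a real-valued scene (the paper's phase-based discriminant and CDP recovery are not reproduced; the function name is illustrative):

```python
import numpy as np

def scan_template(scene, template):
    """Slide a template over a 2-D scene and return the (row, col) of the
    highest zero-mean normalized cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for i in range(scene.shape[0] - th + 1):
        for j in range(scene.shape[1] - tw + 1):
            patch = scene[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * np.linalg.norm(t)
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

rng = np.random.default_rng(3)
scene = rng.random((12, 12))
template = scene[5:8, 2:5].copy()       # plant the target at (5, 2)
```

An exact copy of a patch scores a correlation of 1 at its true location, so the scan recovers where the reference pattern sits.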
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n WGANSing: A Multi-Voice Singing Voice Synthesizer Based on the Wasserstein-GAN.\n \n \n \n \n\n\n \n Chandna, P.; Blaauw, M.; Bonada, J.; and Gómez, E.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8903099,\n  author = {P. Chandna and M. Blaauw and J. Bonada and E. Gómez},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {WGANSing: A Multi-Voice Singing Voice Synthesizer Based on the Wasserstein-GAN},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We present a deep neural network based singing voice synthesizer, inspired by the Deep Convolutions Generative Adversarial Networks (DCGAN) architecture and optimized using the Wasserstein-GAN algorithm. We use vocoder parameters for acoustic modelling, to separate the influence of pitch and timbre. This facilitates the modelling of the large variability of pitch in the singing voice. Our network takes a block of consecutive frame-wise linguistic and fundamental frequency features, along with global singer identity as input and outputs vocoder features, corresponding to the block of features. This block-wise approach, along with the training methodology allows us to model temporal dependencies within the features of the input block. For inference, sequential blocks are concatenated using an overlap-add procedure. We show that the performance of our model is competitive with regards to the state-of-the-art and the original sample using objective metrics and a subjective listening test. We also present examples of the synthesis on a supplementary website and the source code via GitHub.},\n  keywords = {acoustic signal processing;audio signal processing;convolutional neural nets;feature extraction;optimisation;speech processing;speech synthesis;vocoders;multivoice singing voice synthesizer;Wasserstein-GAN algorithm;vocoder parameters;acoustic modelling;consecutive frame-wise linguistic feature;global singer identity;block-wise approach;model temporal dependencies;sequential blocks;WGANSing;deep neural network based singing voice synthesizer;deep convolutions generative adversarial networks architecture;DCGAN;fundamental frequency features;overlap-add procedure;subjective listening test;source code;GitHub;supplementary Website;Vocoders;Gallium nitride;Generators;Generative adversarial networks;Adaptation models;Acoustics;Training;Wasserstein-GAN;DCGAN;WORLD vocoder;Singing Voice Synthesis;Block-wise Predictions},\n  doi = {10.23919/EUSIPCO.2019.8903099},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529235.pdf},\n}\n\n
\n
\n\n\n
\n We present a deep neural network based singing voice synthesizer, inspired by the Deep Convolutions Generative Adversarial Networks (DCGAN) architecture and optimized using the Wasserstein-GAN algorithm. We use vocoder parameters for acoustic modelling, to separate the influence of pitch and timbre. This facilitates the modelling of the large variability of pitch in the singing voice. Our network takes a block of consecutive frame-wise linguistic and fundamental frequency features, along with global singer identity as input and outputs vocoder features, corresponding to the block of features. This block-wise approach, along with the training methodology allows us to model temporal dependencies within the features of the input block. For inference, sequential blocks are concatenated using an overlap-add procedure. We show that the performance of our model is competitive with regards to the state-of-the-art and the original sample using objective metrics and a subjective listening test. We also present examples of the synthesis on a supplementary website and the source code via GitHub.\n
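The inference-time concatenation of sequential blocks can be illustrated with a generic crossfaded overlap-add routine. This is not the WGANSing implementation (which operates on multi-dimensional vocoder feature frames, not scalars); block length, hop size, and the linear fade are illustrative assumptions.

```python
import numpy as np

def overlap_add(blocks, hop):
    """Concatenate fixed-size blocks with a linear crossfade over the
    (block_len - hop)-sample overlap, normalizing by the fade weights."""
    block_len = len(blocks[0])
    out = np.zeros(hop * (len(blocks) - 1) + block_len)
    norm = np.zeros_like(out)
    fade = np.ones(block_len)
    overlap = block_len - hop
    if overlap > 0:
        ramp = np.linspace(0.0, 1.0, overlap)
        fade[:overlap] = ramp            # fade in
        fade[-overlap:] = ramp[::-1]     # fade out
    for k, b in enumerate(blocks):
        out[k * hop : k * hop + block_len] += fade * b
        norm[k * hop : k * hop + block_len] += fade
    return out / np.maximum(norm, 1e-12)

# four constant blocks of length 8, 50% overlap
blocks = [np.full(8, 3.0) for _ in range(4)]
y = overlap_add(blocks, hop=4)
```

When consecutive blocks agree on their overlap, the crossfade weights sum to one and the interior of the output reproduces the block values exactly.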
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hyperspectral complex domain denoising.\n \n \n \n \n\n\n \n Katkovnik, V.; Shevkunov, I.; and Egiazarian, K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8903100,\n  author = {V. Katkovnik and I. Shevkunov and K. Egiazarian},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Hyperspectral complex domain denoising},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We consider hyperspectral complex domain imaging from hyperspectral complex-valued noisy observations. The proposed algorithm is based on singular value decomposition (SVD) of observations and complex domain block-matching 3D (CDBM3D) filtering in optimized SVD eigenspace. Simulation experiments demonstrate high efficiency of the proposed complex domain joint filtering of hyperspectral data in comparison with CDBM3D filtering of separate 2D slices of hyperspectral cubes as well as with respect to joint real domain independent phase/amplitude filtering this kind of data.},\n  keywords = {filtering theory;image denoising;singular value decomposition;hyperspectral complex domain denoising;hyperspectral complex domain imaging;noisy observations;optimized SVD eigenspace;complex domain joint filtering;hyperspectral data;CDBM3D filtering;hyperspectral cubes;complex domain block-matching 3D filtering;Signal processing algorithms;Two dimensional displays;Three-dimensional displays;Hyperspectral imaging;Noise reduction;Thresholding (Imaging);Hyperspectral imaging;singular value decomposition;sparse representation;noise filtering;noise in imaging systems},\n  doi = {10.23919/EUSIPCO.2019.8903100},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531490.pdf},\n}\n\n
\n
\n\n\n
\n We consider hyperspectral complex domain imaging from hyperspectral complex-valued noisy observations. The proposed algorithm is based on singular value decomposition (SVD) of the observations and complex domain block-matching 3D (CDBM3D) filtering in an optimized SVD eigenspace. Simulation experiments demonstrate the high efficiency of the proposed complex domain joint filtering of hyperspectral data in comparison with CDBM3D filtering of separate 2D slices of hyperspectral cubes, as well as with respect to joint real-domain independent phase/amplitude filtering of this kind of data.\n
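The SVD eigenspace idea can be sketched as a low-rank projection along the spectral axis of a cube; the CDBM3D filtering stage applied inside that eigenspace is omitted here, and all shapes, names, and the synthetic rank-2 data are illustrative assumptions.

```python
import numpy as np

def svd_spectral_denoise(cube, rank):
    """Project a hyperspectral cube (H, W, bands) onto its top `rank`
    spectral singular vectors (low-rank approximation along the band axis)."""
    H, W, B = cube.shape
    Y = cube.reshape(-1, B)                         # pixels x bands
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Y_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # truncated reconstruction
    return Y_hat.reshape(H, W, B)

rng = np.random.default_rng(2)
spectra = rng.standard_normal((2, 16))   # two endmember spectra
abund = rng.random((8, 8, 2))            # per-pixel abundances
clean = abund @ spectra                  # rank-2 cube, shape (8, 8, 16)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
den = svd_spectral_denoise(noisy, 2)
```

Projecting onto the two dominant spectral directions discards most of the noise energy spread over the remaining dimensions, so the reconstruction lands closer to the clean cube than the noisy one.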
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Compressive Digital Receiver: First Results on Sensitivity, Dynamic Range and Instantaneous Bandwidth Measurements.\n \n \n \n \n\n\n \n Korucu, A. B.; Alp, Y. K.; Gok, G.; and Arikan, O.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8903101,\n  author = {A. B. Korucu and Y. K. Alp and G. Gok and O. Arikan},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Compressive Digital Receiver: First Results on Sensitivity, Dynamic Range and Instantaneous Bandwidth Measurements},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this work, sensitivity, one/two-signal dynamic range and instantaneous bandwidth measurement results of the recently developed Compressive Digital Receiver (CDR) hardware for Electronic Support Measures (ESM) applications, will be reported for the first time. Developed CDR is a compressive sensing based sub-Nyquist sampling receiver, which can monitor 2.25 GHz bandwidth instantaneously by using four ADCs (Analog-to-Digital Receiver) each of which has a sampling rate of 250 MHz. All the digital processing blocks of the CDR are implemented in Field Programmable Gate Array (FPGA) and they work in real time. It is observed that the sensitivity and dynamic range of the CDR changes with respect to input signal frequency. For 2.25 GHz bandwidth, the best and worst sensitivity values of the CDR are reported as -62 dBm and -41 dBm, respectively. One-signal dynamic range of CDR is measured as at least 60 dB for the whole band. The best and worst values of the two-signal dynamic range are observed as 42 dB and 20 dB, respectively.},\n  keywords = {analogue-digital conversion;compressed sensing;field programmable gate arrays;receivers;signal sampling;ADCs;FPGA;compressive sensing based sub-Nyquist sampling receiver;ESM applications;instantaneous bandwidth measurement;compressive digital receiver hardware;two-signal dynamic range values;one-signal dynamic range;worst sensitivity values;input signal frequency;CDR;Field Programmable Gate Array;digital processing blocks;sampling rate;Analog-to-Digital Receiver;electronic support measure applications;instantaneous bandwidth measurements;frequency 250.0 MHz;bandwidth 2.25 GHz;Bandwidth;Dynamic range;Field programmable gate arrays;Sensitivity;Frequency modulation;Hardware;Receivers;compressive sensing;digital receiver;sub-Nyquist sampling;sensitivity;one/two-signal dynamic range;instantaneous bandwidth},\n  doi = {10.23919/EUSIPCO.2019.8903101},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533548.pdf},\n}\n\n
\n
\n\n\n
\n In this work, sensitivity, one/two-signal dynamic range, and instantaneous bandwidth measurement results of the recently developed Compressive Digital Receiver (CDR) hardware for Electronic Support Measures (ESM) applications are reported for the first time. The developed CDR is a compressive sensing based sub-Nyquist sampling receiver, which can monitor 2.25 GHz of bandwidth instantaneously by using four ADCs (Analog-to-Digital Converters), each of which has a sampling rate of 250 MHz. All the digital processing blocks of the CDR are implemented in a Field Programmable Gate Array (FPGA) and work in real time. It is observed that the sensitivity and dynamic range of the CDR change with respect to the input signal frequency. For the 2.25 GHz bandwidth, the best and worst sensitivity values of the CDR are reported as -62 dBm and -41 dBm, respectively. The one-signal dynamic range of the CDR is measured as at least 60 dB for the whole band. The best and worst two-signal dynamic range values are observed as 42 dB and 20 dB, respectively.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Tracking Recurring Patterns in Time Series Using Dynamic Time Warping.\n \n \n \n \n\n\n \n van der Vlist , R.; Taal, C.; and Heusdens, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n
@InProceedings{8903102,\n  author = {R. {van der Vlist} and C. Taal and R. Heusdens},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Tracking Recurring Patterns in Time Series Using Dynamic Time Warping},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Dynamic time warping (DTW) is a distance measure to compare time series that exhibit similar patterns. In this paper, we will show how the warping path of the DTW algorithm can be interpreted, and a framework is proposed to extend the DTW algorithm. Using this framework, we will show how the dynamic programming structure of the DTW algorithm can be used to track repeating patterns in time series.},\n  keywords = {dynamic programming;pattern matching;time series;DTW algorithm;dynamic programming structure;repeating patterns;time series;dynamic time warping;distance measure;warping path;Signal processing algorithms;Time series analysis;Heuristic algorithms;Indexes;Time-frequency analysis;Europe;Signal processing;dynamic programming;dynamic time warping;time series analysis},\n  doi = {10.23919/EUSIPCO.2019.8903102},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533373.pdf},\n}\n\n
\n
\n\n\n
\n Dynamic time warping (DTW) is a distance measure to compare time series that exhibit similar patterns. In this paper, we will show how the warping path of the DTW algorithm can be interpreted, and a framework is proposed to extend the DTW algorithm. Using this framework, we will show how the dynamic programming structure of the DTW algorithm can be used to track repeating patterns in time series.\n
\n\n\n
\n\n\n
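The abstract above describes interpreting and extending the DTW warping path. The underlying dynamic program — a cumulative-cost table filled by the standard DTW recurrence, with the warping path recovered by backtracking — can be sketched as follows (a generic textbook implementation for illustration, not the authors' extension; the function name and the absolute-difference distance are assumptions):

```python
import numpy as np

def dtw(x, y, dist=lambda a, b: abs(a - b)):
    """Classic DTW between 1-D sequences x and y.
    Returns the cumulative alignment cost and the warping path."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    # Fill the cumulative-cost table with the DTW recurrence.
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(x[i - 1], y[j - 1])
            D[i, j] = c + min(D[i - 1, j],      # insertion
                              D[i, j - 1],      # deletion
                              D[i - 1, j - 1])  # match
    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```

The returned path is the sequence of index pairs aligning the two series; it is this path whose structure the paper proposes to interpret and exploit for tracking recurring patterns.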
\n \n\n \n \n \n \n \n \n Sampling Rate and Bits Per Sample Tradeoff for Cloud MIMO Radar Target Detection.\n \n \n \n \n\n\n \n Wang, Z.; He, Q.; and Blum, R. S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SamplingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903103,\n  author = {Z. Wang and Q. He and R. S. Blum},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Sampling Rate and Bits Per Sample Tradeoff for Cloud MIMO Radar Target Detection},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, target detection is studied for a cloud multiple-input multiple-output (MIMO) radar system, where each receiver communicates with a fusion center (FC) through a backhaul network. To reduce communication burden, local measurements at each receiver are quantized before they are sent to the FC. Under a bitrate constraint for each local sensor, we derive the detection probability of the cloud radar and analyze effects of the sampling rate and bits per sample on the detection performance. The quantizer output is initially modeled using direct analysis (DA), and then the Gaussian quantization error approximation (GQEA) method is employed to facilitate theoretical analysis. We verify that these two methods lead to close enough detection performance for large enough number of bits per sample. The tradeoff between the sampling rate and bits per sample is presented analytically and numerically.},\n  keywords = {MIMO radar;object detection;probability;quantisation (signal);radar detection;radar receivers;sampling rate;sample tradeoff;cloud MIMO radar target detection;cloud multiple-input multiple-output radar system;fusion center;backhaul network;local measurements;local sensor;detection probability;cloud radar;detection performance;quantizer output;Gaussian quantization error approximation method;Quantization (signal);Bit rate;Cloud computing;Radar;Receivers;Transmitters;Object detection;MIMO radar;quantization;detection;sampling rate},\n  doi = {10.23919/EUSIPCO.2019.8903103},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529227.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, target detection is studied for a cloud multiple-input multiple-output (MIMO) radar system, where each receiver communicates with a fusion center (FC) through a backhaul network. To reduce the communication burden, local measurements at each receiver are quantized before they are sent to the FC. Under a bitrate constraint for each local sensor, we derive the detection probability of the cloud radar and analyze the effects of the sampling rate and bits per sample on the detection performance. The quantizer output is initially modeled using direct analysis (DA), and then the Gaussian quantization error approximation (GQEA) method is employed to facilitate theoretical analysis. We verify that the two methods lead to very similar detection performance when the number of bits per sample is sufficiently large. The tradeoff between the sampling rate and bits per sample is presented analytically and numerically.\n
\n\n\n
\n\n\n
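For intuition on the approximation named in the abstract above: a Gaussian quantization error approximation treats the error of a fine uniform quantizer as additive noise with variance Δ²/12, where Δ is the quantizer step size. A minimal numerical check of that variance rule (a generic sketch, not the paper's receiver model; the `xmax` range, mid-tread rounding, and Gaussian input are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_quantize(x, bits, xmax=4.0):
    """Mid-tread uniform quantizer on [-xmax, xmax]
    with step size delta = 2 * xmax / 2**bits."""
    delta = 2 * xmax / (2 ** bits)
    return np.clip(np.round(x / delta) * delta, -xmax, xmax)

# Empirical error variance vs. the delta**2 / 12 rule,
# for a Gaussian input at several bit depths.
x = rng.standard_normal(100_000)
for bits in (2, 4, 8):
    err = uniform_quantize(x, bits) - x
    delta = 8.0 / (2 ** bits)
    print(bits, err.var(), delta ** 2 / 12)
```

As the number of bits per sample grows, the empirical error variance approaches Δ²/12, which is consistent with the paper's observation that the DA and GQEA analyses agree for a large enough number of bits per sample.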
\n \n\n \n \n \n \n \n \n Neural Network Aided Computation of Generalized Spatial Modulation Capacity.\n \n \n \n \n\n\n \n Tato, A.; Mosquera, C.; Henarejos, P.; and Pérez-Neira, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"NeuralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903104,\n  author = {A. Tato and C. Mosquera and P. Henarejos and A. Pérez-Neira},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Neural Network Aided Computation of Generalized Spatial Modulation Capacity},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Generalized Spatial Modulation (GSM) is being considered for future high-capacity and energy efficient terrestrial networks. A variant such as Polarized Modulation (PMod) has also a role in Dual Polarization Mobile Satellite Systems. The implementation of adaptive GSM systems requires fast methods to evaluate the channel dependent GSM capacity, which amounts to solve multi-dimensional integrals without closed-form solutions. For this purpose, we propose the use of a Multilayer Feedforward Neural Network and an associated feature selection algorithm. The resulting method is highly accurate and with much lower complexity than alternative numerical methods.},\n  keywords = {channel capacity;feedforward neural nets;mobile satellite communication;modulation;telecommunication computing;telecommunication power management;wireless channels;channel dependent GSM capacity;Multilayer Feedforward Neural Network;Neural Network aided computation;generalized Spatial Modulation capacity;energy efficient terrestrial networks;Dual Polarization Mobile Satellite Systems;adaptive GSM systems;Polarized Modulation;GSM;Modulation;Neural networks;Channel capacity;Feature extraction;Transmitting antennas;Index Modulations;Generalized Spatial Modulation;Polarized Modulation;Machine Learning;Multilayer Feedforward Neural Network},\n  doi = {10.23919/EUSIPCO.2019.8903104},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532184.pdf},\n}\n\n
\n
\n\n\n
\n Generalized Spatial Modulation (GSM) is being considered for future high-capacity and energy-efficient terrestrial networks. A variant of it, Polarized Modulation (PMod), also has a role in Dual Polarization Mobile Satellite Systems. The implementation of adaptive GSM systems requires fast methods to evaluate the channel-dependent GSM capacity, which amounts to solving multi-dimensional integrals without closed-form solutions. For this purpose, we propose the use of a Multilayer Feedforward Neural Network and an associated feature selection algorithm. The resulting method is highly accurate and has much lower complexity than alternative numerical methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Audio-Visual Speech Enhancement using Hierarchical Extreme Learning Machine.\n \n \n \n \n\n\n \n Hussain, T.; Tsao, Y.; Wang, H. -.; Wang, J. -.; Siniscalchi, S. M.; and Liao, W. -.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Audio-VisualPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903105,\n  author = {T. Hussain and Y. Tsao and H. -M. Wang and J. -C. Wang and S. M. Siniscalchi and W. -H. Liao},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Audio-Visual Speech Enhancement using Hierarchical Extreme Learning Machine},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Recently, the hierarchical extreme learning machine (HELM) model has been utilized for speech enhancement (SE) and demonstrated promising performance, especially when the amount of training data is limited and the system does not support heavy computations. Based on the success of audio-onlybased systems, termed AHELM, we propose a novel audio-visual HELM-based SE system, termed AVHELM that integrates the audio and visual information to confrontate the unseen nonstationery noise problem at low SNR levels to attain improved SE performance. The experimental results demonstrate that AVHELM can yield satisfactory enhancement performance with a limited amount of training data and outperforms AHELM in terms of three standardized objective measures under matched and mismatched testing conditions, confirming the effectiveness of incorporating visual information into the HELM-based SE system.},\n  keywords = {acoustic signal processing;audio-visual systems;filtering theory;learning (artificial intelligence);signal denoising;speech enhancement;audio-visual speech enhancement;hierarchical extreme learning machine model;audio-onlybased systems;termed AVHELM;audio information;visual information;unseen nonstationery noise problem;low SNR levels;improved SE performance;satisfactory enhancement performance;audio-visual HELM-based SE system;Noise measurement;Visualization;Speech enhancement;Testing;Signal to noise ratio;Training;Training data;Speech Enhancement;Hierarchical Extreme Learning Machine;Audio-Visual;Multi-Modal},\n  doi = {10.23919/EUSIPCO.2019.8903105},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = 
{https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534159.pdf},\n}\n\n
\n
\n\n\n
\n Recently, the hierarchical extreme learning machine (HELM) model has been utilized for speech enhancement (SE) and has demonstrated promising performance, especially when the amount of training data is limited and the system does not support heavy computation. Building on the success of audio-only systems, termed AHELM, we propose a novel audio-visual HELM-based SE system, termed AVHELM, that integrates audio and visual information to confront the unseen non-stationary noise problem at low SNR levels and attain improved SE performance. The experimental results demonstrate that AVHELM can yield satisfactory enhancement performance with a limited amount of training data and outperforms AHELM in terms of three standardized objective measures under matched and mismatched testing conditions, confirming the effectiveness of incorporating visual information into the HELM-based SE system.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multimodal Image Super-resolution via Deep Unfolding with Side Information.\n \n \n \n \n\n\n \n Marivani, I.; Tsiligianni, E.; Cornelis, B.; and Deligiannis, N.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"MultimodalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903106,\n  author = {I. Marivani and E. Tsiligianni and B. Cornelis and N. Deligiannis},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Multimodal Image Super-resolution via Deep Unfolding with Side Information},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Deep learning methods have been successfully applied to various computer vision tasks. However, existing neural network architectures do not per se incorporate domain knowledge about the addressed problem, thus, understanding what the model has learned is an open research topic. In this paper, we rely on the unfolding of an iterative algorithm for sparse approximation with side information, and design a deep learning architecture for multimodal image super-resolution that incorporates sparse priors and effectively utilizes information from another image modality. We develop two deep models performing reconstruction of a high-resolution image of a target image modality from its low-resolution variant with the aid of a high-resolution image from a second modality. We apply the proposed models to super-resolve near-infrared images using as side information high-resolution RGB images. 
Experimental results demonstrate the superior performance of the proposed models against state-of-the-art methods including unimodal and multimodal approaches.},\n  keywords = {approximation theory;computer vision;image colour analysis;image reconstruction;image resolution;infrared imaging;iterative methods;learning (artificial intelligence);neural net architecture;multimodal image super-resolution;deep unfolding;computer vision tasks;neural network architectures;domain knowledge;deep learning architecture;deep models;high-resolution image;target image modality;low-resolution image;near-infrared images;high-resolution RGB images;deep learning methods;iterative algorithm;sparse approximation;image reconstruction;Deep learning;Neural networks;Sparse representation;Computational modeling;Computer architecture;Image super-resolution;sparse coding;multimodal deep learning;designing neural networks},\n  doi = {10.23919/EUSIPCO.2019.8903106},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533836.pdf},\n}\n\n
\n
\n\n\n
\n Deep learning methods have been successfully applied to various computer vision tasks. However, existing neural network architectures do not per se incorporate domain knowledge about the addressed problem; thus, understanding what the model has learned is an open research topic. In this paper, we rely on the unfolding of an iterative algorithm for sparse approximation with side information, and design a deep learning architecture for multimodal image super-resolution that incorporates sparse priors and effectively utilizes information from another image modality. We develop two deep models that reconstruct a high-resolution image of a target image modality from its low-resolution variant with the aid of a high-resolution image from a second modality. We apply the proposed models to super-resolve near-infrared images using high-resolution RGB images as side information. Experimental results demonstrate the superior performance of the proposed models against state-of-the-art methods, including unimodal and multimodal approaches.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automated forensic ink determination in handwritten documents by clustering.\n \n \n \n \n\n\n \n Kalbitz, M.; and Vielhauer, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AutomatedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903107,\n  author = {M. Kalbitz and C. Vielhauer},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Automated forensic ink determination in handwritten documents by clustering},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Even in today`s highly digitalized world, the use of handwriting is still widely in use for legal documents such as testaments, contracts, bank cheques or professional certificates. Thus, forgery analysis of handwriting still poses challenges for criminalistics forensic document examiners and one of the investigation questions is, if a questioned document has been written with more than one ink. If so, this may indicate a forgery by manipulations of a possible counterfeiter after production of the genuine original. By means of chemical analysis, it is possible today to identify an ink with almost 100% certainty. However, this process is manual, tedious and needs an initial suspicion by a human expert, that more than one ink was applied on a specific area on the document. Further, most chemical approaches are destructive and limited to very small areas. To improve and to automate the initial investigation on the use of multiple inks, this work proposes a pattern recognition approach based on signal processing, feature extraction and classification by data clustering, which is based on spectral imaging, acquired in almost non-destructive and contact-less manner. The goal is to support forensic examiners by an automated digital detection of regions, which have been written using different ink, which they then can further examine. For experimental evaluation, a benchmark is introduced to evaluate the accuracy of detection results on a specifically created test set, which is also presented. 
Test results indicate that the best clustering in our investigation has been achieved by the expectation maximisation (EM) approach, with a correct ink cover rate of above 80% for the first and 73% for the second ink in average. Even more relevant for forensic experts is the observation, that false detections occurred in less than 1% of the cases in average. Future work will include extension of data sets and automatic analysis and parameter adjustments in the clustering process.},\n  keywords = {document image processing;feature extraction;handwriting recognition;handwritten character recognition;image forensics;ink;pattern clustering;automated forensic ink determination;handwritten documents;handwriting;legal documents;bank cheques;professional certificates;forgery analysis;criminalistics forensic document examiners;investigation questions;questioned document;possible counterfeiter;chemical analysis;initial suspicion;chemical approaches;initial investigation;multiple inks;pattern recognition approach;data clustering;contact-less;forensic examiners;different ink;specifically created test set;correct ink cover rate;forensic experts;automatic analysis;parameter adjustments;clustering process;Ink;Signal processing;Forensics;Wavelength measurement;Feature extraction;Benchmark testing;Image color analysis;Forensics;Pattern clustering;Handwriting recognition;Forgery;Spectral analysis;spectroscopy;ink},\n  doi = {10.23919/EUSIPCO.2019.8903107},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529926.pdf},\n}\n\n
\n
\n\n\n
\n Even in today's highly digitalized world, handwriting is still widely used for legal documents such as testaments, contracts, bank cheques, and professional certificates. Thus, forgery analysis of handwriting still poses challenges for criminalistic forensic document examiners, and one of the investigation questions is whether a questioned document has been written with more than one ink. If so, this may indicate a forgery by manipulations of a possible counterfeiter after production of the genuine original. By means of chemical analysis, it is possible today to identify an ink with almost 100% certainty. However, this process is manual and tedious, and it requires an initial suspicion by a human expert that more than one ink was applied to a specific area of the document. Further, most chemical approaches are destructive and limited to very small areas. To improve and automate the initial investigation into the use of multiple inks, this work proposes a pattern recognition approach based on signal processing, feature extraction, and classification by data clustering, built on spectral imaging acquired in an almost non-destructive and contact-less manner. The goal is to support forensic examiners with an automated digital detection of regions written using different inks, which they can then examine further. For experimental evaluation, a benchmark is introduced to evaluate the accuracy of detection results on a specifically created test set, which is also presented. Test results indicate that the best clustering in our investigation was achieved by the expectation maximisation (EM) approach, with a correct ink cover rate above 80% for the first and 73% for the second ink on average. Even more relevant for forensic experts is the observation that false detections occurred in less than 1% of cases on average. Future work will include the extension of data sets as well as automatic analysis and parameter adjustment in the clustering process.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Semiparametric Stochastic CRB for DOA Estimation in Elliptical Data Model.\n \n \n \n \n\n\n \n Fortunati, S.; Gini, F.; and Greco, M. S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SemiparametricPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903108,\n  author = {S. Fortunati and F. Gini and M. S. Greco},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Semiparametric Stochastic CRB for DOA Estimation in Elliptical Data Model},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper aims at presenting a numerical investigation of the statistical efficiency of the MUSIC (with different covariance matrix estimates) and the IAA-APES Direction of Arrivals (DOAs) estimation algorithms under a general Complex Elliptically Symmetric (CES) distributed measurement model. Specifically, the density generator of the CES-distributed data snapshots is considered as an additional, infinite-dimensional, nuisance parameter. To assess the efficiency in the considered semiparametric setting, the Semiparametric Stochastic Cramér-Rao Bound (SSCRB) is adopted as lower bound for the Mean Square Error (MSE) of the DOA estimators.},\n  keywords = {covariance matrices;direction-of-arrival estimation;estimation theory;mean square error methods;statistical distributions;stochastic processes;IAA-APES Direction;CES-distributed data snapshots;infinite-dimensional parameter;nuisance parameter;DOA estimators;elliptical data model;semiparametric stochastic CRB;semiparametric stochastic Cramér-Rao bound;complex elliptically symmetric distributed measurement model;arrivals estimation algorithms;mean square error;MSE;SSCRB;Estimation;Covariance matrices;Direction-of-arrival estimation;Generators;Multiple signal classification;Stochastic processes;Sensor arrays;DOA estimation;Semiparametric model;Semiparametric Stochastic Cramér-Rao Bound},\n  doi = {10.23919/EUSIPCO.2019.8903108},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532667.pdf},\n}\n\n
\n
\n\n\n
\n This paper aims at presenting a numerical investigation of the statistical efficiency of the MUSIC (with different covariance matrix estimates) and the IAA-APES Direction of Arrivals (DOAs) estimation algorithms under a general Complex Elliptically Symmetric (CES) distributed measurement model. Specifically, the density generator of the CES-distributed data snapshots is considered as an additional, infinite-dimensional, nuisance parameter. To assess the efficiency in the considered semiparametric setting, the Semiparametric Stochastic Cramér-Rao Bound (SSCRB) is adopted as lower bound for the Mean Square Error (MSE) of the DOA estimators.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Volumetric Surface-guided Graph-based Segmentation of Cardiac Adipose Tissues on Fat-Water MR Images.\n \n \n \n \n\n\n \n Fallah, F.; Armanious, K.; Yang, B.; and Bamberg, F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"VolumetricPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903109,\n  author = {F. Fallah and K. Armanious and B. Yang and F. Bamberg},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Volumetric Surface-guided Graph-based Segmentation of Cardiac Adipose Tissues on Fat-Water MR Images},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Different endocrine roles of cardiac adipose tissues motivate the analysis of their volumes and compositions on large cohort image data sets. This, however, demands reliable robust methods for automated segmentations as manual segmentations are tedious costly and unreproducible. Besides the effects of noise and partial volumes, segmentation of these adipose tissues on clinical medical images is challenged by their similar intensities and features and undetectability of their boundaries. In this paper, we present a feature- and prior-based random walker graph that additionally incorporates a diffusion-based susceptible infected recovered model to guide the segmentation by the curvatures of the surface of the segmented cardiac structures. This method is trained and evaluated for segmenting epicardial, pericardial, and perivascular adipose tissues on volumetric fat-water magnetic resonance images. 
The obtained results demonstrate its utility for large cohort investigation of these adipose compartments and also any other segmentation task on multichannel images.},\n  keywords = {biological tissues;biomedical MRI;cardiology;fats;image segmentation;medical image processing;physiological models;random processes;volumetric surface-guided graph-based segmentation;cardiac adipose tissues;fat-water MR images;cohort image data sets;partial volumes;clinical medical images;random walker graph;segmented cardiac structures;epicardial adipose tissues;perivascular adipose tissues;fat-water magnetic resonance images;adipose compartments;multichannel images;pericardial adipose tissues;diffusion-based susceptible infected recovered model;endocrine roles;Image segmentation;Fats;Training;Signal processing;Motion segmentation;Histograms;Myocardium;Random Walker Algorithm;Feature and Prior Learning;Diffusion-based Susceptible Infected Recovered Model;Surface Curvature;Cardiac Adipose Tissues},\n  doi = {10.23919/EUSIPCO.2019.8903109},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529417.pdf},\n}\n\n
\n
\n\n\n
\n The different endocrine roles of cardiac adipose tissues motivate the analysis of their volumes and compositions on large cohort image data sets. This, however, demands reliable, robust methods for automated segmentation, as manual segmentations are tedious, costly, and unreproducible. Besides the effects of noise and partial volumes, segmentation of these adipose tissues on clinical medical images is challenged by their similar intensities and features and by the undetectability of their boundaries. In this paper, we present a feature- and prior-based random walker graph that additionally incorporates a diffusion-based susceptible-infected-recovered model to guide the segmentation by the curvatures of the surface of the segmented cardiac structures. This method is trained and evaluated for segmenting epicardial, pericardial, and perivascular adipose tissues on volumetric fat-water magnetic resonance images. The obtained results demonstrate its utility for large cohort investigations of these adipose compartments, as well as for other segmentation tasks on multichannel images.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Weighted Subset Selection for Fast SVM Training.\n \n \n \n \n\n\n \n Mourad, S.; Tewfik, A.; and Vikalo, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"WeightedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903110,\n  author = {S. Mourad and A. Tewfik and H. Vikalo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Weighted Subset Selection for Fast SVM Training},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We propose a data reduction method that improves the speed of training the support vector machine (SVM) algorithm. In particular, we study the problem of finding a weighted subset of training data to efficiently train an SVM while providing performance guarantees. Relying on approximate nearest neighborhood properties, the proposed method selects relevant points and employs the concept of maximal independent set to achieve desired coverage of the training dataset. Performance guarantees are provided, demonstrating that the proposed approach enables faster SVM training with minimal effect on the accuracy. Empirical results demonstrate that the proposed method outperforms existing weighted subset selection techniques for SVM training.},\n  keywords = {data reduction;learning (artificial intelligence);support vector machines;approximate nearest neighborhood properties;maximal independent set;training dataset;weighted subset selection techniques;SVM training;data reduction method;support vector machine algorithm;Support vector machines;Training;Approximation algorithms;Signal processing algorithms;Kernel;Europe;Signal processing;SVM;nearest neighbors;independent set},\n  doi = {10.23919/EUSIPCO.2019.8903110},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533395.pdf},\n}\n\n
\n
\n\n\n
\n We propose a data reduction method that improves the speed of training the support vector machine (SVM) algorithm. In particular, we study the problem of finding a weighted subset of training data to efficiently train an SVM while providing performance guarantees. Relying on approximate nearest neighborhood properties, the proposed method selects relevant points and employs the concept of maximal independent set to achieve desired coverage of the training dataset. Performance guarantees are provided, demonstrating that the proposed approach enables faster SVM training with minimal effect on the accuracy. Empirical results demonstrate that the proposed method outperforms existing weighted subset selection techniques for SVM training.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Shuffled Bits in the Low-Detectability Regime.\n \n \n \n \n\n\n \n Marano, S.; and Willett, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ShuffledPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903111,\n  author = {S. Marano and P. Willett},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Shuffled Bits in the Low-Detectability Regime},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We consider a decision problem in which data are unordered (unlabeled). Recent studies of this problem provide a complete asymptotic characterization of the decision performance for large data size, which is the solution of a convex optimization problem. While this is fully satisfactory from a numerical viewpoint, limited insight is offered because a closed-form explicit expression for the decision performance is, in general, not available. For binary observations and the challenging regime of low-detectability, we derive an extremely simple analytical solution, investigate its properties and discuss the obtained physical insights.},\n  keywords = {convex programming;signal detection;low-detectability regime;decision problem;complete asymptotic characterization;decision performance;convex optimization problem;closed-form explicit expression;Signal processing;Task analysis;Standards;Signal processing algorithms;Europe;Network analyzers;Random variables;Unlabeled detection;Unordered data;Permuted observations;Error exponent function},\n  doi = {10.23919/EUSIPCO.2019.8903111},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529162.pdf},\n}\n\n
\n
\n\n\n
\n We consider a decision problem in which data are unordered (unlabeled). Recent studies of this problem provide a complete asymptotic characterization of the decision performance for large data size, which is the solution of a convex optimization problem. While this is fully satisfactory from a numerical viewpoint, limited insight is offered because a closed-form explicit expression for the decision performance is, in general, not available. For binary observations and the challenging regime of low-detectability, we derive an extremely simple analytical solution, investigate its properties and discuss the obtained physical insights.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Curriculum-based Teacher Ensemble for Robust Neural Network Distillation.\n \n \n \n \n\n\n \n Panagiotatos, G.; Passalis, N.; Iosifidis, A.; Gabbouj, M.; and Tefas, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Curriculum-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903112,\n  author = {G. Panagiotatos and N. Passalis and A. Iosifidis and M. Gabbouj and A. Tefas},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Curriculum-based Teacher Ensemble for Robust Neural Network Distillation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Neural network distillation is used for transferring the knowledge from a complex teacher network into a lightweight student network, improving in this way the performance of the student network. However, neural distillation does not always lead to consistent results, with several factors affecting the efficiency of the knowledge distillation process. In this paper it is experimentally demonstrated that the selected teacher can indeed have a significant effect on knowledge transfer. To overcome this limitation, we propose a curriculum-based teacher ensemble that allows for performing robust and efficient knowledge distillation. The proposed method is motivated by the way that humans learn through a curriculum, as well as supported by recent findings that hints to the existence of critical learning periods in neural networks. The effectiveness of the proposed approach, compared to various distillation variants, is demonstrated using three image datasets and different network architectures.},\n  keywords = {computer aided instruction;neural nets;teaching;curriculum-based teacher ensemble;robust neural network distillation;complex teacher network;lightweight student network;neural distillation;knowledge distillation process;knowledge transfer;neural networks;Training;Neural networks;Entropy;Knowledge transfer;Knowledge engineering;Europe;neural network distillation;knowledge transfer;curriculum-based distillation;lightweight deep learning},\n  doi = {10.23919/EUSIPCO.2019.8903112},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533390.pdf},\n}\n\n
\n
\n\n\n
\n Neural network distillation is used for transferring the knowledge from a complex teacher network into a lightweight student network, thereby improving the performance of the student network. However, neural distillation does not always lead to consistent results, with several factors affecting the efficiency of the knowledge distillation process. In this paper, it is experimentally demonstrated that the selected teacher can indeed have a significant effect on knowledge transfer. To overcome this limitation, we propose a curriculum-based teacher ensemble that allows for performing robust and efficient knowledge distillation. The proposed method is motivated by the way that humans learn through a curriculum, as well as supported by recent findings that hint at the existence of critical learning periods in neural networks. The effectiveness of the proposed approach, compared to various distillation variants, is demonstrated using three image datasets and different network architectures.\n
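The temperature-softened teacher targets at the core of knowledge distillation can be sketched as follows. This is a generic textbook formulation, not the paper's curriculum-based ensemble; the function names and the temperature value are illustrative.

```python
import numpy as np

def soften(logits, T):
    """Temperature-softened softmax: a higher T yields a flatter target
    distribution that exposes more of the teacher's inter-class structure."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T):
    """Cross-entropy between the softened teacher and student outputs."""
    p = soften(teacher_logits, T)
    q = soften(student_logits, T)
    return -np.sum(p * np.log(q + 1e-12))
```

By Gibbs' inequality the loss is minimized exactly when the student reproduces the teacher's softened distribution, which is why the choice of teacher matters.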
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Iterative Wiener Filtering for Deconvolution with Ringing Artifact Suppression.\n \n \n \n \n\n\n \n Šroubek, F.; Kerepecký, T.; and Kamenický, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"IterativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903114,\n  author = {F. {Šroubek} and T. Kerepecký and J. Kamenický},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Iterative Wiener Filtering for Deconvolution with Ringing Artifact Suppression},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Sensor and lens blur degrade images acquired by digital cameras. Simple and fast removal of blur using linear filtering, such as Wiener filter, produces results that are not acceptable in most of the cases due to ringing artifacts close to image borders and around edges in the image. More elaborate deconvolution methods with non-smooth regularization, such as total variation, provide superior performance with less artifacts, however at a price of increased computational cost. We consider the alternating directions method of multipliers, which is a popular choice to solve such non-smooth convex problems, and show that individual steps of the method can be decomposed to simple filtering and element-wise operations. Filtering is performed with two sets of filters, called restoration and update filters, which are learned for the given type of blur and noise level with two different learning methods. The proposed deconvolution algorithm is implemented in the spatial domain and can be easily extended to include other restoration tasks such as demosaicing and super-resolution. Experiments demonstrate performance of the algorithm with respect to the size of learned filters, number of iterations, noise level and type of blur.},\n  keywords = {convex programming;deconvolution;image filtering;image restoration;iterative methods;learning (artificial intelligence);Wiener filters;linear filtering;image borders;deconvolution methods;nonsmooth regularization;nonsmooth convex problems;alternating directions method of multipliers;element-wise operations;iterative Wiener filtering;lens;digital cameras;learned filter methods;ringing artifact suppression;image demosaicing;Image restoration;Convolution;Deconvolution;Signal processing algorithms;Optimization;Europe;Wiener filter;LMMSE;deconvolution;total variation;ADMM;non-smooth optimization},\n  doi = {10.23919/EUSIPCO.2019.8903114},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531236.pdf},\n}\n\n
\n
\n\n\n
\n Sensor and lens blur degrade images acquired by digital cameras. Simple and fast removal of blur using linear filtering, such as the Wiener filter, produces results that are not acceptable in most cases due to ringing artifacts close to image borders and around edges in the image. More elaborate deconvolution methods with non-smooth regularization, such as total variation, provide superior performance with fewer artifacts, albeit at the price of increased computational cost. We consider the alternating directions method of multipliers, which is a popular choice for solving such non-smooth convex problems, and show that the individual steps of the method can be decomposed into simple filtering and element-wise operations. Filtering is performed with two sets of filters, called restoration and update filters, which are learned for the given type of blur and noise level with two different learning methods. The proposed deconvolution algorithm is implemented in the spatial domain and can be easily extended to include other restoration tasks such as demosaicing and super-resolution. Experiments demonstrate the performance of the algorithm with respect to the size of the learned filters, the number of iterations, the noise level and the type of blur.\n
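For context, the baseline (non-iterative) Wiener deconvolution that the abstract contrasts against can be sketched as below. This is the generic frequency-domain textbook version, not the authors' learned-filter ADMM scheme; the noise-to-signal ratio `nsr` is an assumed scalar tuning parameter.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr):
    """Frequency-domain Wiener deconvolution.

    blurred : 2-D degraded image
    psf     : point-spread function, same shape as `blurred`,
              with its center at index [0, 0]
    nsr     : scalar noise-to-signal power ratio
    """
    H = np.fft.fft2(psf)
    # Wiener filter: conj(H) / (|H|^2 + NSR)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
```

The ringing the abstract mentions arises because circular convolution via the FFT wraps the image at its borders and because `W` amplifies frequencies where `|H|` is small.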
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Finding Common Image Semantics for Urban Perceived Safety Based on Pairwise Comparisons.\n \n \n \n \n\n\n \n Costa, G.; Soares, C.; and Marques, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"FindingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903115,\n  author = {G. Costa and C. Soares and M. Marques},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Finding Common Image Semantics for Urban Perceived Safety Based on Pairwise Comparisons},\n  year = {2019},\n  pages = {1-5},\n  abstract = {What influences people's perception of safety in an urban environment? Does everyone perceive safety the same way or do different people look for different contents in an image, safety-wise? We present a user analysis on a crowd-sourced dataset that contains pairwise comparisons regarding the perceived safety of street imagery from different municipalities in the greater Lisbon area, Portugal. We use state-of-the-art semantic segmentation to extract the contents of images and cluster different people according to what they perceive as safe. Then, we study semantic classes and analyze clusters of users for semantic elements appearing in images classified as safer (or more dangerous). The results show that clusters share a lot of similarities. Our analysis evidences that, for users with more pairwise comparisons, there is only one group, while spurious groupings appear when users contribute less. This result emphasizes that a pairwise image comparison dataset potentiates agreement of users in perceptual tasks, for moderate comparison data size.},\n  keywords = {domestic safety;geophysical image processing;image classification;image segmentation;public administration;town and country planning;common image semantics;urban perceived safety;pairwise comparisons;urban environment;user analysis;crowd-sourced dataset;street imagery;municipalities;greater Lisbon area;semantic classes;semantic elements;pairwise image comparison dataset;semantic segmentation;comparison data size;Semantics;Safety;Image segmentation;Europe;Signal processing;Couplings;Robots;Urban perceived safety;Semantic urban segmentation;Crowd-sourced perceptual dataset;Pairwise comparisons.},\n  doi = {10.23919/EUSIPCO.2019.8903115},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534013.pdf},\n}\n\n
\n
\n\n\n
\n What influences people's perception of safety in an urban environment? Does everyone perceive safety the same way or do different people look for different contents in an image, safety-wise? We present a user analysis on a crowd-sourced dataset that contains pairwise comparisons regarding the perceived safety of street imagery from different municipalities in the greater Lisbon area, Portugal. We use state-of-the-art semantic segmentation to extract the contents of images and cluster different people according to what they perceive as safe. Then, we study semantic classes and analyze clusters of users for semantic elements appearing in images classified as safer (or more dangerous). The results show that the clusters share many similarities. Our analysis shows that, for users with more pairwise comparisons, there is only one group, while spurious groupings appear when users contribute less. This result emphasizes that a pairwise image comparison dataset fosters agreement among users in perceptual tasks, given a moderate amount of comparison data.\n
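Rankings from pairwise comparisons of this kind are commonly fitted with a Bradley-Terry model. The sketch below is an assumed illustration of that standard approach (not the paper's pipeline), using Hunter's minorization-maximization update.

```python
import numpy as np

def bradley_terry(wins, iters=100):
    """Fit Bradley-Terry scores from a pairwise win-count matrix.

    wins[i, j] = number of times item i was preferred over item j.
    Uses Hunter's MM update; returns scores normalized to sum to 1,
    so a higher score would mean 'perceived as safer' in this setting.
    """
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    p = np.ones(n) / n
    games = wins + wins.T                    # total comparisons per pair
    for _ in range(iters):
        denom = np.array([
            sum(games[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            for i in range(n)
        ])
        p = wins.sum(axis=1) / denom         # MM update: total wins / weighted games
        p /= p.sum()
    return p
```

The update is guaranteed to converge when the comparison graph is strongly connected, i.e. every item has both wins and losses linking it to the rest.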
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Connections between Reassigned Spectrum and Least Squares Estimation for Sinusoidal Models.\n \n \n \n \n\n\n \n Pantazis, Y.; Tsiaras, V.; and Stylianou, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ConnectionsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903116,\n  author = {Y. Pantazis and V. Tsiaras and Y. Stylianou},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Connections between Reassigned Spectrum and Least Squares Estimation for Sinusoidal Models},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The parameter estimation of sinusoidal signals, especially the frequency estimation is for decades a very challenging problem. Among the various frequency estimation methods, this paper compares and connects the reassigned spectrum and an iterative, nonlinear Least Squares method referred to as iQHM (iterative Quasi Harmonic Model). Interestingly, there are subtle connections between these two -seemingly different- iterative methods both in frequency as well in time domain. Moreover, inspired by the optimal performance of reassigned spectrum for mono-component sinusoidal signals, a variant of iQHM is proposed. The new method improves the performance of the original iQHM approach in frequency estimation by increasing the region of convergence by 40% on average.},\n  keywords = {frequency estimation;iterative methods;least squares approximations;signal processing;reassigned spectrum;iQHM;least squares estimation;sinusoidal models;parameter estimation;frequency estimation methods;nonlinear least squares method;iterative quasiharmonic model;Frequency estimation;Time-frequency analysis;Iterative methods;Spectrogram;Europe;Convergence;Sinusoidal models;Reassigned spectrum;Quasiharmonic model;Nonlinear least squares},\n  doi = {10.23919/EUSIPCO.2019.8903116},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533074.pdf},\n}\n\n
\n
\n\n\n
\n The parameter estimation of sinusoidal signals, and especially frequency estimation, has been a challenging problem for decades. Among the various frequency estimation methods, this paper compares and connects the reassigned spectrum and an iterative, nonlinear Least Squares method referred to as iQHM (iterative Quasi Harmonic Model). Interestingly, there are subtle connections between these two seemingly different iterative methods, both in the frequency and in the time domain. Moreover, inspired by the optimal performance of the reassigned spectrum for mono-component sinusoidal signals, a variant of iQHM is proposed. The new method improves the performance of the original iQHM approach in frequency estimation by increasing the region of convergence by 40% on average.\n
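The quasi-harmonic iteration can be sketched for a single complex sinusoid. This toy version makes several simplifying assumptions (noiseless signal, rectangular window, centered time index): it fits the QHM basis by least squares and applies the standard frequency correction Im(b·conj(a))/|a|², which follows from the first-order expansion (a + n·b)·e^{jωn} ≈ a·e^{j(ω+Δω)n} with b ≈ jΔω·a.

```python
import numpy as np

def qhm_refine(x, omega0, iters=20):
    """Iteratively refine a frequency estimate with the quasi-harmonic
    model x(n) ~ (a + n*b) * exp(j*omega*n): fit a, b by least squares,
    then correct omega by Im(b * conj(a)) / |a|^2."""
    N = len(x)
    n = np.arange(N) - N // 2            # centered time index
    omega = omega0
    for _ in range(iters):
        E = np.exp(1j * omega * n)
        basis = np.stack([E, n * E], axis=1)
        (a, b), *_ = np.linalg.lstsq(basis, x, rcond=None)
        omega += np.imag(b * np.conj(a)) / np.abs(a) ** 2
    return omega
```

The region-of-convergence issue the abstract discusses shows up here directly: the linearization behind the correction term only holds while the initial frequency error times the window length stays small.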
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-Rank Data Matrix Recovery With Missing Values And Faulty Sensors.\n \n \n \n \n\n\n \n López-Valcarce, R.; and Sala-Alvarez, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Low-RankPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903117,\n  author = {R. López-Valcarce and J. Sala-Alvarez},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Low-Rank Data Matrix Recovery With Missing Values And Faulty Sensors},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In practice, data gathered by wireless sensor networks often belongs in a low-dimensional subspace, but it can present missing as well as corrupted values due to sensor malfunctioning and/or malicious attacks. We study the problem of Maximum Likelihood estimation of the low-rank factors of the underlying structure in such situation, and develop an Expectation-Maximization algorithm to this purpose, together with an effective initialization scheme. The proposed method outperforms previous schemes based on an initial faulty sensor identification stage, and is competitive in terms of complexity and performance with convex optimization-based matrix completion approaches.},\n  keywords = {convex programming;expectation-maximisation algorithm;matrix algebra;maximum likelihood estimation;security of data;wireless sensor networks;maximum likelihood estimation;low-rank factors;expectation-maximization algorithm;initialization scheme;initial faulty sensor identification stage;convex optimization-based matrix completion approaches;low-rank data matrix recovery;faulty sensors;wireless sensor networks;low-dimensional subspace;corrupted values;malicious attacks;sensor malfunctioning;Sensors;Wireless sensor networks;Maximum likelihood estimation;Europe;Signal processing;Data models;Distributed databases;Low-rank approximation;matrix completion;outliers;faulty sensors;wireless sensor networks},\n  doi = {10.23919/EUSIPCO.2019.8903117},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531560.pdf},\n}\n\n
\n
\n\n\n
\n In practice, data gathered by wireless sensor networks often lies in a low-dimensional subspace, but it can present missing as well as corrupted values due to sensor malfunctioning and/or malicious attacks. We study the problem of Maximum Likelihood estimation of the low-rank factors of the underlying structure in such situations, and develop an Expectation-Maximization algorithm for this purpose, together with an effective initialization scheme. The proposed method outperforms previous schemes based on an initial faulty-sensor identification stage, and is competitive in terms of complexity and performance with convex optimization-based matrix completion approaches.\n
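A minimal alternating-least-squares sketch conveys the low-rank recovery idea. Note the paper's actual method is an EM algorithm that also models faulty sensors; this simplified stand-in handles only missing entries, and all names and defaults are illustrative.

```python
import numpy as np

def als_complete(Y, mask, rank, iters=50, reg=1e-6):
    """Low-rank completion Y ~ A @ B.T by alternating least squares,
    fitting only the observed entries (mask == True)."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((m, rank))
    B = rng.standard_normal((n, rank))
    for _ in range(iters):
        for i in range(m):                      # update each row factor
            Bi = B[mask[i]]
            A[i] = np.linalg.solve(Bi.T @ Bi + reg * np.eye(rank),
                                   Bi.T @ Y[i, mask[i]])
        for j in range(n):                      # update each column factor
            Aj = A[mask[:, j]]
            B[j] = np.linalg.solve(Aj.T @ Aj + reg * np.eye(rank),
                                   Aj.T @ Y[mask[:, j], j])
    return A @ B.T
```

Each subproblem is a small ridge-regularized least squares solve, which is why the overall scheme stays cheap compared to nuclear-norm convex formulations.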
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modelling Mismatch and Noise Statistics Uncertainty in Linear MMSE Estimation.\n \n \n \n \n\n\n \n Vilà-Valls, J.; Chaumette, E.; Vincent, F.; and Closas, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ModellingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903118,\n  author = {J. {Vilà-Valls} and E. Chaumette and F. Vincent and P. Closas},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Modelling Mismatch and Noise Statistics Uncertainty in Linear MMSE Estimation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Standard filtering techniques, such as Kalman, sigma-point or particle filters, assume a perfect knowledge of the system. This implies that both process and measurement functions, and the system noise statistics, are assumed known and fit the reality. Regarding the noise statistics this involves knowing not only the distributions but also their parameters. In this contribution, we explore the impact of system model mismatch and uncertain noise statistics parameters into linear minimum mean square error estimators for linear discrete state-space models. Illustrative examples are shown to support the discussion.},\n  keywords = {discrete time systems;filtering theory;Kalman filters;least mean squares methods;state-space methods;system noise statistics;system model mismatch;uncertain noise statistics parameters;linear minimum;square error estimators;linear discrete state-space models;modelling mismatch;noise statistics uncertainty;linear MMSE estimation;standard filtering techniques;Kalman filters;sigma-point filters;particle filters;measurement functions;Lead;Standards;Noise measurement;Covariance matrices;Estimation error;Uncertainty;Wiener filtering;Kalman filtering;model mismatch;system noise uncertainty;robustness},\n  doi = {10.23919/EUSIPCO.2019.8903118},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530825.pdf},\n}\n\n
\n
\n\n\n
\n Standard filtering techniques, such as Kalman, sigma-point or particle filters, assume perfect knowledge of the system. This implies that both the process and measurement functions, and the system noise statistics, are assumed to be known and to match reality. Regarding the noise statistics, this means knowing not only the distributions but also their parameters. In this contribution, we explore the impact of system model mismatch and uncertain noise statistics parameters on linear minimum mean square error estimators for linear discrete state-space models. Illustrative examples are shown to support the discussion.\n
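The kind of effect studied can be reproduced with a small Monte-Carlo sketch: a linear MMSE estimator built with a mis-specified noise covariance loses mean-square accuracy relative to the matched one. The dimensions, covariances and seed below are arbitrary illustration choices, not values from the paper.

```python
import numpy as np

def lmmse(Y, A, P, R):
    """LMMSE estimate of x from y = A x + n, with x ~ N(0, P), n ~ N(0, R).
    Y holds one observation vector per column."""
    K = P @ A.T @ np.linalg.inv(A @ P @ A.T + R)
    return K @ Y

rng = np.random.default_rng(0)
nx, ny, m = 3, 4, 20000
A = rng.standard_normal((ny, nx))
P = np.eye(nx)                        # prior covariance of x
R_true = 0.5 * np.eye(ny)             # actual noise covariance
R_wrong = 10.0 * np.eye(ny)           # badly mis-specified noise covariance

X = rng.standard_normal((nx, m))
Noise = np.sqrt(0.5) * rng.standard_normal((ny, m))
Y = A @ X + Noise
mse_matched = np.mean((lmmse(Y, A, P, R_true) - X) ** 2)
mse_mismatch = np.mean((lmmse(Y, A, P, R_wrong) - X) ** 2)
```

Overstating the noise power makes the gain `K` too conservative, so the estimator leans on the prior and discards informative measurements.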
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Compressive Chirp Transform for Estimation of Chirp Parameters.\n \n \n \n \n\n\n \n Irkhis, L. A. A.; and Shaw, A. K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CompressivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903119,\n  author = {L. A. A. Irkhis and A. K. Shaw},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Compressive Chirp Transform for Estimation of Chirp Parameters},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper develops a new algorithm for estimating the parameters of multiple chirp signals in noise. The proposed method uses Compressive Sensing (CS) formulation of the Discrete Chirp Fourier Transform (DCFT) basis to achieve superior estimator performance. Unlike Fourier or time-frequency based approaches, DCFT incorporates the underlying chirp signal model parameters in formulating the transform [1] -[4]. In this work a CS formulation exploits the parametric DCFT basis for fast recovery to achieve highly accurate parameter estimation results in polynomial time using Orthogonal Matching Pursuit (OMP). The performance of the proposed algorithm has been compared with existing methods via simulations.},\n  keywords = {approximation theory;chirp modulation;compressed sensing;discrete Fourier transforms;iterative methods;parameter estimation;polynomials;time-frequency analysis;Compressive Chirp Transform;Chirp parameters;multiple chirp signals;Compressive Sensing formulation;Discrete Chirp Fourier Transform basis;superior estimator performance;time-frequency;underlying chirp signal model parameters;CS formulation;parametric DCFT basis;highly accurate parameter estimation results;Chirp;Matching pursuit algorithms;Estimation;Frequency estimation;Discrete Fourier transforms;Chirp Parameter Estimation;Compressive Sensing;Discrete Chirp Fourier Transform (DCFT)},\n  doi = {10.23919/EUSIPCO.2019.8903119},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533969.pdf},\n}\n\n
\n
\n\n\n
\n This paper develops a new algorithm for estimating the parameters of multiple chirp signals in noise. The proposed method uses a Compressive Sensing (CS) formulation of the Discrete Chirp Fourier Transform (DCFT) basis to achieve superior estimator performance. Unlike Fourier or time-frequency based approaches, the DCFT incorporates the underlying chirp signal model parameters in formulating the transform [1]-[4]. In this work, a CS formulation exploits the parametric DCFT basis for fast recovery, achieving highly accurate parameter estimation in polynomial time using Orthogonal Matching Pursuit (OMP). The performance of the proposed algorithm has been compared with existing methods via simulations.\n
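The recovery step can be illustrated with generic OMP over a small dictionary of discrete chirps. This is a schematic stand-in for the idea, not the authors' DCFT-based formulation; the grid sizes and atom definition are assumptions for the example.

```python
import numpy as np

def chirp_atom(N, f, c):
    """Unit-norm discrete chirp exp(j*2*pi*(f*n + c*n^2)/N)."""
    n = np.arange(N)
    return np.exp(2j * np.pi * (f * n + c * n ** 2) / N) / np.sqrt(N)

def omp(y, atoms, k):
    """Orthogonal Matching Pursuit: greedily pick the atom most
    correlated with the residual, then re-fit all picked atoms."""
    r = y.astype(complex)
    support, coef = [], None
    for _ in range(k):
        support.append(int(np.argmax([abs(np.vdot(a, r)) for a in atoms])))
        A = np.stack([atoms[i] for i in support], axis=1)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef
    return support, coef
```

Because distinct discrete chirps are nearly orthogonal, a superposition of a few chirps yields a sparse representation in this dictionary, which is what makes the greedy selection reliable.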
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Tracking Theory of Adaptive Filters with Input-Output Sampling Rate Offset.\n \n \n \n \n\n\n \n Thüne, P.; and Enzner, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"TrackingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903120,\n  author = {P. Thüne and G. Enzner},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Tracking Theory of Adaptive Filters with Input-Output Sampling Rate Offset},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Adaptive filter theory for supervised identification of linear time-invariant (LTI) systems is an established and fruitful discipline in digital signal processing. In certain applications, however, the input and output signals of an LTI system may be asynchronously sampled at slightly different sampling frequencies resulting in a small input-output sampling rate offset (IO-SRO). In this contribution, we argue that an LTI system with IO-SRO is seen as a linear time-variant system by the adaptive filter. By conducting a convergence-in-the-mean analysis, we propose a model to capture the influence of IO-SRO on the tracking properties of the adaptive filter. Eventually, we validate our model by reconstruction of the IO-SRO based on the proposed model and the observable adaptive filter behavior. The model-based IO-SRO reconstruction turns out to be highly precise and robust against noise and excitation bandwidth limitations when compared to a state-of-the-art method.},\n  keywords = {adaptive filters;filtering theory;linear systems;signal reconstruction;signal sampling;input-output sampling rate offset;adaptive filter theory;digital signal processing;LTI system;linear time-variant system;model-based IO-SRO reconstruction;tracking theory;convergence-in-the-mean analysis;Linear systems;Adaptation models;Adaptive systems;Analog-digital conversion;Analytical models;Digital-analog conversion;Europe;Adaptive filter theory;supervised system identification;sampling rate offset;asynchronous sampling},\n  doi = {10.23919/EUSIPCO.2019.8903120},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528231.pdf},\n}\n\n
\n
\n\n\n
\n Adaptive filter theory for supervised identification of linear time-invariant (LTI) systems is an established and fruitful discipline in digital signal processing. In certain applications, however, the input and output signals of an LTI system may be asynchronously sampled at slightly different sampling frequencies, resulting in a small input-output sampling rate offset (IO-SRO). In this contribution, we argue that an LTI system with IO-SRO is seen as a linear time-variant system by the adaptive filter. By conducting a convergence-in-the-mean analysis, we propose a model to capture the influence of IO-SRO on the tracking properties of the adaptive filter. Finally, we validate our model by reconstructing the IO-SRO based on the proposed model and the observable adaptive filter behavior. The model-based IO-SRO reconstruction turns out to be highly precise and robust against noise and excitation bandwidth limitations when compared to a state-of-the-art method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-Microphone Speaker Separation based on Deep DOA Estimation.\n \n \n \n \n\n\n \n Chazan, S. E.; Hammer, H.; Hazan, G.; Goldberger, J.; and Gannot, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-MicrophonePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903121,\n  author = {S. E. Chazan and H. Hammer and G. Hazan and J. Goldberger and S. Gannot},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-Microphone Speaker Separation based on Deep DOA Estimation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we present a multi-microphone speech separation algorithm based on masking inferred from the speakers direction of arrival (DOA). According to the W-disjoint orthogonality property of speech signals, each time-frequency (TF) bin is dominated by a single speaker. This TF bin can therefore be associated with a single DOA. In our procedure, we apply a deep neural network (DNN) with a U-net architecture to infer the DOA of each TF bin from a concatenated set of the spectra of the microphone signals. Separation is obtained by multiplying the reference microphone by the masks associated with the different DOAs. Our proposed deep direction estimation for speech separation (DDESS) method is inspired by the recent advances in deep clustering methods. Unlike already established methods that apply the clustering in a latent embedded space, in our approach the embedding is closely associated with the spatial information, as manifested by the different speakers' directions of arrival.},\n  keywords = {blind source separation;direction-of-arrival estimation;microphone arrays;neural nets;pattern clustering;speaker recognition;speech enhancement;time-frequency analysis;speakers direction;W-disjoint orthogonality property;speech signals;time-frequency bin;single speaker;TF bin;single DOA;deep neural network;U-net architecture;microphone signals;reference microphone;masks;deep direction estimation;deep clustering methods;multimicrophone speaker separation;deep DOA;multimicrophone speech separation algorithm;latent embedded space;Direction-of-arrival estimation;Microphones;Estimation;Training;Time-frequency analysis;Task analysis;Acoustics},\n  doi = {10.23919/EUSIPCO.2019.8903121},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533515.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present a multi-microphone speech separation algorithm based on masking inferred from the speakers' direction of arrival (DOA). According to the W-disjoint orthogonality property of speech signals, each time-frequency (TF) bin is dominated by a single speaker. Each TF bin can therefore be associated with a single DOA. In our procedure, we apply a deep neural network (DNN) with a U-net architecture to infer the DOA of each TF bin from a concatenated set of the spectra of the microphone signals. Separation is obtained by multiplying the reference microphone by the masks associated with the different DOAs. Our proposed deep direction estimation for speech separation (DDESS) method is inspired by recent advances in deep clustering methods. Unlike established methods that apply the clustering in a latent embedding space, in our approach the embedding is closely associated with the spatial information, as manifested by the different speakers' directions of arrival.\n
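The masking step itself reduces to an element-wise multiplication. The sketch below assumes the per-bin DOA labels are already available (in the paper they come from the U-net) and simply applies binary masks to the reference microphone's STFT; the function name is illustrative.

```python
import numpy as np

def separate_by_doa(ref_stft, doa_labels, doas):
    """W-disjoint-orthogonality separation: each time-frequency bin is
    assigned to exactly one DOA, and each source is recovered by masking
    the reference-microphone STFT with that DOA's binary mask."""
    return {d: ref_stft * (doa_labels == d) for d in doas}
```

Because the masks partition the TF plane, the separated spectrograms sum back to the reference-microphone spectrogram exactly.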
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Analysing Deep Learning-Spectral Envelope Prediction Methods for Singing Synthesis.\n \n \n \n \n\n\n \n Bous, F.; and Roebel, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AnalysingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903122,\n  author = {F. Bous and A. Roebel},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Analysing Deep Learning-Spectral Envelope Prediction Methods for Singing Synthesis},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We conduct an investigation on various hyperparameters regarding neural networks used to generate spectral envelopes for singing synthesis. Two perceptive tests, where the first compares two models directly and the other ranks models with a mean opinion score, are performed. With these tests we show that when learning to predict spectral envelopes, 2d-convolutions are superior over previously proposed 1d-convolutions and that predicting multiple frames in an iterated fashion during training is superior over injecting noise to the input data. An experimental investigation whether learning to predict a probability distribution vs. single samples was performed but turned out to be inconclusive. A network architecture is proposed that incorporates the improvements which we found to be useful and we show in our experiments that this network produces better results than other stat-of-the-art methods.},\n  keywords = {convolutional neural nets;learning (artificial intelligence);prediction theory;spectral analysis;speech coding;speech processing;statistical distributions;deep learning-spectral envelope prediction methods;singing synthesis;neural networks;spectral envelopes;perceptive tests;mean opinion score;network architecture;2d-convolution;1d-convolutions;probability distribution;Convolution;Two dimensional displays;Time-frequency analysis;Neural networks;Vocoders;Probability distribution;Europe;Singing synthesis;spectral envelopes;deep learning},\n  doi = {10.23919/EUSIPCO.2019.8903122},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534081.pdf},\n}\n\n
\n
\n\n\n
\n We conduct an investigation on various hyperparameters regarding neural networks used to generate spectral envelopes for singing synthesis. Two perceptive tests, where the first compares two models directly and the other ranks models with a mean opinion score, are performed. With these tests we show that when learning to predict spectral envelopes, 2d-convolutions are superior over previously proposed 1d-convolutions and that predicting multiple frames in an iterated fashion during training is superior over injecting noise to the input data. An experimental investigation whether learning to predict a probability distribution vs. single samples was performed but turned out to be inconclusive. A network architecture is proposed that incorporates the improvements which we found to be useful and we show in our experiments that this network produces better results than other state-of-the-art methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Diffusion Maps Particle Filter.\n \n \n \n \n\n\n \n Forster, L.; Schmidt, A.; Kellermann, W.; Shnitzer, T.; and Talmon, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DiffusionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903123,\n  author = {L. Forster and A. Schmidt and W. Kellermann and T. Shnitzer and R. Talmon},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Diffusion Maps Particle Filter},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose a new nonparametric filtering framework combining manifold learning and particle filtering. Diffusion maps, a nonparametric manifold learning method, is applied to obtain a parametric state-space model, inferring the state coordinates, their dynamics, as well as the function that links the state to the noisy observations, in a purely data-driven manner. Then, based on the inferred parametric model, a particle filter is devised, facilitating the processing of high-dimensional noisy observations without rigid prior model assumptions. We demonstrate the performance of the proposed approach in a simulation of a challenging tracking problem with noisy observations and a hidden model.},\n  keywords = {learning (artificial intelligence);particle filtering (numerical methods);state-space methods;nonparametric filtering framework;particle filtering;nonparametric manifold learning method;parametric state-space model;inferred parametric model;high-dimensional noisy observations;diffusion map particle filter;Eigenvalues and eigenfunctions;Covariance matrices;Mathematical model;Noise measurement;Dynamical systems;Parametric statistics;Manifold learning;nonparametric filtering;non-linear filtering;sequential Markov chain Monte Carlo},\n  doi = {10.23919/EUSIPCO.2019.8903123},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531897.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a new nonparametric filtering framework combining manifold learning and particle filtering. Diffusion maps, a nonparametric manifold learning method, is applied to obtain a parametric state-space model, inferring the state coordinates, their dynamics, as well as the function that links the state to the noisy observations, in a purely data-driven manner. Then, based on the inferred parametric model, a particle filter is devised, facilitating the processing of high-dimensional noisy observations without rigid prior model assumptions. We demonstrate the performance of the proposed approach in a simulation of a challenging tracking problem with noisy observations and a hidden model.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Compensating for Object Variability in DNN–HMM Object-Centered Human Activity Recognition.\n \n \n \n\n\n \n Peng, Y.; Jančovič, P.; and Russell, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903124,\n  author = {Y. Peng and P. Jančovič and M. Russell},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Compensating for Object Variability in DNN–HMM Object-Centered Human Activity Recognition},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper describes a deep neural network - hidden Markov model (DNN-HMM) human activity recognition system based on instrumented objects and studies compensation strategies to deal with object variability. The sensors, comprising an accelerometer, gyroscope, magnetometer and force-sensitive resistors (FSRs), are packaged in a coaster attached to the base of an object, here a mug. Results are presented for recognition of actions involved in manipulating a mug. Evaluations are performed using over 24 hours of data recordings containing sequences of actions, labelled without time-stamp information. We demonstrate the importance of data alignments. While the DNN-HMM system achieved error rate below 0.1% for matched train-test conditions, this increased up to 26.5% for highly mismatched conditions. The error rate averaged over all conditions was 1.4% when using multi-condition training and decreased to 0.8% by employing feature augmentation. 
The use of FSR feature compensation, specific to weight variability, resulted in 0.24% error rate.},\n  keywords = {accelerometers;behavioural sciences computing;hidden Markov models;neural nets;object recognition;object variability;deep neural network;Markov model human activity recognition system;instrumented objects;studies compensation strategies;force-sensitive resistors;DNN-HMM system achieved error rate;matched train-test conditions;FSR feature compensation;DNN-HMM object-centered human activity recognition;specific to weight variability;Hidden Markov models;Training;Magnetic sensors;Testing;Instruments;Accelerometers;Action recognition;deep neural networks;hidden Markov models;DNN-HMM;sensors;instrumented objects;compensation;feature augmentation},\n  doi = {10.23919/EUSIPCO.2019.8903124},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n This paper describes a deep neural network - hidden Markov model (DNN-HMM) human activity recognition system based on instrumented objects and studies compensation strategies to deal with object variability. The sensors, comprising an accelerometer, gyroscope, magnetometer and force-sensitive resistors (FSRs), are packaged in a coaster attached to the base of an object, here a mug. Results are presented for recognition of actions involved in manipulating a mug. Evaluations are performed using over 24 hours of data recordings containing sequences of actions, labelled without time-stamp information. We demonstrate the importance of data alignments. While the DNN-HMM system achieved error rate below 0.1% for matched train-test conditions, this increased up to 26.5% for highly mismatched conditions. The error rate averaged over all conditions was 1.4% when using multi-condition training and decreased to 0.8% by employing feature augmentation. The use of FSR feature compensation, specific to weight variability, resulted in 0.24% error rate.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust and Responsive Acoustic Pairing of Devices Using Decorrelating Time-Frequency Modelling.\n \n \n \n \n\n\n \n Zarazaga, P. P.; Bäckström, T.; and Sigg, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903125,\n  author = {P. P. Zarazaga and T. Bäckström and S. Sigg},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Robust and Responsive Acoustic Pairing of Devices Using Decorrelating Time-Frequency Modelling},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Voice user interfaces have increased in popularity, as they enable natural interaction with different applications using one's voice. To improve their usability and audio quality, several devices could interact to provide a unified voice user interface. However, with devices cooperating and sharing voice-related information, user privacy may be at risk. Therefore, access management rules that preserve user privacy are important. State-of-the-art methods for acoustic pairing of devices provide fingerprinting based on the time-frequency representation of the acoustic signal and error-correction. We propose to use such acoustic fingerprinting to authorise devices which are acoustically close. We aim to obtain fingerprints of ambient audio adapted to the requirements of voice user interfaces. 
Our experiments show that the responsiveness and robustness is improved by combining overlapping windows and decorrelating transforms.},\n  keywords = {acoustic signal processing;audio signal processing;authorisation;data privacy;decorrelation;time-frequency analysis;user interfaces;decorrelating time-frequency modelling;voice user interfaces;natural interaction;audio quality;unified voice user interface;voice-related information;user privacy;access management rules;acoustic pairing;time-frequency representation;acoustic signal;acoustic fingerprinting;robustness;Acoustics;Signal to noise ratio;Delays;Decorrelation;Noise measurement;Time-frequency analysis;User interfaces;Voice User Interface;Acoustic Pairing;Audio Fingerprint;DCT},\n  doi = {10.23919/EUSIPCO.2019.8903125},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532472.pdf},\n}\n\n
\n
\n\n\n
\n Voice user interfaces have increased in popularity, as they enable natural interaction with different applications using one's voice. To improve their usability and audio quality, several devices could interact to provide a unified voice user interface. However, with devices cooperating and sharing voice-related information, user privacy may be at risk. Therefore, access management rules that preserve user privacy are important. State-of-the-art methods for acoustic pairing of devices provide fingerprinting based on the time-frequency representation of the acoustic signal and error-correction. We propose to use such acoustic fingerprinting to authorise devices which are acoustically close. We aim to obtain fingerprints of ambient audio adapted to the requirements of voice user interfaces. Our experiments show that the responsiveness and robustness is improved by combining overlapping windows and decorrelating transforms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Combining Evidences from Variable Teager Energy Source and Mel Cepstral Features for Classification of Normal vs. Pathological Voices.\n \n \n \n \n\n\n \n Patil, H. A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CombiningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903126,\n  author = {H. A. Patil},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Combining Evidences from Variable Teager Energy Source and Mel Cepstral Features for Classification of Normal vs. Pathological Voices},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, novel Variable length Teager Energy Operator (VTEO) based Mel frequency cepstral coefficients, namely, VTMFCC are proposed for automatic classification of normal vs. pathological voices. Experiments have been carried out using this proposed feature set, Mel Frequency Cepstral Coefficients (MFCC), and their score-level fusion. Classification was primarily performed using a discriminatively-trained 2nd order polynomial classifier on a subset of the MEEI database for a feature dimension of 12. The equal error rate (EER) on fusion was reduced by 3.2% compared to MFCC alone, which was used as the baseline. The classification accuracy was analyzed for different dimensions of feature vector. Furthermore, results obtained for the 2nd order classifier were compared with the results obtained from the 3rd order polynomial classifier for different feature dimensions. In addition, the effectiveness of dynamic features, in particular, delta, delta-delta, and shifted delta cepstral features have been investigated for this particular problem. 
It has been observed that the score-level fusion (with equal weights) of proposed feature set and state-of-the-art MFCC gave better classification performance than MFCC alone for various evaluation factors considered in this paper.},\n  keywords = {cepstral analysis;feature extraction;polynomials;signal classification;speech processing;speech recognition;feature vector;polynomial classifier;delta cepstral features;score-level fusion;MFCC;pathological voices;Mel frequency cepstral coefficients;automatic classification;equal error rate;variable Teager energy source;Mel frequency cepstral coefficient;Pathology;Databases;Training;Testing;Cepstrum;Pathological voice;nonlinearity;VTEO;glottal closure instant (GCI);VTMFCC;polynomial classifier},\n  doi = {10.23919/EUSIPCO.2019.8903126},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533778.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, novel Variable length Teager Energy Operator (VTEO) based Mel frequency cepstral coefficients, namely, VTMFCC are proposed for automatic classification of normal vs. pathological voices. Experiments have been carried out using this proposed feature set, Mel Frequency Cepstral Coefficients (MFCC), and their score-level fusion. Classification was primarily performed using a discriminatively-trained 2nd order polynomial classifier on a subset of the MEEI database for a feature dimension of 12. The equal error rate (EER) on fusion was reduced by 3.2% compared to MFCC alone, which was used as the baseline. The classification accuracy was analyzed for different dimensions of feature vector. Furthermore, results obtained for the 2nd order classifier were compared with the results obtained from the 3rd order polynomial classifier for different feature dimensions. In addition, the effectiveness of dynamic features, in particular, delta, delta-delta, and shifted delta cepstral features have been investigated for this particular problem. It has been observed that the score-level fusion (with equal weights) of proposed feature set and state-of-the-art MFCC gave better classification performance than MFCC alone for various evaluation factors considered in this paper.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Labeler-hot Detection of EEG Epileptic Transients.\n \n \n \n \n\n\n \n Czekaj, L.; Ziembla, W.; Jezierski, P.; Swiniarski, P.; Kolodziejak, A.; Ogniewski, P.; Niedbalski, P.; Jezierska, A.; and Wesierski, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Labeler-hotPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903127,\n  author = {L. Czekaj and W. Ziembla and P. Jezierski and P. Swiniarski and A. Kolodziejak and P. Ogniewski and P. Niedbalski and A. Jezierska and D. Wesierski},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Labeler-hot Detection of EEG Epileptic Transients},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Preventing early progression of epilepsy and so the severity of seizures requires effective diagnosis. Epileptic transients indicate the ability to develop seizures but humans overlook such brief events in an electroencephalogram (EEG), which compromises patient treatment. Traditionally, training of the EEG event detection algorithms has relied on ground truth labels, obtained from the consensus of the majority of labelers. In this work, we go beyond labeler consensus on EEG data. Our event descriptor integrates EEG signal features with one-hot encoded labeler category that is a key to improved generalization performance. Notably, boosted decision trees take advantage of singly-labeled but more varied training sets. Our quantitative experiments show the proposed labeler-hot epileptic event detector consistently outperforms a consensus-trained detector and maintains confidence bounds of the detection. 
The results on our infant EEG recordings suggest datasets can gain higher event variety faster and thus better performance by shifting available human effort from consensus-oriented to separate labeling when labels include both the event and the labeler category.},\n  keywords = {decision trees;electroencephalography;medical disorders;medical signal detection;medical signal processing;infant EEG recordings;EEG signal features;electroencephalogram;one-hot encoded labeler category;event descriptor;labeler consensus;ground truth labels;EEG event detection algorithms;patient treatment;seizures;effective diagnosis;EEG epileptic transients;labeler-hot detection;labeler-hot epileptic event detector;varied training sets;boosted decision trees;Electroencephalography;Training;Labeling;Microsoft Windows;Epilepsy;Detectors;Annotations},\n  doi = {10.23919/EUSIPCO.2019.8903127},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533380.pdf},\n}\n\n
\n
\n\n\n
\n Preventing early progression of epilepsy and so the severity of seizures requires effective diagnosis. Epileptic transients indicate the ability to develop seizures but humans overlook such brief events in an electroencephalogram (EEG), which compromises patient treatment. Traditionally, training of the EEG event detection algorithms has relied on ground truth labels, obtained from the consensus of the majority of labelers. In this work, we go beyond labeler consensus on EEG data. Our event descriptor integrates EEG signal features with one-hot encoded labeler category that is a key to improved generalization performance. Notably, boosted decision trees take advantage of singly-labeled but more varied training sets. Our quantitative experiments show the proposed labeler-hot epileptic event detector consistently outperforms a consensus-trained detector and maintains confidence bounds of the detection. The results on our infant EEG recordings suggest datasets can gain higher event variety faster and thus better performance by shifting available human effort from consensus-oriented to separate labeling when labels include both the event and the labeler category.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Lossless Image Coding Exploiting Local and Non-local Information via Probability Model Optimization.\n \n \n \n \n\n\n \n Unno, K.; Kameda, Y.; Matsuda, I.; Itoh, S.; and Naito, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"LosslessPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903128,\n  author = {K. Unno and Y. Kameda and I. Matsuda and S. Itoh and S. Naito},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Lossless Image Coding Exploiting Local and Non-local Information via Probability Model Optimization},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We previously proposed a lossless image coding method based on examples search and probability model optimization. In this paper, we improve coding efficiency of the method by introducing an adaptive prediction technique. Specifically, multiple affine predictors are trained pel-by-pel by using causal neighbor pels, and the predicted values obtained by those predictors are used for generating the probability model. Therefore, both non-local information by the examples search and local information by the adaptive prediction are used together in the probability modeling. Furthermore, an optimization method for the number of examples is also proposed in this paper. Experimental results show that the proposed method achieves better coding rates than the state-of-the-art lossless coding schemes.},\n  keywords = {image coding;optimisation;probability;nonlocal information;probability model optimization;lossless image coding method;adaptive prediction technique;multiple affine predictors;probability modeling;optimization method;pel-by-pel training;lossless coding schemes;Lossless coding;Example search;Affine prediction;Probability modeling;Quasi-Newton method},\n  doi = {10.23919/EUSIPCO.2019.8903128},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529561.pdf},\n}\n\n
\n
\n\n\n
\n We previously proposed a lossless image coding method based on examples search and probability model optimization. In this paper, we improve coding efficiency of the method by introducing an adaptive prediction technique. Specifically, multiple affine predictors are trained pel-by-pel by using causal neighbor pels, and the predicted values obtained by those predictors are used for generating the probability model. Therefore, both non-local information by the examples search and local information by the adaptive prediction are used together in the probability modeling. Furthermore, an optimization method for the number of examples is also proposed in this paper. Experimental results show that the proposed method achieves better coding rates than the state-of-the-art lossless coding schemes.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Novel Design Method for Digital FIR/IIR Filters Based on the Shuffle Frog-Leaping Algorithm.\n \n \n \n \n\n\n \n Jiménez-Galindo, D.; Casaseca-de-la-Higuera, P.; and San-José-Revuelta, L. M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903129,\n  author = {D. Jiménez-Galindo and P. Casaseca-de-la-Higuera and L. M. San-José-Revuelta},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Novel Design Method for Digital FIR/IIR Filters Based on the Shuffle Frog-Leaping Algorithm},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The design of both FIR and IIR digital filters is a multi-variable optimization problem, where traditional algorithms fail to obtain optimal solutions. A modified Shuffled Frog Leaping Algorithm (SFLA) is here proposed for the design of FIR and IIR discrete-time filters as close as possible to the desired filter frequency response. This algorithm can be considered a type of memetic algorithm. In this paper, simulations prove the obtained filters outperform those designed using the traditional bilinear Z transform (BZT) method with elliptic approximation. Besides, results are close to, and even slightly better, than those reported in recent bio-inspired approaches using algorithms such as particle swarm optimization (PSO), differential evolution (DE) and regularized global optimization (RGA).},\n  keywords = {digital filters;evolutionary computation;FIR filters;frequency response;IIR filters;particle swarm optimisation;multivariable optimization problem;optimal solutions;modified shuffled frog leaping algorithm;filter frequency response;memetic algorithm;differential evolution;global optimization;digital FIR filters;digital IIR filters;Signal processing algorithms;Finite impulse response filters;Memetics;Convergence;Approximation algorithms;Sociology;FIR design;IIR design;shuffled flog-leaping algorithm;memetic algorithm;bio-inspired algorithm},\n  doi = {10.23919/EUSIPCO.2019.8903129},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528418.pdf},\n}\n\n
\n
\n\n\n
\n The design of both FIR and IIR digital filters is a multi-variable optimization problem, where traditional algorithms fail to obtain optimal solutions. A modified Shuffled Frog Leaping Algorithm (SFLA) is here proposed for the design of FIR and IIR discrete-time filters as close as possible to the desired filter frequency response. This algorithm can be considered a type of memetic algorithm. In this paper, simulations prove the obtained filters outperform those designed using the traditional bilinear Z transform (BZT) method with elliptic approximation. Besides, results are close to, and even slightly better, than those reported in recent bio-inspired approaches using algorithms such as particle swarm optimization (PSO), differential evolution (DE) and regularized global optimization (RGA).\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n OFDM Spectral Precoding With Per-Subcarrier Distortion Constraints.\n \n \n \n \n\n\n \n Hussain, K.; and López-Valcarce, R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OfdmPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903130,\n  author = {K. Hussain and R. López-Valcarce},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {OFDM Spectral Precoding With Per-Subcarrier Distortion Constraints},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Spectral precoding is effective in reducing out-of-band radiation (OBR) in multicarrier systems, but it introduces distortion in the data, requiring a suitable decoder at the receiver. Thus, trading off OBR reduction, implementation complexity, and error rate is of paramount importance. We present a precoder design which minimizes OBR under constraints on the distortion at each individual data subcarrier. By judiciously choosing the distortion profile, the decoding task can be significantly alleviated, with a sizable improvement in terms of implementation complexity with respect to previous designs.},\n  keywords = {decoding;OFDM modulation;precoding;radiocommunication;ofdm spectral precoding;per-subcarrier distortion constraints;out-of-band radiation;multicarrier systems;suitable decoder;OBR reduction;implementation complexity;error rate;precoder design;individual data subcarrier;distortion profile;decoding task;OFDM;Complexity theory;Precoding;Decoding;Receivers;Distortion;Europe;OFDM;spectral precoding;out-of-band radiation;sidelobe suppression},\n  doi = {10.23919/EUSIPCO.2019.8903130},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529565.pdf},\n}\n\n
\n
\n\n\n
\n Spectral precoding is effective in reducing out-of-band radiation (OBR) in multicarrier systems, but it introduces distortion in the data, requiring a suitable decoder at the receiver. Thus, trading off OBR reduction, implementation complexity, and error rate is of paramount importance. We present a precoder design which minimizes OBR under constraints on the distortion at each individual data subcarrier. By judiciously choosing the distortion profile, the decoding task can be significantly alleviated, with a sizable improvement in terms of implementation complexity with respect to previous designs.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Statistical Modeling of the Patches DC Component for Low-Frequency Noise Reduction.\n \n \n \n \n\n\n \n Houdard, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"StatisticalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903131,\n  author = {A. Houdard},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Statistical Modeling of the Patches DC Component for Low-Frequency Noise Reduction},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This work is devoted to patch-based image denoising. Assuming an additive white Gaussian noise (AWGN) on patches, we derive corresponding models on centered patches and on their DC components. Then we propose a strategy for improving a given patch-based denoiser. Finally, we provide experiments with the recent denoising method HDMI that show improvement of the denoising quality, particularly for residual low frequency noise.},\n  keywords = {AWGN;image denoising;statistical analysis;statistical modeling;patches DC component;low-frequency noise reduction;patch-based image denoising;additive white Gaussian noise;AWGN;centered patches;DC components;HDMI;denoising quality;residual low frequency noise;Noise reduction;AWGN;Nickel;Noise measurement;Covariance matrices;Symmetric matrices;Europe;patch-based image denoising;noise modeling;low frequency noise reduction;patch centering},\n  doi = {10.23919/EUSIPCO.2019.8903131},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529405.pdf},\n}\n\n
\n
\n\n\n
\n This work is devoted to patch-based image denoising. Assuming an additive white Gaussian noise (AWGN) on patches, we derive corresponding models on centered patches and on their DC components. Then we propose a strategy for improving a given patch-based denoiser. Finally, we provide experiments with the recent denoising method HDMI that show improvement of the denoising quality, particularly for residual low frequency noise.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Inhomogeneously Stacked RNN for Recognizing Hand Gestures from Magnetometer Data.\n \n \n \n \n\n\n \n Koch, P.; Dreier, M.; Böhme, M.; Maass, M.; Phan, H.; and Mertins, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"InhomogeneouslyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903132,\n  author = {P. Koch and M. Dreier and M. Böhme and M. Maass and H. Phan and A. Mertins},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Inhomogeneously Stacked RNN for Recognizing Hand Gestures from Magnetometer Data},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Hand gesture recognition systems relying on biosignal data exclusively are mandatory for a variety of applications. In general, these systems have to meet requirements such as affordability, reliability, and mobility. In general, surface electrodes are used to obtain signals caused by the contraction of underlying muscles of the forearm. These data are then used to decode hand gestures. In this work, we evaluate the possibility of replacing the electrodes by magnetometers that are cheap and can be easily implemented in mobile devices. We propose an inhomogeneously stacked recurrent neural network for classifying hand gestures given magnetometer data. The experiments reveal that the comparably small network significantly outperforms state-of-the-art hand gesture recognition systems relying on multi-modal data. Furthermore, the proposed network requires significantly shorter windows and enables a quickly responding classification system. 
Also, the experiments show that the performance of the proposed system does not vary much between subjects and works outstandingly for amputees.},\n  keywords = {gesture recognition;image classification;magnetometers;mobile computing;recurrent neural nets;magnetometer data;hand gesture recognition systems;multimodal data;stacked RNN;biosignal data;surface electrodes;magnetometers;mobile devices;inhomogeneously stacked recurrent neural network;Computer architecture;Microprocessors;Magnetometers;Training;Nonhomogeneous media;Kernel;Feature extraction;hand movement classification;magnetometer;recurrent neural network;hand prosthesis},\n  doi = {10.23919/EUSIPCO.2019.8903132},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534130.pdf},\n}\n\n
\n
\n\n\n
\n Hand gesture recognition systems relying on biosignal data exclusively are mandatory for a variety of applications. In general, these systems have to meet requirements such as affordability, reliability, and mobility. In general, surface electrodes are used to obtain signals caused by the contraction of underlying muscles of the forearm. These data are then used to decode hand gestures. In this work, we evaluate the possibility of replacing the electrodes by magnetometers that are cheap and can be easily implemented in mobile devices. We propose an inhomogeneously stacked recurrent neural network for classifying hand gestures given magnetometer data. The experiments reveal that the comparably small network significantly outperforms state-of-the-art hand gesture recognition systems relying on multi-modal data. Furthermore, the proposed network requires significantly shorter windows and enables a quickly responding classification system. Also, the experiments show that the performance of the proposed system does not vary much between subjects and works outstandingly for amputees.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Defect Detection From Compressed 3-D Ultrasonic Frequency Measurements.\n \n \n \n \n\n\n \n Semper, S.; Kirchhof, J.; Wagner, C.; Krieg, F.; Roemer, F.; and Del Galdo, G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DefectPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903133,\n  author = {S. Semper and J. Kirchhof and C. Wagner and F. Krieg and F. Roemer and G. {Del Galdo}},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Defect Detection From Compressed 3-D Ultrasonic Frequency Measurements},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose a compressed sensing scheme for volumetric synthetic aperture measurements in ultrasonic nondestructive testing. The compression is achieved by limiting the measurement to a subset of the Fourier coefficients of the full measurement data, where we also address the issue of a suitable hardware architecture for the task. We present a theoretic analysis for one of the proposed schemes in terms of the Restricted Isometry Property and derive a scaling law for the lower bound of the number of necessary measurements for stable and efficient recovery. We verify our approach with reconstructions from measurement data of a steel specimen that was compressed synthetically in software. As a side result, our approach yields a variant of the 3-D Synthetic Aperture Focusing Technique which can deal with compressed data.},\n  keywords = {compressed sensing;ultrasonic materials testing;ultrasonic transducers;3-D Synthetic Aperture Focusing Technique;defect detection;compressed 3-D ultrasonic frequency measurements;compressed sensing scheme;volumetric synthetic aperture measurements;ultrasonic nondestructive testing;Fourier coefficients;Restricted Isometry Property;Ultrasonic variables measurement;Transducers;Acoustics;Hardware;Pulse measurements;Frequency-domain analysis;Time measurement;3D ultrasonic imaging;Sparse Signal Recovery;Compressive Sensing;SAFT},\n  doi = {10.23919/EUSIPCO.2019.8903133},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531180.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a compressed sensing scheme for volumetric synthetic aperture measurements in ultrasonic nondestructive testing. The compression is achieved by limiting the measurement to a subset of the Fourier coefficients of the full measurement data, where we also address the issue of a suitable hardware architecture for the task. We present a theoretic analysis for one of the proposed schemes in terms of the Restricted Isometry Property and derive a scaling law for the lower bound of the number of necessary measurements for stable and efficient recovery. We verify our approach with reconstructions from measurement data of a steel specimen that was compressed synthetically in software. As a side result, our approach yields a variant of the 3-D Synthetic Aperture Focusing Technique which can deal with compressed data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Convex Combination of Spline Adaptive Filters.\n \n \n \n \n\n\n \n Scarpiniti, M.; Comminiello, D.; and Uncini, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ConvexPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903134,\n  author = {M. Scarpiniti and D. Comminiello and A. Uncini},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Convex Combination of Spline Adaptive Filters},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we propose an adaptive and convex combination of a recent class of nonlinear adaptive filters in different configurations. The proposed architecture relies on the properties of the adaptive combination of filters which exploits the capabilities of different constituents, thus adaptively providing at least the behavior of the best performing filter. The nonlinear functions involved in the adaptation process are based on spline function interpolation and their shapes can be modified during learning using gradient-based techniques. In addition, we derive a simple form of the adaptation algorithm and present some experimental results that demonstrate the effectiveness of the proposed method.},\n  keywords = {adaptive filters;gradient methods;interpolation;nonlinear filters;nonlinear functions;splines (mathematics);adaptation algorithm;convex combination;spline adaptive filters;nonlinear adaptive filters;performing filter;nonlinear functions;adaptation process;spline function interpolation;learning using gradient-based techniques;Splines (mathematics);Nonlinear filters;Maximum likelihood detection;Adaptation models;Table lookup;Interpolation;Nonlinear adaptive filter;Convex combination;Flexible spline function;Constrained Least Mean Square;System identification.},\n  doi = {10.23919/EUSIPCO.2019.8903134},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532099.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose an adaptive and convex combination of a recent class of nonlinear adaptive filters in different configurations. The proposed architecture relies on the properties of the adaptive combination of filters which exploits the capabilities of different constituents, thus adaptively providing at least the behavior of the best performing filter. The nonlinear functions involved in the adaptation process are based on spline function interpolation and their shapes can be modified during learning using gradient-based techniques. In addition, we derive a simple form of the adaptation algorithm and present some experimental results that demonstrate the effectiveness of the proposed method.\n
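The adaptive convex combination described above follows a well-known pattern: two component filters run in parallel, their outputs are mixed by a weight λ = σ(a), and a is adapted by gradient descent on the overall error so the combination tracks at least the better component. As a rough illustration only, the following sketch uses two plain LMS filters (not the paper's spline adaptive filters) with hypothetical step sizes:

```python
import numpy as np

def convex_combo_lms(x, d, M=8, mu_fast=0.1, mu_slow=0.01, mu_a=10.0):
    """Adaptive convex combination of two LMS filters (one fast, one slow).

    The mixing weight lambda = sigmoid(a) is adapted by stochastic gradient
    descent on the combined error, so the combination behaves at least as
    well as the better-performing component filter.
    """
    w1 = np.zeros(M)          # fast component filter
    w2 = np.zeros(M)          # slow component filter
    a = 0.0                   # mixing parameter, lambda = sigmoid(a)
    y_out = np.zeros(len(d))
    for n in range(M - 1, len(d)):
        u = x[n - M + 1:n + 1][::-1]      # regressor, most recent sample first
        y1, y2 = w1 @ u, w2 @ u
        lam = 1.0 / (1.0 + np.exp(-a))
        y = lam * y1 + (1 - lam) * y2     # convex combination of outputs
        e = d[n] - y
        # independent LMS updates for each component filter
        w1 += mu_fast * (d[n] - y1) * u
        w2 += mu_slow * (d[n] - y2) * u
        # gradient update of the mixing parameter
        a += mu_a * e * (y1 - y2) * lam * (1 - lam)
        a = np.clip(a, -4.0, 4.0)         # keep lambda away from 0/1 saturation
        y_out[n] = y
    return y_out
```

In a system-identification run, the fast filter dominates the mixture during the initial transient and the slow filter in steady state, with λ drifting accordingly; the spline-based nonlinear components of the paper would replace the linear filters above.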
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Antenna Controller for Low-latency and High Reliability Robotic Communications over Time-varying Fading Channels.\n \n \n \n \n\n\n \n Licea, D. B.; Ghogho, M.; McLernon, D.; and Zaidi, S. A. R.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AntennaPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903135,\n  author = {D. B. Licea and M. Ghogho and D. McLernon and S. A. R. Zaidi},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Antenna Controller for Low-latency and High Reliability Robotic Communications over Time-varying Fading Channels},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper we consider the problem in which a single antenna mobile robot (MR) is required to maintain reliable communication with a base station (BS) while following a given trajectory. The wireless communication channel is assumed to experience time-varying small-scale fading with time-varying coherence time. To achieve high reliability and low latency of the communication link, compensation for the small-scale fading is required. Since the MR has to stay on the predefined trajectory, we propose to compensate for the fading by continuously adjusting the position of the antenna on a revolving platform on-board the MR. This is done by a closed-loop antenna controller which optimises the position of the antenna at all times without modifying the trajectory to be followed by the MR. 
Results show that this technique can effectively compensate for the small-scale fading without introducing delays in data reception.},\n  keywords = {antenna arrays;antennas;delays;fading channels;loop antennas;mobile radio;mobile robots;time-varying channels;wireless channels;closed-loop antenna controller;predefined trajectory;communication link;time-varying coherence time;time-varying small-scale fading;wireless communication channel;base station;reliable communication;single antenna mobile robot;time-varying fading channels;high reliability robotic communications;low-latency;MR;Antennas;Fading channels;Trajectory;Wireless communication;Coherence time;Mathematical model;Mobility diversity;small-scale fading;antenna controller;robotics},\n  doi = {10.23919/EUSIPCO.2019.8903135},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533949.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we consider the problem in which a single antenna mobile robot (MR) is required to maintain reliable communication with a base station (BS) while following a given trajectory. The wireless communication channel is assumed to experience time-varying small-scale fading with time-varying coherence time. To achieve high reliability and low latency of the communication link, compensation for the small-scale fading is required. Since the MR has to stay on the predefined trajectory, we propose to compensate for the fading by continuously adjusting the position of the antenna on a revolving platform on-board the MR. This is done by a closed-loop antenna controller which optimises the position of the antenna at all times without modifying the trajectory to be followed by the MR. Results show that this technique can effectively compensate for the small-scale fading without introducing delays in data reception.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Real-Time Hand Gesture Recognition Model Using Deep Learning Techniques and EMG Signals.\n \n \n \n \n\n\n \n Chung, E. A.; and Benalcázar, M. E.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Real-TimePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903136,\n  author = {E. A. Chung and M. E. Benalcázar},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Real-Time Hand Gesture Recognition Model Using Deep Learning Techniques and EMG Signals},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Gesture recognition has multiple applications in medicine, engineering and robotics. It also allows us to develop new and more natural approaches to human-machine interaction. Real-time hand gesture recognition consists of identifying, with no perceivable delay, a given gesture performed by the hand at any moment. In this paper, we propose a model for real-time hand gesture recognition. The proposed model takes as input electromyographic (EMG) signals measured on the forearm, using the commercial sensor Myo Armband. We use an autoencoder for automatic feature extraction, and an artificial feed-forward neural network for classification. The proposed model can recognize the same 5 gestures as the proprietary recognition system of the Myo Armband, achieving an average recognition accuracy of 85.08% ± 15.21%, with an average response time of 3 ± 1 ms. The proposed model is general, which implies that it can recognize the gestures from any user, even when his\\her data are not included in the training dataset. 
Finally, for reproducing this work, we make publicly available the code that implements the proposed model.},\n  keywords = {electromyography;feature extraction;feedforward neural nets;gesture recognition;human computer interaction;learning (artificial intelligence);medical signal processing;deep learning techniques;EMG signals;real-time hand gesture recognition model;proprietary recognition system;average response time;automatic feature extraction;artificial feed-forward neural network;Electromyography;Gesture recognition;Training;Microsoft Windows;Robot sensing systems;Artificial neural networks;Real-time systems;hand gesture recognition;automatic feature extraction;artificial neural networks;autoencoders},\n  doi = {10.23919/EUSIPCO.2019.8903136},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533724.pdf},\n}\n\n
\n
\n\n\n
\n Gesture recognition has multiple applications in medicine, engineering and robotics. It also allows us to develop new and more natural approaches to human-machine interaction. Real-time hand gesture recognition consists of identifying, with no perceivable delay, a given gesture performed by the hand at any moment. In this paper, we propose a model for real-time hand gesture recognition. The proposed model takes as input electromyographic (EMG) signals measured on the forearm, using the commercial sensor Myo Armband. We use an autoencoder for automatic feature extraction, and an artificial feed-forward neural network for classification. The proposed model can recognize the same 5 gestures as the proprietary recognition system of the Myo Armband, achieving an average recognition accuracy of 85.08% ± 15.21%, with an average response time of 3 ± 1 ms. The proposed model is general, which implies that it can recognize the gestures of any user, even when his/her data are not included in the training dataset. Finally, to allow reproduction of this work, we make publicly available the code that implements the proposed model.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Edge and Cloud-aided Secure Sparse Representation for Face Recognition.\n \n \n \n \n\n\n \n Wang, Y.; Nakachi, T.; and Ishihara, H.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EdgePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903137,\n  author = {Y. Wang and T. Nakachi and H. Ishihara},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Edge and Cloud-aided Secure Sparse Representation for Face Recognition},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Edge and cloud computing has recently emerged not only to meet the ever-increasing computation demands, but also to provide extra degree of diversity by collecting data from the mobile devices in service. However, this, in turn, has raised new technical challenges on the security issue, and calls for the design of new frameworks to exploit multi-device diversity. In this paper, we take the advantage of this benefit while preserving the privacy. Specifically, 1). To address the privacy issue, we develop a low-complexity encrypting algorithm based on random unitary transform, where it is proved both theoretically and through simulation that such encryption will not affect the result of face recognition. 2). To exploit multi-device diversity, we integrate the recognition results based on the dictionaries of each device into an aggregated output through ensemble learning, which has shown higher correctness of predictability than any individual methods. 
The designed framework not only contributes to the reduction of computation complexity at each device, but also proves to be effective and robust through simulation results.},\n  keywords = {cloud computing;computational complexity;cryptography;data privacy;face recognition;learning (artificial intelligence);multidevice diversity;designed framework;computation complexity;face recognition;cloud computing;computation demands;mobile devices;security issue;privacy issue;low-complexity encrypting algorithm;cloud-aided secure sparse representation;Face recognition;Cloud computing;Cryptography;Dictionaries;Mobile handsets;Servers;Privacy},\n  doi = {10.23919/EUSIPCO.2019.8903137},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532325.pdf},\n}\n\n
\n
\n\n\n
\n Edge and cloud computing has recently emerged not only to meet the ever-increasing computation demands, but also to provide an extra degree of diversity by collecting data from the mobile devices in service. However, this, in turn, has raised new technical challenges regarding security, and calls for the design of new frameworks to exploit multi-device diversity. In this paper, we take advantage of this benefit while preserving privacy. Specifically: 1) To address the privacy issue, we develop a low-complexity encrypting algorithm based on a random unitary transform, where it is proved both theoretically and through simulation that such encryption will not affect the result of face recognition. 2) To exploit multi-device diversity, we integrate the recognition results based on the dictionaries of each device into an aggregated output through ensemble learning, which has shown higher correctness of predictability than any individual method. The designed framework not only contributes to the reduction of computation complexity at each device, but also proves to be effective and robust through simulation results.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Identification of Alzheimer’s Disease using Non-linguistic Audio Descriptors.\n \n \n \n\n\n \n Bhat, C.; and Kopparapu, S. K.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903138,\n  author = {C. Bhat and S. K. Kopparapu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Identification of Alzheimer’s Disease using Non-linguistic Audio Descriptors},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Dementia is an overall term used to describe the reduced cognitive functioning in human beings, that is severe enough to impact their daily activities. Early diagnosis of dementia is imperative to provide timely treatment, either medication or therapy to alleviate the effects and sometimes slow the progression of dementia. In this work, we use speech processing and machine learning techniques to automatically classify speech into (a) healthy (HC) (b) with mild cognitive impairment (MCI) or (c) with Alzheimer's disease (AD). Only acoustic non-linguistic parameters are used for this purpose, making this a language independent approach. We evaluate our work using dementia and healthy speech from Pitt corpus of DementiaBank database. The performance of a three class Random Forest classifier is compared with our system comprising multiple two-class Random Forest classifiers cascaded to form a three class classifier, wherein a combination of approximate posterior probabilities is used to obtain a final class probability estimate. additional, patient speech is classified at segment level as well as at overall conversation level. 
Post processing on the patient speech classification at segment level provides a classification accuracy of 82% which is a significant absolute improvement of 8% over a simple three-class classifier performance.},\n  keywords = {cognition;diseases;feature extraction;probability;signal classification;speech processing;support vector machines;Alzheimer's disease;nonlinguistic audio descriptors;dementia;reduced cognitive functioning;human beings;daily activities;early diagnosis;speech processing;nonlinguistic parameters;language independent approach;healthy speech;class Random Forest classifier;final class probability estimate;segment level;patient speech classification;three-class classifier performance;multiple two-class random forest classifiers;Dementia;Feature extraction;Acoustics;Random forests;Task analysis;Databases;Alzheimer’s disease;Dementia;classification;feature selection},\n  doi = {10.23919/EUSIPCO.2019.8903138},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Dementia is an overall term used to describe reduced cognitive functioning in human beings that is severe enough to impact their daily activities. Early diagnosis of dementia is imperative to provide timely treatment, either medication or therapy, to alleviate the effects and sometimes slow the progression of dementia. In this work, we use speech processing and machine learning techniques to automatically classify speech into (a) healthy (HC), (b) with mild cognitive impairment (MCI), or (c) with Alzheimer's disease (AD). Only acoustic non-linguistic parameters are used for this purpose, making this a language independent approach. We evaluate our work using dementia and healthy speech from the Pitt corpus of the DementiaBank database. The performance of a three-class Random Forest classifier is compared with our system comprising multiple two-class Random Forest classifiers cascaded to form a three-class classifier, wherein a combination of approximate posterior probabilities is used to obtain a final class probability estimate. Additionally, patient speech is classified at segment level as well as at overall conversation level. Post-processing on the patient speech classification at segment level provides a classification accuracy of 82%, which is a significant absolute improvement of 8% over a simple three-class classifier.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Power Distribution Insulators Classification Using Image Hybrid Deep Learning.\n \n \n \n \n\n\n \n Filho, E. F. S.; Prates, R. M.; Ramos, R. P.; and Cardoso, J. S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"PowerPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903139,\n  author = {E. F. S. Filho and R. M. Prates and R. P. Ramos and J. S. Cardoso},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Power Distribution Insulators Classification Using Image Hybrid Deep Learning},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The Overhead Power Distribution Lines present a wide range of insulator components, which have different shapes and types of building materials. These components are usually exposed to weather and operational conditions that may cause deviations in their shapes, colors or textures. These changes might hinder the development of automatic systems for visual inspection. In this perspective, this work presents a robust methodology for image classification, which aims at the efficient distribution insulator class identification, regardless of its degradation level. This work can be characterized by the following steps: implementation of Convolutional Neural Network (CNN); transfer learning; attribute vector acquisition and design of hybrid classifier architectures to improve the discrimination efficiency. In summary, a previously trained CNN goes through a fine tuning stage for later use as a feature extractor for training a new set of classifiers. A comparative study was conducted to identify which classifier architecture obtained the best discrimination performance for non-conforming components. 
The proposed methodology showed a significant improvement in classification performance, obtaining 95% overall accuracy in the identification of non-conforming component classes.},\n  keywords = {convolutional neural nets;feature extraction;image classification;inspection;insulators;learning (artificial intelligence);power distribution lines;power engineering computing;power overhead lines;fine tuning stage;nonconforming components;image hybrid deep learning;overhead power distribution lines;insulator components;building materials;operational conditions;automatic systems;visual inspection;image classification performance;degradation level;convolutional neural network;transfer learning;attribute vector acquisition;hybrid classifier architectures;power distribution insulators classification;distribution insulator class identification;feature extractor;Insulators;Training;Vegetation;Inspection;Support vector machines;Visualization;Convolutional neural networks;distribution insulators;convolutional neural network (CNN);transfer learning;hybrid classifiers},\n  doi = {10.23919/EUSIPCO.2019.8903139},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533893.pdf},\n}\n\n
\n
\n\n\n
\n The Overhead Power Distribution Lines present a wide range of insulator components, which have different shapes and types of building materials. These components are usually exposed to weather and operational conditions that may cause deviations in their shapes, colors or textures. These changes might hinder the development of automatic systems for visual inspection. In this perspective, this work presents a robust methodology for image classification, which aims at the efficient distribution insulator class identification, regardless of its degradation level. This work can be characterized by the following steps: implementation of Convolutional Neural Network (CNN); transfer learning; attribute vector acquisition and design of hybrid classifier architectures to improve the discrimination efficiency. In summary, a previously trained CNN goes through a fine tuning stage for later use as a feature extractor for training a new set of classifiers. A comparative study was conducted to identify which classifier architecture obtained the best discrimination performance for non-conforming components. The proposed methodology showed a significant improvement in classification performance, obtaining 95% overall accuracy in the identification of non-conforming component classes.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Block Sparsity-Based DOA Estimation with Sensor Gain and Phase Uncertainties.\n \n \n \n \n\n\n \n Huang, H.; Fauß, M.; and Zoubir, A. M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"BlockPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903140,\n  author = {H. Huang and M. Fauß and A. M. Zoubir},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Block Sparsity-Based DOA Estimation with Sensor Gain and Phase Uncertainties},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper investigates the problem of direction-of-70arrival (DOA) estimation in the presence of unknown sensor gain and phase uncertainties. A novel method based on a block sparse representation is proposed to estimate the directions of sources. The data model is constructed under the framework of block sparse signal representation. Then, a convex problem is formulated to find the directions of the incident signals, and the problem can be solved using the L1-SVD algorithm. Unlike the existing eigenstructure-based methods and other sparsity-based methods which require appropriate initial values of the unknown sensor gain and phase errors for iterating between unknown sensor errors and angles of sources, the proposed block sparsity-based DOA estimation technique does not need any prior knowledge about the array errors. 
Numerical simulations exhibit the effectiveness and superiority of the proposed method.},\n  keywords = {array signal processing;direction-of-arrival estimation;eigenvalues and eigenfunctions;matrix algebra;signal representation;singular value decomposition;direction-of-arrival estimation;phase uncertainties;block sparse representation;data model;block sparse signal representation;convex problem;incident signals;L1-SVD algorithm;eigenstructure-based methods;sparsity-based methods;phase errors;sensor errors;block sparsity-based DOA estimation technique;Direction-of-arrival estimation;Estimation;Signal to noise ratio;Uncertainty;Sensor arrays;Signal processing algorithms;DOA estimation;sensor gain and phase error;block sparse representation},\n  doi = {10.23919/EUSIPCO.2019.8903140},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533809.pdf},\n}\n\n
\n
\n\n\n
\n This paper investigates the problem of direction-of-arrival (DOA) estimation in the presence of unknown sensor gain and phase uncertainties. A novel method based on a block sparse representation is proposed to estimate the directions of sources. The data model is constructed under the framework of block sparse signal representation. Then, a convex problem is formulated to find the directions of the incident signals, and the problem can be solved using the L1-SVD algorithm. Unlike the existing eigenstructure-based methods and other sparsity-based methods which require appropriate initial values of the unknown sensor gain and phase errors for iterating between unknown sensor errors and angles of sources, the proposed block sparsity-based DOA estimation technique does not need any prior knowledge about the array errors. Numerical simulations exhibit the effectiveness and superiority of the proposed method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Improving FISTA’s Speed of Convergence via a Novel Inertial Sequence.\n \n \n \n\n\n \n Rodriguez, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903141,\n  author = {P. Rodriguez},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Improving FISTA’s Speed of Convergence via a Novel Inertial Sequence},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The FISTA (fast iterative shrinkage-thresholding algorithm) is a well-known and fast (theoretical O(k-2) rate of convergence) procedure for solving optimization problems composed by the sum of two convex functions, such that one is smooth (differentiable) and the other is possible nonsmooth. FISTA can be understood as a first order method with one important aspect: it uses a suitable extragradient rule, i.e.: the gradient is evaluated at a linear combination of the past two iterates, whose weights, are usually referred to as the inertial sequence. While problem dependent, it has a direct impact on the FISTA's practical computational performance. In this paper we propose a novel inertial sequence; when compared to well-established alternative choices, in the context of convolutional sparse coding and Wavelet-based inpainting, our proposed inertial sequence can reduce the number of FISTA's global iterations (and thus overall computational time) by 30% ~ 50% to attain the same level of reduction in the cost functional.},\n  keywords = {gradient methods;image restoration;image segmentation;image sequences;optimisation;wavelet transforms;fast iterative shrinkage-thresholding algorithm;optimization problems;convex functions;extragradient rule;inertial sequence;FISTA global iterations;FISTA speed improvement;wavelet-based inpainting;convolutional sparse coding;Convolutional codes;Convolution;Convergence;Acceleration;Iterative algorithms;Encoding;Gradient methods;FISTA;inertial sequence;proximal gradient method;convolutional sparse coding;Wavelet-based inpainting},\n  doi = {10.23919/EUSIPCO.2019.8903141},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The FISTA (fast iterative shrinkage-thresholding algorithm) is a well-known and fast (theoretical O(1/k²) rate of convergence) procedure for solving optimization problems composed of the sum of two convex functions, where one is smooth (differentiable) and the other is possibly nonsmooth. FISTA can be understood as a first-order method with one important aspect: it uses a suitable extragradient rule, i.e., the gradient is evaluated at a linear combination of the past two iterates, whose weights are usually referred to as the inertial sequence. While problem dependent, the inertial sequence has a direct impact on FISTA's practical computational performance. In this paper we propose a novel inertial sequence; when compared to well-established alternative choices, in the context of convolutional sparse coding and wavelet-based inpainting, our proposed inertial sequence can reduce the number of FISTA's global iterations (and thus the overall computational time) by 30% to 50% to attain the same level of reduction in the cost functional.\n
\n\n\n
\n\n\n
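The abstract above describes FISTA's extragradient step, in which the gradient is evaluated at a linear combination of the past two iterates weighted by the inertial sequence. As a minimal sketch (using the standard Nesterov sequence t_{k+1} = (1 + √(1 + 4t_k²))/2, not the paper's novel sequence), here is FISTA applied to the lasso problem 0.5‖Ax − b‖² + λ‖x‖₁:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for the lasso with the standard Nesterov inertial sequence."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_next = soft_threshold(y - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # extragradient step: the weights (t-1)/t_next form the inertial sequence
        y = x_next + (t - 1.0) / t_next * (x_next - x)
        x, t = x_next, t_next
    return x
```

The paper's contribution replaces the recursion for t with a different inertial sequence; swapping in another rule only changes the lines that update t and y.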
\n \n\n \n \n \n \n \n \n Discriminative Joint Vector And Component Reduction For Gaussian Mixture Models.\n \n \n \n \n\n\n \n Bar-Yosef, Y.; and Bistritz, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DiscriminativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903142,\n  author = {Y. Bar-Yosef and Y. Bistritz},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Discriminative Joint Vector And Component Reduction For Gaussian Mixture Models},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We introduce a discriminative parametric vector dimensionality reduction algorithm for Gaussian mixtures that is performed jointly with mixture component reduction. The reduction algorithm is based on the variational maximum mutual information (VMMI) method, which in contrast to other reduction algorithms, requires only the parameters of existing high order and high dimensional mixture models. The idea behind the proposed approach, called JVC-VMMI (for joint vector and component VMMI), differs significantly from traditional classification approaches that perform separately dimensionality reduction first, and then use the low-dimensional feature vector for training lower order models. The fact that the JVC-VMMI approach is relieved from using the original data samples admits an extremely efficient computation of the reduced models optimized for the classification task. 
We report experiments in vowel classification in which JVC-VMMI outperformed conventional Linear Discriminant Analysis (LDA) and Neighborhood Component Analysis (NCA) dimensionality reduction methods.},\n  keywords = {feature extraction;Gaussian processes;learning (artificial intelligence);linear discriminant analysis;mixture models;optimisation;pattern classification;discriminative parametric vector dimensionality reduction algorithm;Gaussian mixtures;mixture component reduction;variational maximum mutual information method;high dimensional mixture models;low-dimensional feature vector;JVC-VMMI approach;linear discriminant analysis;discriminative joint vector;Gaussian mixture models;neighborhood component analysis dimensionality reduction methods;classification approaches;joint vector-and-component VMMI;Optimization;Computational modeling;Mixture models;Dimensionality reduction;Mutual information;Mathematical model;Training;Dimensionality reduction;Gaussian mixture models;Discriminative learning;Hierarchical clustering},\n  doi = {10.23919/EUSIPCO.2019.8903142},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529375.pdf},\n}\n\n
\n
\n\n\n
\n We introduce a discriminative parametric vector dimensionality reduction algorithm for Gaussian mixtures that is performed jointly with mixture component reduction. The reduction algorithm is based on the variational maximum mutual information (VMMI) method, which, in contrast to other reduction algorithms, requires only the parameters of existing high-order and high-dimensional mixture models. The idea behind the proposed approach, called JVC-VMMI (for joint vector and component VMMI), differs significantly from traditional classification approaches that first perform dimensionality reduction separately and then use the low-dimensional feature vector for training lower-order models. Because the JVC-VMMI approach does not require the original data samples, it admits an extremely efficient computation of the reduced models optimized for the classification task. We report experiments in vowel classification in which JVC-VMMI outperformed conventional Linear Discriminant Analysis (LDA) and Neighborhood Component Analysis (NCA) dimensionality reduction methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficiency of the Memory Polynomial Model in Realizing Digital Twins for Gait Assessment.\n \n \n \n \n\n\n \n Alcaraz, J. C.; Moghaddamnia, S.; Fuhrwerk, M.; and Peissig, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EfficiencyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903143,\n  author = {J. C. Alcaraz and S. Moghaddamnia and M. Fuhrwerk and J. Peissig},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficiency of the Memory Polynomial Model in Realizing Digital Twins for Gait Assessment},\n  year = {2019},\n  pages = {1-5},\n  abstract = {One of the key issues of multi-sensory digital healthcare and therapy is the reliability and user compliance of the applied sensor system. In the context of digital gait analysis and rehabilitation, different technologies have been proposed allowing objective gait assessment and precise quantification of the rehabilitation progress using Inertial Measurement Unit (IMU) platforms. However, this depends largely on the estimation accuracy of the kinematics (body joint angles). This paper presents the concept of a digital equivalent based on the Memory Polynomial Model (MPM) to reduce the number of IMUs needed for the measurements and to simulate the physical mechanism of lower body joint angles based on acceleration data. The MPM parameter estimation is based on the Least Square (LS) approach and is performed using accelerometer records of non-pathological gait patterns. The Normalized Mean Square Error (NMSE) is used to evaluate the performance of the model. According to the results an NMSE of -20 dB is achieved, which indicates the great potential of applying the MPM to develop a digital twin. 
That kind of twin can serve as a prototype of the physical counterpart to improve the wearability of the sensor system and to reduce physically induced measurement errors as well.},\n  keywords = {accelerometers;gait analysis;kinematics;least squares approximations;mean square error methods;measurement errors;parameter estimation;patient rehabilitation;polynomials;memory polynomial model;digital twin;reliability;user compliance;sensor system;digital gait analysis;objective gait assessment;precise quantification;rehabilitation progress;inertial measurement unit;estimation accuracy;kinematics;physical mechanism;lower body joint angles;MPM parameter estimation;accelerometer;acceleration data;multisensory digital healthcare;physically induced measurement errors;NMSE;normalized mean square error;nonpathological gait patterns;least square approach;Mathematical model;Estimation;Computational modeling;Signal processing;Hip;Digital twin;Accelerometers;Gait rehabilitation;Nonlinear time-varying modeling;IMU;Multi-sensor integration;Digital twin;Machine learning},\n  doi = {10.23919/EUSIPCO.2019.8903143},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533335.pdf},\n}\n\n
\n
\n\n\n
\n One of the key issues of multi-sensory digital healthcare and therapy is the reliability and user compliance of the applied sensor system. In the context of digital gait analysis and rehabilitation, different technologies have been proposed that allow objective gait assessment and precise quantification of the rehabilitation progress using Inertial Measurement Unit (IMU) platforms. However, this depends largely on the estimation accuracy of the kinematics (body joint angles). This paper presents the concept of a digital equivalent based on the Memory Polynomial Model (MPM) to reduce the number of IMUs needed for the measurements and to simulate the physical mechanism of lower body joint angles based on acceleration data. The MPM parameter estimation is based on the Least Squares (LS) approach and is performed using accelerometer records of non-pathological gait patterns. The Normalized Mean Square Error (NMSE) is used to evaluate the performance of the model. According to the results, an NMSE of -20 dB is achieved, which indicates the great potential of applying the MPM to develop a digital twin. Such a twin can serve as a prototype of the physical counterpart to improve the wearability of the sensor system and to reduce physically induced measurement errors as well.\n
\n\n\n
\n\n\n
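For context on the model named above: the memory polynomial is a standard nonlinear model with memory, y(n) = Σ_{k=1..K} Σ_{m=0..M−1} a_{k,m} x(n−m) |x(n−m)|^{k−1}, and its least-squares fit reduces to a single linear regression. A minimal generic sketch (not the paper's gait-specific setup; K, M and the synthetic data are illustrative):

```python
import numpy as np

def mpm_matrix(x, K, M):
    """Regression matrix for a memory polynomial model:
    y(n) = sum_{k=1..K} sum_{m=0..M-1} a_{k,m} x(n-m) |x(n-m)|^{k-1}."""
    N = len(x)
    cols = []
    for k in range(1, K + 1):
        for m in range(M):
            xm = np.concatenate([np.zeros(m), x[:N - m]])  # input delayed by m samples
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

def fit_mpm(x, y, K=3, M=2):
    """Least-squares estimate of the K*M memory polynomial coefficients."""
    Phi = mpm_matrix(x, K, M)
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return a
```

Because the model is linear in its coefficients, noiseless synthetic data generated from known a_{k,m} are recovered exactly (up to numerical precision) by the LS fit.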
\n \n\n \n \n \n \n \n \n Adaptive Beamforming Based on Time Modulated Array with Harmonic Characteristic Analysis.\n \n \n \n \n\n\n \n He, C.; Yi, G.; and Chen, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AdaptivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903144,\n  author = {C. He and G. Yi and J. Chen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive Beamforming Based on Time Modulated Array with Harmonic Characteristic Analysis},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Adaptive beamforming based on the time modulated array (TMA) is proposed and verified, which exploits the characteristic of fundamental and harmonic components after the time modulation. By establishing the relationship between time modulation sequences added to RF switches of the TMA and generated harmonic components of received signals, the complex weights needed in adaptive beamforming can be calculated. Then the weights are mapped to time modulation sequences added to RF switches to generate the beam pointing to the received signal at the frequency of the positive first harmonic component. Compared to the adaptive beamforming based on the digital array, the proposed method exploits only a single RF channel, while its main calculation locates the Discrete Fourier Transform (DFT) of several harmonic components. Therefore, the proposed method has a simple structure and a concise signal processing. Numeric simulations are provided to show its effectiveness.},\n  keywords = {array signal processing;discrete Fourier transforms;harmonic analysis;adaptive beamforming;time modulated array;harmonic characteristic;TMA;harmonic components;time modulation sequences;RF switches;received signal;positive first harmonic component;digital array;Harmonic analysis;Array signal processing;Modulation;Radio frequency;Power harmonic filters;Manifolds;adaptive beamforming;time modulation;discrete Fourier transform;harmonic analysis;pattern synthesis},\n  doi = {10.23919/EUSIPCO.2019.8903144},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528928.pdf},\n}\n\n
\n
\n\n\n
\n Adaptive beamforming based on the time modulated array (TMA) is proposed and verified; it exploits the characteristics of the fundamental and harmonic components after time modulation. By establishing the relationship between the time modulation sequences applied to the RF switches of the TMA and the generated harmonic components of the received signals, the complex weights needed in adaptive beamforming can be calculated. The weights are then mapped back to time modulation sequences applied to the RF switches to generate a beam pointing at the received signal at the frequency of the positive first harmonic component. Compared to adaptive beamforming based on a digital array, the proposed method uses only a single RF channel, while its main computational cost lies in the Discrete Fourier Transform (DFT) of several harmonic components. Therefore, the proposed method has a simple structure and concise signal processing. Numerical simulations are provided to show its effectiveness.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Evaluation of Modifications to CPPPSNR in 360° Video Quality Assessment.\n \n \n \n\n\n \n Adhuran, J.; Kulupana, G.; Galkandage, C.; and Fernando, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903145,\n  author = {J. Adhuran and G. Kulupana and C. Galkandage and A. Fernando},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Evaluation of Modifications to CPPPSNR in 360° Video Quality Assessment},\n  year = {2019},\n  pages = {1-5},\n  abstract = {360° videos are represented in spherical projection formats and the video quality of such videos is assessed using spherical objective quality metrics. Furthermore, the objective video quality between two different spherical projection formats can be evaluated using Cross projection metrics. Craster parabola, is a 2D cross projection format which is used by Craster Parabolic Peak Signal-to-Noise Ratio (CPPPSNR) metric. The existing CPPPSNR measurements do not consider the subsampling locations during the quality assessment to match the pixel density of a sphere. Nevertheless, it is vitally important to account for the oversampled projection formats and the sphere in order to be compatible with the existing video encoding architectures. To this end, the proposed improvements to the CPPPSNR locates the subsample points during craster parabolic projection and use nearest neighbor interpolation to assign pixels from the craster parabolic projection. Furthermore, in order to compensate the occurrence of oversampling, appropriate weights are applied to the corresponding pixels. The proposed method was tested with Shanghai Jiao Ton University (SJTU) Virtual Reality (VR) sequences for projection conversion. 
The comparison between Spherical PSNR (SPSNR) and existing CPPPSNR, validate the proposed CPPPSNR as an objective quality metric for cross projections.},\n  keywords = {interpolation;video signal processing;virtual reality;video quality assessment;spherical objective quality metrics;2D cross projection;Craster parabolic peak signal-to-noise ratio metric;subsampling locations;pixel density;oversampled projection formats;craster parabolic projection;projection conversion;spherical PSNR;objective quality metric;spherical projection formats;CPPPSNR measurements;video encoding architectures;virtual reality sequences;Shanghai Jiao Ton University;SJTU;nearest neighbor interpolation;Measurement;Streaming media;Two dimensional displays;Quality assessment;Encoding;Video recording;Interpolation;360° videos;projection;objective quality;craster parabola;CPPPSNR;ERP;Cubemap;CISP},\n  doi = {10.23919/EUSIPCO.2019.8903145},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n 360° videos are represented in spherical projection formats, and the video quality of such videos is assessed using spherical objective quality metrics. Furthermore, the objective video quality between two different spherical projection formats can be evaluated using cross projection metrics. The Craster parabolic projection is a 2D cross projection format used by the Craster Parabolic Peak Signal-to-Noise Ratio (CPPPSNR) metric. Existing CPPPSNR measurements do not consider the subsampling locations during quality assessment to match the pixel density of a sphere. Nevertheless, it is vitally important to account for the oversampled projection formats and the sphere in order to be compatible with existing video encoding architectures. To this end, the proposed improvements to CPPPSNR locate the subsample points during Craster parabolic projection and use nearest-neighbor interpolation to assign pixels from the Craster parabolic projection. Furthermore, to compensate for oversampling, appropriate weights are applied to the corresponding pixels. The proposed method was tested with Shanghai Jiao Tong University (SJTU) Virtual Reality (VR) sequences for projection conversion. The comparison between Spherical PSNR (SPSNR) and the existing CPPPSNR validates the proposed CPPPSNR as an objective quality metric for cross projections.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A New Metric to Evaluate Auditory Attention Detection Performance Based on a Markov Chain.\n \n \n \n \n\n\n \n Geirnaert, S.; Francart, T.; and Bertrand, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903146,\n  author = {S. Geirnaert and T. Francart and A. Bertrand},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A New Metric to Evaluate Auditory Attention Detection Performance Based on a Markov Chain},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Auditory attention detection (AAD) is an essential building block for future generations of `neuro-steered' hearing prostheses. In a multi-speaker scenario, it uses neural recordings to detect to which speaker the listener is attending and assists as such the noise reduction algorithm within the hearing device. Recently, a multitude of these AAD algorithms has been developed, based on electroencephalography (EEG) recordings. With the emergence of AAD algorithms, a standardized way of evaluating these AAD algorithms becomes paramount. However, this is not trivial due to an inherent trade-off between detection delay and accuracy. We propose a new performance metric to evaluate AAD algorithms that resolves this trade-off: the expected switch duration (ESD). The ESD is based on a Markov chain model of an adaptive gain control system in a hearing aid and quantifies the expected time needed to switch its operation from one speaker to another when the attention is switched. 
We validate the metric on simulated data and show on real EEG recordings that it is an interpretable metric that allows fair comparison between AAD algorithms, combining both the accuracy of the AAD algorithm and the time needed to make a decision.},\n  keywords = {auditory evoked potentials;electroencephalography;hearing;hearing aids;Markov processes;medical signal detection;medical signal processing;neurophysiology;neuro-steered hearing prostheses;noise reduction algorithm;auditory attention detection performance;multi-speaker scenario;Markov chain model;adaptive gain control system;expected switch duration;real EEG recordings;Switches;Markov processes;Hearing aids;Steady-state;Signal processing algorithms;Electroencephalography},\n  doi = {10.23919/EUSIPCO.2019.8903146},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533767.pdf},\n}\n\n
\n
\n\n\n
\n Auditory attention detection (AAD) is an essential building block for future generations of `neuro-steered' hearing prostheses. In a multi-speaker scenario, it uses neural recordings to detect to which speaker the listener is attending and thereby assists the noise reduction algorithm within the hearing device. Recently, a multitude of AAD algorithms has been developed based on electroencephalography (EEG) recordings. With the emergence of AAD algorithms, a standardized way of evaluating them becomes paramount. However, this is not trivial due to an inherent trade-off between detection delay and accuracy. We propose a new performance metric for evaluating AAD algorithms that resolves this trade-off: the expected switch duration (ESD). The ESD is based on a Markov chain model of an adaptive gain control system in a hearing aid and quantifies the expected time needed to switch its operation from one speaker to another when the attention is switched. We validate the metric on simulated data and show on real EEG recordings that it is an interpretable metric that allows fair comparison between AAD algorithms, combining both the accuracy of the AAD algorithm and the time needed to make a decision.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Partially Adversarial Learning and Adaptation.\n \n \n \n \n\n\n \n Chien, J. -.; and Lyu, Y. -.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"PartiallyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903147,\n  author = {J. -T. Chien and Y. -Y. Lyu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Partially Adversarial Learning and Adaptation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {An image classification system for a specific target domain is usually trained with initialization from a source domain given with a large number of classes, particularly in an application of image recognition. The classes in target domain are usually seen as a subset in source domain. Partial domain adaptation aims to tackle this generalization issue where no labeled data are provided in target domain. This paper presents an adversarial learning for partial domain adaptation where a symmetric metric based on the Wasserstein distance is adopted in an adversarial learning objective. We build a Wasserstein partial transfer network where the Wasserstein adversarial objective is jointly optimized to partially transfer the relevance knowledge from source to target domains. The geometric property for optimal transport is assured to mitigate the gradient vanishing problem in adversarial training. The neural network components for feature extraction, relevance transfer, domain matching and task classification are jointly trained by solving a minimax optimization over multiple objectives. 
Experiments on image classification show the merits of the proposed partially adversarial domain adaptation over different tasks.},\n  keywords = {feature extraction;image classification;image recognition;learning (artificial intelligence);minimax techniques;neural nets;source domain;image recognition;partial domain adaptation;adversarial learning objective;Wasserstein partial transfer network;Wasserstein adversarial objective;domain matching;task classification;partially adversarial domain adaptation;image classification system;target domain;minimax optimization;partially adversarial learning;neural network;feature extraction;relevance transfer;Feature extraction;Generative adversarial networks;Gallium nitride;Task analysis;Training;Optimization;Measurement;image classification;domain adaptation;deep learning;adversarial learning;partial transfer},\n  doi = {10.23919/EUSIPCO.2019.8903147},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532317.pdf},\n}\n\n
\n
\n\n\n
\n An image classification system for a specific target domain is usually trained with initialization from a source domain that has a large number of classes, particularly in image recognition applications. The classes in the target domain are usually a subset of those in the source domain. Partial domain adaptation aims to tackle this generalization issue when no labeled data are provided in the target domain. This paper presents an adversarial learning approach for partial domain adaptation in which a symmetric metric based on the Wasserstein distance is adopted in the adversarial learning objective. We build a Wasserstein partial transfer network where the Wasserstein adversarial objective is jointly optimized to partially transfer the relevance knowledge from the source to the target domain. The geometric property of optimal transport helps mitigate the vanishing-gradient problem in adversarial training. The neural network components for feature extraction, relevance transfer, domain matching and task classification are jointly trained by solving a minimax optimization over multiple objectives. Experiments on image classification show the merits of the proposed partially adversarial domain adaptation over different tasks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A New Signal-Selective Wide-Band Ambiguity Function.\n \n \n \n \n\n\n \n Napolitano, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903148,\n  author = {A. Napolitano},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A New Signal-Selective Wide-Band Ambiguity Function},\n  year = {2019},\n  pages = {1-5},\n  abstract = {A new signal-selective wide-band cross-ambiguity function is introduced. It performs a sinusoidally weighted correlation of a signal with a time-scaled and delayed version of a reference signal. If the reference signal is almost cyclostationary and the frequency of the weighting sinusoid is one of its cycle frequencies properly scaled, the new function, referred to as the wide-band cyclic cross-correlation function, exhibits the signal selectivity properties that are typical of cyclostationarity-based techniques. The new function is exploited for the localization and speed estimation of a wide-band moving source whose signal impinges on two sensors in a severe noise and interference environment.},\n  keywords = {correlation theory;sensors;signal processing;cycle frequencies;wide-band cyclic cross-correlation function;signal selectivity properties;wide-band moving source;ambiguity function;signal-selective wide-band cross-ambiguity;sinusoidally weighted correlation;reference signal;interference environment;speed estimation;noise environment;cyclostationarity-based techniques;sensors;localization;Sensors;Estimation;Interference;Doppler effect;Europe;Signal processing;Time-frequency analysis;cyclostationarity;wide-band ambiguity function;moving source localization},\n  doi = {10.23919/EUSIPCO.2019.8903148},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533130.pdf},\n}\n\n
\n
\n\n\n
\n A new signal-selective wide-band cross-ambiguity function is introduced. It performs a sinusoidally weighted correlation of a signal with a time-scaled and delayed version of a reference signal. If the reference signal is almost cyclostationary and the frequency of the weighting sinusoid is one of its cycle frequencies properly scaled, the new function, referred to as the wide-band cyclic cross-correlation function, exhibits the signal selectivity properties that are typical of cyclostationarity-based techniques. The new function is exploited for the localization and speed estimation of a wide-band moving source whose signal impinges on two sensors in a severe noise and interference environment.\n
\n\n\n
\n\n\n
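For reference, the classical wide-band cross-ambiguity function that the paper builds on is W(τ, s) = √s ∫ x(t) y*(s(t − τ)) dt; the paper's signal-selective variant additionally weights the correlation with a sinusoid at a scaled cycle frequency of the reference. A minimal discrete-time sketch of the classical form only (the sinusoidal weighting is omitted, and the time scaling is handled by simple linear interpolation of real-valued signals):

```python
import numpy as np

def wideband_ambiguity(x, y, fs, tau, scale):
    """One sample of the wide-band cross-ambiguity function:
    sqrt(scale) * sum_n x(t_n) * conj(y(scale * (t_n - tau))) / fs,
    with the time-scaled, delayed reference obtained by linear interpolation."""
    t = np.arange(len(x)) / fs
    t_ref = np.arange(len(y)) / fs
    # resample the reference at the scaled, delayed time instants (zero outside)
    y_scaled = np.interp(scale * (t - tau), t_ref, y, left=0.0, right=0.0)
    return np.sqrt(scale) * np.sum(x * np.conj(y_scaled)) / fs
```

Evaluating this over a grid of (tau, scale) values yields the ambiguity surface; its peak locates the delay and Doppler scale of the reference within the received signal.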
\n \n\n \n \n \n \n \n \n Analytical Method to Convert Circular Harmonic Expansion Coefficients for Sound Field Synthesis by Using Multipole Loudspeaker Array.\n \n \n \n \n\n\n \n Tsutsumi, K.; Imaizumi, K.; Nakadaira, A.; and Haneda, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AnalyticalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903149,\n  author = {K. Tsutsumi and K. Imaizumi and A. Nakadaira and Y. Haneda},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Analytical Method to Convert Circular Harmonic Expansion Coefficients for Sound Field Synthesis by Using Multipole Loudspeaker Array},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We propose a method to synthesize the two-dimensional (2D) exterior sound field of a directional sound source using a Cartesian multipole loudspeaker array in which each loudspeaker unit is located on a Cartesian grid. We also propose an analytical method that models the sound field of the desired directional source in order to obtain weighting coefficients for each multipole from the circular harmonic expansion coefficients. The conversion method is derived by comparing the sound field created by a higher-order derivative of the free field Green's function and the corresponding field expressed by circular harmonic expansion coefficients in 2D space. In contrast to an existing analytical conversion method, the proposed method reproduces not only directivity patterns but also phases of the radiated sound from a target sound source, thereby enabling accurate sound field synthesis. 
We used numerical simulations to show that the proposed method achieved more accurate sound field reproduction than an existing pressure-matching-based method at higher frequency regions.},\n  keywords = {acoustic field;acoustic radiators;acoustic signal processing;Green's function methods;loudspeakers;sound reproduction;analytical method;convert circular harmonic expansion coefficients;two-dimensional exterior sound field;directional sound source;Cartesian multipole loudspeaker array;loudspeaker unit;Cartesian grid;desired directional source;weighting coefficients;higher-order derivative;free field Green's function;corresponding field;existing analytical conversion method;directivity patterns;radiated sound;target sound source;accurate sound field synthesis;sound field reproduction;pressure-matching-based method;Harmonic analysis;Two dimensional displays;Green's function methods;Loudspeakers;Europe;Acoustics;multipole loudspeaker array;circular harmonics;sound field synthesis;analytical method;Cartesian multipole},\n  doi = {10.23919/EUSIPCO.2019.8903149},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532633.pdf},\n}\n\n
\n
\n\n\n
\n We propose a method to synthesize the two-dimensional (2D) exterior sound field of a directional sound source using a Cartesian multipole loudspeaker array in which each loudspeaker unit is located on a Cartesian grid. We also propose an analytical method that models the sound field of the desired directional source in order to obtain weighting coefficients for each multipole from the circular harmonic expansion coefficients. The conversion method is derived by comparing the sound field created by a higher-order derivative of the free field Green's function and the corresponding field expressed by circular harmonic expansion coefficients in 2D space. In contrast to an existing analytical conversion method, the proposed method reproduces not only directivity patterns but also phases of the radiated sound from a target sound source, thereby enabling accurate sound field synthesis. We used numerical simulations to show that the proposed method achieved more accurate sound field reproduction than an existing pressure-matching-based method at higher frequency regions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rethinking Sampling in Parallel MRI: A Data-Driven Approach.\n \n \n \n \n\n\n \n Gözcü, B.; Sanchez, T.; and Cevher, V.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"RethinkingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903150,\n  author = {B. Gözcü and T. Sanchez and V. Cevher},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Rethinking Sampling in Parallel MRI: A Data-Driven Approach},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In the last decade, Compressive Sensing (CS) has emerged as the most promising, model-driven approach to accelerate MRI scans. CS relies on the key sparsity assumption and proposes random sampling for data acquisition. The practical CS approaches in MRI employ variable-density (VD) sampling, where samples are drawn at random based on a parametric probability model which focuses on the center of the Fourier domain. In stark contrast to these model-driven sampling approaches, we propose a data-driven framework for optimizing sampling in parallel (multi-coil) MRI. Our approach does not assume any structure in the data, and instead optimizes a performance metric (e.g. PSNR) for any given reconstruction algorithm, based on our earlier learning-based sampling framework previously applied to 2D MRI, which we also extend to the 3D MRI setting in this work by employing lazy evaluations in the greedy algorithm. We show boosted performance for parallel MRI based on this sampling approach and highlight the inefficiency of variable-density approaches. 
This suggests that data-driven sampling methods could be the key to unlocking the full power of CS applied to MRI.},\n  keywords = {biomedical MRI;compressed sensing;data acquisition;greedy algorithms;image reconstruction;image sampling;medical image processing;probability;variable density approaches;data-driven sampling methods;parallel MRI;compressive sensing;MRI scans;sparsity assumption;random sampling;data acquisition;variable-density sampling;parametric probability model;model-driven sampling approaches;data-driven framework;sampling approach;learning-based sampling;Fourier domain;reconstruction algorithm;3D MRI;greedy algorithm;Magnetic resonance imaging;Decoding;Greedy algorithms;Compressed sensing;Three-dimensional displays;Signal processing algorithms;Training;Parallel MRI;compressive sensing;learning-based subsampling;greedy algorithm},\n  doi = {10.23919/EUSIPCO.2019.8903150},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534135.pdf},\n}\n\n
\n
\n\n\n
\n In the last decade, Compressive Sensing (CS) has emerged as the most promising, model-driven approach to accelerate MRI scans. CS relies on the key sparsity assumption and proposes random sampling for data acquisition. The practical CS approaches in MRI employ variable-density (VD) sampling, where samples are drawn at random based on a parametric probability model which focuses on the center of the Fourier domain. In stark contrast to these model-driven sampling approaches, we propose a data-driven framework for optimizing sampling in parallel (multi-coil) MRI. Our approach does not assume any structure in the data, and instead optimizes a performance metric (e.g. PSNR) for any given reconstruction algorithm, based on our earlier learning-based sampling framework previously applied to 2D MRI, which we also extend to the 3D MRI setting in this work by employing lazy evaluations in the greedy algorithm. We show boosted performance for parallel MRI based on this sampling approach and highlight the inefficiency of variable-density approaches. This suggests that data-driven sampling methods could be the key to unlocking the full power of CS applied to MRI.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hyperspectral X-ray Denoising: Model-Based and Data-Driven Solutions.\n \n \n \n \n\n\n \n Bonettini, N.; Paracchini, M.; Bestagini, P.; Marcon, M.; and Tubaro, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"HyperspectralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903151,\n  author = {N. Bonettini and M. Paracchini and P. Bestagini and M. Marcon and S. Tubaro},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Hyperspectral X-ray Denoising: Model-Based and Data-Driven Solutions},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper we deal with the problem of hyperspectral X-Ray image denoising. In particular, we compare a classical model-based Wiener filter solution with a data-driven methodology based on a Convolutional Autoencoder. A challenging aspect is related to the specific kind of 2D signal we are processing: it presents mixed-dimension information, since the vertical axis corresponds to the pixel position while, on the abscissa, there are the different wavelengths associated with the acquired X-Ray spectrum. The goal is to approximate the denoising function using a learning-from-data approach and to verify its capability to emulate the Wiener filter using a much less demanding approach in terms of signal and noise statistical knowledge. We show that, after training, the CNN is able to properly restore the 2D signal with results very close to the Wiener filter, honouring the proper signal shape.},\n  keywords = {image denoising;learning (artificial intelligence);statistical analysis;Wiener filters;data-driven solutions;convolutional autoencoder;pixels position;learning-from-data approach;noise statistical knowledge;hyperspectral X-ray image denoising;model-based Wiener filter solution;Convolution;Two dimensional displays;Noise reduction;Hyperspectral imaging;X-ray imaging;Image denoising;Shape;Hyperspectral Imaging;Image Denoising;Convolutional Autoencoder;Machine Vision},\n  doi = {10.23919/EUSIPCO.2019.8903151},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533635.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we deal with the problem of hyperspectral X-Ray image denoising. In particular, we compare a classical model-based Wiener filter solution with a data-driven methodology based on a Convolutional Autoencoder. A challenging aspect is related to the specific kind of 2D signal we are processing: it presents mixed-dimension information, since the vertical axis corresponds to the pixel position while, on the abscissa, there are the different wavelengths associated with the acquired X-Ray spectrum. The goal is to approximate the denoising function using a learning-from-data approach and to verify its capability to emulate the Wiener filter using a much less demanding approach in terms of signal and noise statistical knowledge. We show that, after training, the CNN is able to properly restore the 2D signal with results very close to the Wiener filter, honouring the proper signal shape.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reliable Demosaicing Detection for Image Forensics.\n \n \n \n \n\n\n \n Bammey, Q.; v. Gioi, R. G.; and Morel, J. -.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ReliablePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903152,\n  author = {Q. Bammey and R. G. v. Gioi and J. -M. Morel},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Reliable Demosaicing Detection for Image Forensics},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Visually plausible image forgeries are easy to create even without particular knowledge or skills. However, most forgeries unknowingly alter the underlying statistics of an image. In particular, demosaicing artefacts created by the camera are usually destroyed or modified when the image is tampered with. Most of the literature focuses on detecting where these traces are destroyed, and generally does so in a way that still requires visual interpretation. We introduce an a contrario method which detects global demosaicing parameters, and then checks for regions of the image which are inconsistent with these parameters. Detections are guaranteed in the form of a number of false alarms (NFA), which enables the user to control the false positive rate. Such a guarantee is a useful complement to existing methods, and enables inclusion into fully automatic image authentication processes. 
The source code and an online demo are provided with the article.},\n  keywords = {image forensics;image segmentation;reliability;reliable demosaicing detection;image forensics;statistical analysis;image authentication processes;visually plausible image forgeries;camera;global demosaicing parameter detection;number of false alarm formation;NFA formation;source code;Forgery;Interpolation;Image color analysis;Signal processing algorithms;Reliability;Cameras;Estimation;image forgery;forgery detection;forgery;CFA interpolation;CFA;color filter array;demosaicing;demosaicking;filter estimation;linear estimation;a contrario;tampering;artefact detection;Bayer matrix},\n  doi = {10.23919/EUSIPCO.2019.8903152},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533984.pdf},\n}\n\n
\n
\n\n\n
\n Visually plausible image forgeries are easy to create even without particular knowledge or skills. However, most forgeries unknowingly alter the underlying statistics of an image. In particular, demosaicing artefacts created by the camera are usually destroyed or modified when the image is tampered with. Most of the literature focuses on detecting where these traces are destroyed, and generally does so in a way that still requires visual interpretation. We introduce an a contrario method which detects global demosaicing parameters, and then checks for regions of the image which are inconsistent with these parameters. Detections are guaranteed in the form of a number of false alarms (NFA), which enables the user to control the false positive rate. Such a guarantee is a useful complement to existing methods, and enables inclusion into fully automatic image authentication processes. The source code and an online demo are provided with the article.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Validation of Baseline Wander Removal and Isoelectric Correction in Electrocardiograms Using Clustering.\n \n \n \n \n\n\n \n Le, K.; Eftestøl, T.; Engan, K.; Kleiven, Ø.; and Ørn, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ValidationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903153,\n  author = {K. Le and T. Eftestøl and K. Engan and Ø. Kleiven and S. Ørn},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Validation of Baseline Wander Removal and Isoelectric Correction in Electrocardiograms Using Clustering},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The removal of baseline wander from the electrocardiogram does not always correct the isoelectric level of the signal, that is, force the reference points for amplitude measurements to 0 volts. In this work, new parameters and a modified clustering method are proposed to find the isoelectric bias, which is the amplitude difference between the reference point and 0 volts prior to the isoelectric correction. Validation with the old parameters and a previous iteration of the clustering method is also performed on the QT database from PhysioNet. Both methods are viable for finding the isoelectric bias in the PQ-segment. Three assessment criteria are used. The first criterion (A) is based on the location of the isoelectric bias with respect to the PQ-segment. The second criterion (B) is based on the absolute difference between the isoelectric bias and the median amplitude of the PQ-segment. 
The third criterion (C) is the mean and standard deviation of the absolute difference between the isoelectric bias and the median amplitude of the PQ-segment. The best results are obtained with the newly proposed method, with a probability of true detection (PD) of 99.62 pct (A), a PD of 98.40 pct (B), a mean of 6.28 microvolts (C), and a standard deviation of 12.03 microvolts (C), respectively, for the criteria mentioned above.},\n  keywords = {electrocardiography;interference (signal);statistical analysis;baseline wander removal;isoelectric correction;electrocardiograms clustering;signal isoelectric level;amplitude measurements reference points;isoelectric bias;QT database;isoelectric bias-median amplitude difference;standard deviation;Annotations;Databases;Manuals;Standards;Europe;Signal processing;Clustering methods;Baseline wander;Isoelectric correction},\n  doi = {10.23919/EUSIPCO.2019.8903153},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529931.pdf},\n}\n\n
\n
\n\n\n
\n The removal of baseline wander from the electrocardiogram does not always correct the isoelectric level of the signal, that is, force the reference points for amplitude measurements to 0 volts. In this work, new parameters and a modified clustering method are proposed to find the isoelectric bias, which is the amplitude difference between the reference point and 0 volts prior to the isoelectric correction. Validation with the old parameters and a previous iteration of the clustering method is also performed on the QT database from PhysioNet. Both methods are viable for finding the isoelectric bias in the PQ-segment. Three assessment criteria are used. The first criterion (A) is based on the location of the isoelectric bias with respect to the PQ-segment. The second criterion (B) is based on the absolute difference between the isoelectric bias and the median amplitude of the PQ-segment. The third criterion (C) is the mean and standard deviation of the absolute difference between the isoelectric bias and the median amplitude of the PQ-segment. The best results are obtained with the newly proposed method, with a probability of true detection (PD) of 99.62 pct (A), a PD of 98.40 pct (B), a mean of 6.28 microvolts (C), and a standard deviation of 12.03 microvolts (C), respectively, for the criteria mentioned above.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n FISTA: achieving a rate of convergence proportional to k-3 for small/medium values of k.\n \n \n \n\n\n \n Silva, G.; and Rodriguez, P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903154,\n  author = {G. Silva and P. Rodriguez},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {FISTA: achieving a rate of convergence proportional to k-3 for small/medium values of k},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The fast iterative shrinkage-thresholding algorithm (FISTA) is a widely used procedure for minimizing the sum of two convex functions, such that one has an L-Lipschitz continuous gradient and the other is possibly nonsmooth. While FISTA's theoretical rate of convergence (RoC) is proportional to 1/(α_k t_k^2), and it is related to (i) its extragradient rule / inertial sequence, which depends on the sequence t_k, and (ii) the step-size α_k, which estimates L, its worst-case complexity results in O(k^-2) since, originally, (i) by construction t_k ≥ (k+1)/2, and (ii) the condition α_k ≥ α_{k+1} was imposed. Attempts to improve FISTA's RoC include alternative inertial sequences, and intertwining the selection of the inertial sequence and the step-size. In this paper, we show that if a bounded and non-decreasing step-size sequence (α_k ≤ α_{k+1}, decoupled from the inertial sequence) can be generated via some adaptive scheme, then FISTA can achieve a RoC proportional to k^-3 for the indexes where the step-size exhibits an approximate linear growth, with the default O(k^-2) behavior when the step-size's bound is reached. 
Furthermore, such an exceptional step-size sequence can be easily generated, and it indeed boosts FISTA's practical performance.},\n  keywords = {approximation theory;computational complexity;convergence of numerical methods;gradient methods;iterative methods;minimisation;worst-case complexity results;FISTA's RoC;inertial sequences;RoC proportional;exceptional step-size sequence;fast iterative shrinkage-thresholding algorithm;convex functions;L-Lipschitz continuous gradient;FISTA's practical performance;approximate linear growth;Convergence;Radio frequency;Search problems;Europe;Signal processing;Gradient methods;FISTA;step-size;convolutional sparse representations.},\n  doi = {10.23919/EUSIPCO.2019.8903154},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The fast iterative shrinkage-thresholding algorithm (FISTA) is a widely used procedure for minimizing the sum of two convex functions, such that one has an L-Lipschitz continuous gradient and the other is possibly nonsmooth. While FISTA's theoretical rate of convergence (RoC) is proportional to 1/(α_k t_k^2), and it is related to (i) its extragradient rule / inertial sequence, which depends on the sequence t_k, and (ii) the step-size α_k, which estimates L, its worst-case complexity results in O(k^-2) since, originally, (i) by construction t_k ≥ (k+1)/2, and (ii) the condition α_k ≥ α_{k+1} was imposed. Attempts to improve FISTA's RoC include alternative inertial sequences, and intertwining the selection of the inertial sequence and the step-size. In this paper, we show that if a bounded and non-decreasing step-size sequence (α_k ≤ α_{k+1}, decoupled from the inertial sequence) can be generated via some adaptive scheme, then FISTA can achieve a RoC proportional to k^-3 for the indexes where the step-size exhibits an approximate linear growth, with the default O(k^-2) behavior when the step-size's bound is reached. Furthermore, such an exceptional step-size sequence can be easily generated, and it indeed boosts FISTA's practical performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cost-Aware Dual Prediction Scheme for Reducing Transmissions at IoT Sensor Nodes.\n \n \n \n \n\n\n \n Håkansson, V. W.; Venkategowda, N. K. D.; Kraemer, F. A.; and Werner, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Cost-AwarePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903156,\n  author = {V. W. Håkansson and N. K. D. Venkategowda and F. A. Kraemer and S. Werner},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Cost-Aware Dual Prediction Scheme for Reducing Transmissions at IoT Sensor Nodes},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper develops a method for deciding when to update the prediction model or transmit a set of measurements from the sensor to the fusion centre (FC) to achieve minimal data transmission in a dual prediction scheme (DPS). The proposed method chooses a transmission strategy that results in the lowest expected future transmission cost among a given set of strategies. In a practical IoT setting, statistical information about the measurements might be limited. Hence, without assuming any distribution for the measurements, the proposed method estimates the transmission cost for each strategy through bootstrap data, where associated model residuals are resampled using the maximum entropy bootstrap algorithm, which preserves several stochastic properties of the empirical distribution. Numerical results with simulated and real-world data show that the proposed method results in a significant reduction in the transmitted data.},\n  keywords = {entropy;Internet of Things;statistical analysis;cost-aware dual prediction scheme;IoT sensor;fusion centre;minimal data transmission;practical IoT setting;statistical information;bootstrap data;maximum entropy bootstrap algorithm;simulated world data;transmitted data;Predictive models;Time measurement;Data models;Trajectory;Adaptation models;Measurement uncertainty;Input variables},\n  doi = {10.23919/EUSIPCO.2019.8903156},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570534096.pdf},\n}\n\n
\n
\n\n\n
\n This paper develops a method for deciding when to update the prediction model or transmit a set of measurements from the sensor to the fusion centre (FC) to achieve minimal data transmission in a dual prediction scheme (DPS). The proposed method chooses a transmission strategy that results in the lowest expected future transmission cost among a given set of strategies. In a practical IoT setting, statistical information about the measurements might be limited. Hence, without assuming any distribution for the measurements, the proposed method estimates the transmission cost for each strategy through bootstrap data, where associated model residuals are resampled using the maximum entropy bootstrap algorithm, which preserves several stochastic properties of the empirical distribution. Numerical results with simulated and real-world data show that the proposed method results in a significant reduction in the transmitted data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Temporal Dependency Model for Rate-Distortion Optimization in Video Coding.\n \n \n \n \n\n\n \n Han, J.; Wilkins, P.; Xu, Y.; and Bankoski, J.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-4, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903158,\n  author = {J. Han and P. Wilkins and Y. Xu and J. Bankoski},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Temporal Dependency Model for Rate-Distortion Optimization in Video Coding},\n  year = {2019},\n  pages = {1-4},\n  abstract = {Video codecs rely heavily on motion compensated prediction to achieve compression efficiency. The predictive scheme creates temporal dependency across frames, i.e., the quantization distortion in a current block may propagate through motion compensated prediction and affect the coding efficiency of blocks in subsequent frames. The ability to capture such dependency can potentially improve the rate-distortion optimization for coding performance gains. Prior research builds block-based motion trajectories and uses the correlations between source pixel blocks in the same motion trajectory to estimate the distortion propagation model. This work is premised on the realization that the distortion propagation is also largely related to the quantization effect. A novel temporal dependency model that accounts for both block correlation and the quantization effect is proposed. 
It is experimentally shown to provide considerable compression gains over the existing competitors.},\n  keywords = {data compression;motion compensation;motion estimation;video codecs;video coding;rate-distortion optimization;video coding;video codec;motion compensated prediction;compression efficiency;predictive scheme;quantization distortion;current block;subsequent frames;block-based motion trajectories;source pixel blocks;motion trajectory;distortion propagation model;quantization effect;novel temporal dependency model;block correlation;considerable compression gains;Distortion;Quantization (signal);Encoding;Trajectory;Technological innovation;Rate-distortion;Optimization;motion compensated prediction;rate-distortion optimization;temporal dependency;video compression},\n  doi = {10.23919/EUSIPCO.2019.8903158},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533484.pdf},\n}\n\n
\n
\n\n\n
\n Video codecs rely heavily on motion compensated prediction to achieve compression efficiency. The predictive scheme creates temporal dependency across frames, i.e., the quantization distortion in a current block may propagate through motion compensated prediction and affect the coding efficiency of blocks in subsequent frames. The ability to capture such dependency can potentially improve the rate-distortion optimization for coding performance gains. Prior research builds block-based motion trajectories and uses the correlations between source pixel blocks in the same motion trajectory to estimate the distortion propagation model. This work is premised on the realization that the distortion propagation is also largely related to the quantization effect. A novel temporal dependency model that accounts for both block correlation and the quantization effect is proposed. It is experimentally shown to provide considerable compression gains over the existing competitors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Offline Noise Reduction Using Optimal Mass Transport Induced Covariance Interpolation.\n \n \n \n \n\n\n \n Elvander, F.; Ali, R.; Jakobsson, A.; and v. Waterschoot, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OfflinePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903159,\n  author = {F. Elvander and R. Ali and A. Jakobsson and T. v. Waterschoot},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Offline Noise Reduction Using Optimal Mass Transport Induced Covariance Interpolation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this work, we propose to utilize a recently developed covariance matrix interpolation technique in order to improve noise reduction in multi-microphone setups in the presence of a moving, localized noise source. Based on the concept of optimal mass transport, the proposed method induces matrix interpolants implying smooth spatial displacement of the noise source, allowing for physically reasonable reconstructions of the noise source trajectory. As this trajectory is constructed so as to connect two observed, or estimated, covariance matrices, the proposed method is suggested for offline applications. The performance of the proposed method is demonstrated using simulations of a speech enhancement scenario.},\n  keywords = {covariance matrices;interpolation;microphones;speech enhancement;offline noise reduction;optimal mass transport;multimicrophone setups;physically reasonable reconstructions;covariance matrix interpolation technique;speech enhancement;Covariance matrices;Interpolation;Noise reduction;Noise measurement;Speech enhancement;Microphones;Europe;Noise reduction;speech enhancement;optimal mass transport;covariance interpolation},\n  doi = {10.23919/EUSIPCO.2019.8903159},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528019.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we propose to utilize a recently developed covariance matrix interpolation technique in order to improve noise reduction in multi-microphone setups in the presence of a moving, localized noise source. Based on the concept of optimal mass transport, the proposed method induces matrix interpolants implying smooth spatial displacement of the noise source, allowing for physically reasonable reconstructions of the noise source trajectory. As this trajectory is constructed so as to connect two observed, or estimated, covariance matrices, the proposed method is suggested for offline applications. The performance of the proposed method is demonstrated using simulations of a speech enhancement scenario.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Learning Methods for RSSI-based Geolocation: A Comparative Study.\n \n \n \n \n\n\n \n Elgui, K.; Bianchi, P.; Portier, F.; and Isson, O.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"LearningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903160,\n  author = {K. Elgui and P. Bianchi and F. Portier and O. Isson},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Learning Methods for RSSI-based Geolocation: A Comparative Study},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we investigate machine learning approaches addressing the problem of geolocation. First, we review some classical learning methods to build a radio map. In particular, these methods are split into two categories, which we refer to as likelihood-based methods and fingerprinting methods. Then, we provide a novel geolocation approach in each of these two categories. The first proposed technique relies on a semi-parametric Nadaraya-Watson estimator of the likelihood, followed by a maximum a posteriori (MAP) estimator of the object's position. The second technique consists of learning a proper metric on the dataset, constructed by means of a gradient boosting regressor; a k-nearest neighbor algorithm is then used to estimate the position. Finally, all the proposed methods are compared on a data set originating from the Sigfox network. 
The experiments show the interest of the proposed methods, both in terms of location estimation performance and of the ability to build radio maps.},\n  keywords = {estimation theory;geographic information systems;indoor radio;learning (artificial intelligence);maximum likelihood estimation;nonlinear estimation;regression analysis;spatiotemporal phenomena;UHF radio propagation;novel geolocation approach;semiparametric Nadaraya-Watson estimator;posteriori estimator;location estimation performance;radio map;RSSI-based geolocation;machine learning approaches;classical learning methods;likelihood-based methods;fingerprinting methods;Geology;Measurement;Standards;Task analysis;Europe;Received signal strength indicator;LPWA Network;localization;maximum likelihood;metric learning},\n  doi = {10.23919/EUSIPCO.2019.8903160},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529263.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we investigate machine learning approaches addressing the problem of geolocation. First, we review some classical learning methods to build a radio map. In particular, these methods are split into two categories, which we refer to as likelihood-based methods and fingerprinting methods. Then, we provide a novel geolocation approach in each of these two categories. The first proposed technique relies on a semi-parametric Nadaraya-Watson estimator of the likelihood, followed by a maximum a posteriori (MAP) estimator of the object's position. The second technique consists of learning a proper metric on the dataset, constructed by means of a gradient boosting regressor; a k-nearest neighbor algorithm is then used to estimate the position. Finally, all the proposed methods are compared on a data set originating from the Sigfox network. The experiments show the interest of the proposed methods, both in terms of location estimation performance and of the ability to build radio maps.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Estimation of a Low-Rank Probability-Tensor from Sample Sub-Tensors via Joint Factorization Minimizing the Kullback-Leibler Divergence.\n \n \n \n \n\n\n \n Yeredor, A.; and Haardt, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EstimationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903161,\n  author = {A. Yeredor and M. Haardt},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimation of a Low-Rank Probability-Tensor from Sample Sub-Tensors via Joint Factorization Minimizing the Kullback-Leibler Divergence},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Recently there has been a growing interest in the estimation of the Probability Mass Function (PMF) of discrete random vectors (RVs) from partial observations thereof (namely when observed realizations of the RV are limited to random subsets of its elements). It was shown that under a low-rank assumption on the PMF tensor (and some additional mild conditions), the full tensor can be recovered, e.g., by applying an approximate coupled factorization to empirical estimates of all joint PMFs of subgroups of fixed cardinality larger than two (e.g., triplets). The coupled factorization is based on a Least Squares (LS) fit to the empirically estimated lower-order sub-tensors. In this work we take a different approach by trying to fit the coupled factorization to estimated sub-tensors in the sense of minimizing the Kullback-Leibler divergence (KLD) between the estimated and inferred tensors. 
We explain why the KLD-based fitting is better-suited than LS-based fitting for the problem of PMF estimation, propose an associated minimization approach and demonstrate some advantages over LS-based fitting in this context using simulation results.},\n  keywords = {least squares approximations;minimisation;probability;tensors;vectors;PMF tensor;lower-order sub-tensors;Kullback-Leibler divergence;PMF estimation;minimization approach;probability mass function;discrete random vectors;low-rank assumption;KLD-based fitting;LS-based fitting;low-rank probability-tensor;least squares fit;Tensors;Minimization;Signal processing;Europe;Maximum likelihood estimation;Loading;Low-Rank Tensor Factorization;Probability Mass Function (PMF);Approximate Coupled Factorization;Kullback-Leibler Divergence (KLD);Nonnegative Tensor Factorization;Canonical Polyadic Decomposition (CPD)},\n  doi = {10.23919/EUSIPCO.2019.8903161},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533520.pdf},\n}\n\n
\n
\n\n\n
\n Recently there has been a growing interest in the estimation of the Probability Mass Function (PMF) of discrete random vectors (RVs) from partial observations thereof (namely when observed realizations of the RV are limited to random subsets of its elements). It was shown that under a low-rank assumption on the PMF tensor (and some additional mild conditions), the full tensor can be recovered, e.g., by applying an approximate coupled factorization to empirical estimates of all joint PMFs of subgroups of fixed cardinality larger than two (e.g., triplets). The coupled factorization is based on a Least Squares (LS) fit to the empirically estimated lower-order sub-tensors. In this work we take a different approach by trying to fit the coupled factorization to estimated sub-tensors in the sense of minimizing the Kullback-Leibler divergence (KLD) between the estimated and inferred tensors. We explain why the KLD-based fitting is better-suited than LS-based fitting for the problem of PMF estimation, propose an associated minimization approach and demonstrate some advantages over LS-based fitting in this context using simulation results.\n
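A minimal sketch of the KLD-fitting criterion on a toy rank-one pairwise PMF (the "sub-tensor" of the abstract). The alphabet sizes and sample count are arbitrary; for rank one the KLD-optimal product fit has a closed form (the empirical marginals), which higher ranks replace with iterative coupled updates.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy rank-1 joint PMF of two discrete variables: P[i, j] = a[i] * b[j]
a, b = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(5))
P = np.outer(a, b)

# empirical pairwise sub-tensor estimated from samples
samples = rng.choice(20, size=5000, p=P.ravel())
P_hat = np.bincount(samples, minlength=20).reshape(4, 5) / 5000.0

def kld(P, Q, eps=1e-12):
    # Kullback-Leibler divergence D(P || Q) between two PMF tensors
    P, Q = P + eps, Q + eps
    return float(np.sum(P * np.log(P / Q)))

# KLD-optimal rank-1 (product) fit to the empirical sub-tensor: the marginals
Q = np.outer(P_hat.sum(axis=1), P_hat.sum(axis=0))

fit_kld = kld(P_hat, Q)
uniform_kld = kld(P_hat, np.full((4, 5), 1 / 20))
```

The fitted product tensor sits far closer to the empirical sub-tensor, in KLD, than a uniform baseline does; the paper's minimisation extends this fit jointly over many sub-tensors.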
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Globally Optimal TIN Strategies with Time-Sharing in the MISO Interference Channel.\n \n \n \n \n\n\n \n Hellings, C.; Matthiesen, B.; Jorswieck, E. A.; and Utschick, W.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"GloballyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903162,\n  author = {C. Hellings and B. Matthiesen and E. A. Jorswieck and W. Utschick},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Globally Optimal TIN Strategies with Time-Sharing in the MISO Interference Channel},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The capacity region of the two-user multiple-input single-output (MISO) interference channel is an open problem, and various achievable rate regions have been discussed in the literature. In this paper, we assume that the transmit signals are Gaussian and that the receivers treat interference as noise (TIN), i.e., we focus on the TIN rate region with Gaussian inputs. Our aim is to compute the rate region boundary for the case of proper Gaussian signaling with time-sharing, i.e., the data rates and required transmit powers may be averaged over several transmit strategies. To this end, we apply methods from convex optimization (in particular Lagrange duality and the cutting plane algorithm), and propose the novel mixed monotonic programming (MMP) framework to treat the arising nonconvex subproblems. 
The obtained TIN rate region with proper Gaussian signals and time-sharing is significantly larger than previously computed TIN rate regions with proper Gaussian signals, and can even outperform TIN strategies with improper signaling.},\n  keywords = {array signal processing;broadcast channels;channel capacity;concave programming;convex programming;Gaussian channels;Gaussian processes;radiofrequency interference;telecommunication signalling;wireless channels;MISO interference channel;capacity region;multiple-input single-output interference channel;achievable rate regions;transmit signals;TIN rate region;Gaussian inputs;rate region boundary;time-sharing;data rates;required transmit powers;transmit strategies;proper Gaussian signals;globally optimal TIN strategies;Tin;Signal processing algorithms;Optimization;Upper bound;MISO communication;Interference channels;Programming;Improper signaling;interference channel;Lagrange duality;monotonic optimization;time-sharing},\n  doi = {10.23919/EUSIPCO.2019.8903162},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533914.pdf},\n}\n\n
\n
\n\n\n
\n The capacity region of the two-user multiple-input single-output (MISO) interference channel is an open problem, and various achievable rate regions have been discussed in the literature. In this paper, we assume that the transmit signals are Gaussian and that the receivers treat interference as noise (TIN), i.e., we focus on the TIN rate region with Gaussian inputs. Our aim is to compute the rate region boundary for the case of proper Gaussian signaling with time-sharing, i.e., the data rates and required transmit powers may be averaged over several transmit strategies. To this end, we apply methods from convex optimization (in particular Lagrange duality and the cutting plane algorithm), and propose the novel mixed monotonic programming (MMP) framework to treat the arising nonconvex subproblems. The obtained TIN rate region with proper Gaussian signals and time-sharing is significantly larger than previously computed TIN rate regions with proper Gaussian signals, and can even outperform TIN strategies with improper signaling.\n
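The quantities being optimised here can be made concrete: with fixed beamformers, each user's TIN rate is log2(1 + SINR) with the interference folded into the noise term, and a time-sharing point averages the rates (and powers) of several such strategies. The random channels and the two strategies below are hypothetical; the paper's contribution is finding *globally optimal* strategies, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical 2-user MISO interference channel: h[j][k] is the channel from
# transmitter j to receiver k (2 transmit antennas); sigma2 is the noise power
h = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
sigma2 = 1.0

def tin_rates(w):
    # per-user rate when receivers treat interference as noise (TIN),
    # for proper Gaussian inputs and fixed beamformers w[0], w[1]
    r = []
    for k in range(2):
        j = 1 - k
        sig = abs(np.vdot(h[k][k], w[k])) ** 2
        intf = abs(np.vdot(h[j][k], w[j])) ** 2
        r.append(np.log2(1.0 + sig / (sigma2 + intf)))
    return np.array(r)

# two pure strategies: both users transmit (MRT beamformers at unit power),
# or user 2 stays silent; a 50/50 time-sharing point averages their rates
w_both = [h[k][k] / np.linalg.norm(h[k][k]) for k in range(2)]
w_solo = [w_both[0], np.zeros(2, dtype=complex)]
r_ts = 0.5 * tin_rates(w_both) + 0.5 * tin_rates(w_solo)
```

Time-sharing strictly improves user 1's average rate over the all-active strategy here, since half the time user 1 sees no interference — exactly the kind of rate-region boundary point the paper characterises.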
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cardiac Motion Estimation Using Convolutional Sparse Coding.\n \n \n \n \n\n\n \n Diaz, N.; Basarab, A.; Tourneret, J.-Y.; and Fuentes, H. A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"CardiacPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903163,\n  author = {N. Diaz and A. Basarab and J. -Y. Tourneret and H. A. Fuentes},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Cardiac Motion Estimation Using Convolutional Sparse Coding},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper studies a new motion estimation method based on convolutional sparse coding. The motion estimation problem is formulated as the minimization of a cost function composed of a data fidelity term, a spatial smoothness constraint, and a regularization based on convolution sparse coding. We study the potential interest of using a convolutional dictionary instead of a standard dictionary using specific examples. Moreover, the proposed method is evaluated in terms of motion estimation accuracy and compared with state-of-the-art algorithms, showing its interest for cardiac motion estimation.},\n  keywords = {image representation;medical image processing;motion estimation;convolutional sparse coding;convolutional dictionary;cardiac motion estimation;cost function minimization;data fidelity;spatial smoothness constraint;Dictionaries;Motion estimation;Convolution;Convolutional codes;Machine learning;Signal processing algorithms;Encoding;Ultrasound imaging;cardiac motion estimation;Convolutional dictionary;sparse representation},\n  doi = {10.23919/EUSIPCO.2019.8903163},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533651.pdf},\n}\n\n
\n
\n\n\n
\n This paper studies a new motion estimation method based on convolutional sparse coding. The motion estimation problem is formulated as the minimization of a cost function composed of a data fidelity term, a spatial smoothness constraint, and a regularization based on convolutional sparse coding. Using specific examples, we study the potential benefit of a convolutional dictionary over a standard dictionary. Moreover, the proposed method is evaluated in terms of motion estimation accuracy and compared with state-of-the-art algorithms, showing its merit for cardiac motion estimation.\n
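A hedged sketch of the kind of cost function this abstract describes: a data-fidelity term, a spatial smoothness penalty on the motion field, and a convolutional sparse-coding regulariser tying the field to dictionary filters convolved with sparse coefficient maps. The weighting constants, filter shapes, and the exact way the terms combine are illustrative guesses, not the paper's formulation.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def conv2d(x, k):
    # 'same'-size 2-D convolution via FFT
    s = (x.shape[0] + k.shape[0] - 1, x.shape[1] + k.shape[1] - 1)
    y = np.real(ifft2(fft2(x, s) * fft2(k, s)))
    r0, r1 = (k.shape[0] - 1) // 2, (k.shape[1] - 1) // 2
    return y[r0:r0 + x.shape[0], r1:r1 + x.shape[1]]

def cost(u, i1, i2_warp, filters, codes, lam=0.1, mu=0.1, gamma=0.05):
    # data fidelity between the warped frame and the reference frame
    data = np.sum((i2_warp - i1) ** 2)
    # spatial smoothness of the motion field u
    gy, gx = np.gradient(u)
    smooth = np.sum(gy ** 2 + gx ** 2)
    # convolutional sparse-coding regulariser: u should be close to a sum of
    # dictionary filters convolved with sparse coefficient maps
    recon = sum(conv2d(z, d) for d, z in zip(filters, codes))
    csc = np.sum((u - recon) ** 2) + gamma * sum(np.abs(z).sum() for z in codes)
    return data + lam * smooth + mu * csc

# sanity check: zero motion, identical frames, and all-zero codes cost nothing
i1 = np.ones((8, 8))
k0 = np.ones((3, 3)) / 9.0
c0 = cost(np.zeros((8, 8)), i1, i1, [k0], [np.zeros((8, 8))])
```

Minimising such a cost alternates between updating the motion field and the sparse codes; the convolutional dictionary lets a few small filters explain structured motion over the whole image.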
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Non-Destructive Prediction of Pork Meat Degradation using a Stacked Autoencoder Classifier on Hyperspectral Images.\n \n \n \n \n\n\n \n Gallo, B. B.; de Almeida , S. J. M.; Bermudez, J. C. M.; Chen, J.; and Richard, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Non-DestructivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903164,\n  author = {B. B. Gallo and S. J. M. {de Almeida} and J. C. M. Bermudez and J. Chen and C. Richard},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Non-Destructive Prediction of Pork Meat Degradation using a Stacked Autoencoder Classifier on Hyperspectral Images},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This work presents initial results on a multitemporal hyperspectral image analysis method to evaluate the time degradation of pork meat. The proposed method is inexpensive and practically non-destructive. The hyperspectral data is analyzed and the relevant information is reduced to the information in only three wavelengths. The analysis is performed by a binary classifier composed by two stacked autoencoders and a softmax output layer. The use of autoencoders reduces tenfold the dimension of the input space. The proposed classifier has led to 97.2% of correct decisions, which indicates the great potential of the methodology to monitor the safety of meat.},\n  keywords = {food products;food safety;hyperspectral imaging;image classification;learning (artificial intelligence);neural nets;nondestructive testing;product quality;production engineering computing;multitemporal hyperspectral image analysis method;binary classifier;softmax output layer;nondestructive prediction;pork meat degradation;stacked autoencoder classifier;safety monitoring;product quality;machine learning;Training;Hyperspectral imaging;Cost function;Degradation;Neural networks;Hypercubes;Hyperspectral imaging;meat quality assessment;machine learning;neural network},\n  doi = {10.23919/EUSIPCO.2019.8903164},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533885.pdf},\n}\n\n
\n
\n\n\n
\n This work presents initial results on a multitemporal hyperspectral image analysis method for evaluating the time degradation of pork meat. The proposed method is inexpensive and practically non-destructive. The hyperspectral data are analyzed and the relevant information is reduced to only three wavelengths. The analysis is performed by a binary classifier composed of two stacked autoencoders and a softmax output layer. The use of autoencoders reduces the dimension of the input space tenfold. The proposed classifier achieved 97.2% correct decisions, which indicates the great potential of the methodology for monitoring meat safety.\n
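The pipeline — stacked autoencoders compressing the spectra roughly tenfold, then a logistic/softmax output layer — can be sketched in plain numpy. The two-class toy "spectra", layer sizes (30 → 10 → 3), learning rates, and training lengths below are invented for illustration and are not the paper's architecture or data.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy stand-in for per-pixel spectra: two classes, 30 spectral bands
X = np.vstack([rng.normal(0.2, 0.1, (200, 30)), rng.normal(0.8, 0.1, (200, 30))])
y = np.array([0] * 200 + [1] * 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, hidden, epochs=300, lr=0.1):
    # one sigmoid autoencoder layer trained on reconstruction MSE by plain
    # gradient descent; "stacking" = feeding its codes into the next layer
    d = X.shape[1]
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)          # encode
        Xr = H @ W2 + b2                  # linear decode
        E = Xr - X
        dH = (E @ W2.T) * H * (1.0 - H)
        W2 -= lr * H.T @ E / len(X); b2 -= lr * E.mean(0)
        W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(0)
    return lambda Z: sigmoid(Z @ W1 + b1)

enc1 = train_autoencoder(X, 10)           # 30 -> 10
enc2 = train_autoencoder(enc1(X), 3)      # 10 -> 3: tenfold reduction overall
H2 = enc2(enc1(X))

# logistic output layer on the 3-D codes (binary stand-in for softmax)
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = sigmoid(H2 @ w + b)
    w -= 0.5 * H2.T @ (p - y) / len(y); b -= 0.5 * np.mean(p - y)
acc = np.mean((sigmoid(H2 @ w + b) > 0.5) == y)
```

On such cleanly separable toy spectra the compressed 3-D codes retain the class structure and the output layer classifies almost perfectly; the real hyperspectral cubes are of course far harder.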
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Unmixing-Based Change Detection Approach for Multiresolution Remote Sensing Images.\n \n \n \n \n\n\n \n Benkouider, Y. K.; and Karoui, M. S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903165,\n  author = {Y. K. Benkouider and M. S. Karoui},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {An Unmixing-Based Change Detection Approach for Multiresolution Remote Sensing Images},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, a new method is proposed to reveal changes using multiresolution and multitemporal remote sensing images. The considered method consists to provide two latent 2D variables with the same resolutions using the linear spectral unmixing concept based on nonnegative matrix factorization. Changes will then be discernible by comparing these two 2D variables or by applying classical change detection method, such as change vector analysis, on these variables. Simulation results presented show that the proposed approach is efficient for identifying changes occurred between two optical images with different spatial and spectral resolutions.},\n  keywords = {geophysical image processing;image classification;image resolution;remote sensing;multiresolution remote sensing images;multitemporal remote sensing images;considered method;linear spectral unmixing concept;nonnegative matrix factorization;classical change detection method;change vector analysis;optical images;spatial resolutions;spectral resolutions;Spatial resolution;Hyperspectral imaging;Two dimensional displays;Cost function;Change detection;multiresolution and multitemporal remote sensing images;linear spectral unmixing;nonnegative matrix factorization},\n  doi = {10.23919/EUSIPCO.2019.8903165},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533696.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a new method is proposed to reveal changes using multiresolution and multitemporal remote sensing images. The method provides two latent 2D variables with the same resolution, using the linear spectral unmixing concept based on nonnegative matrix factorization. Changes then become discernible by comparing these two 2D variables or by applying a classical change detection method, such as change vector analysis, to these variables. The presented simulation results show that the proposed approach efficiently identifies changes that occurred between two optical images with different spatial and spectral resolutions.\n
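A minimal numpy sketch of the unmix-then-compare idea: abundances are estimated for each date by multiplicative (NMF-style) updates with a shared — here assumed known — endmember matrix, and a change-vector magnitude is thresholded per pixel. The endmember spectra, abundance values, and the 0.3 threshold are synthetic choices, not the paper's data or algorithm details.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic endmembers (6 bands, 2 materials) and abundances for two dates;
# the first 20 of 100 pixels change material between the dates
E = np.abs(rng.normal(0.0, 1.0, (6, 2))) + 0.1
A1 = np.tile([[0.8], [0.2]], (1, 100))
A2 = A1.copy()
A2[:, :20] = [[0.1], [0.9]]
Y1, Y2 = E @ A1, E @ A2

def nmf_abundances(Y, E, iters=500):
    # multiplicative NMF update (Lee-Seung rule) with the endmembers E fixed
    A = np.full((E.shape[1], Y.shape[1]), 0.5)
    for _ in range(iters):
        A *= (E.T @ Y) / (E.T @ E @ A + 1e-12)
    return A

change = np.linalg.norm(nmf_abundances(Y1, E) - nmf_abundances(Y2, E), axis=0)
detected = change > 0.3   # change-vector magnitude thresholding
```

Because both dates are unmixed against the same endmembers, unchanged pixels produce identical abundance estimates and only the genuinely changed pixels cross the threshold.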
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A Privacy-Preserving Asynchronous Averaging Algorithm based on Shamir’s Secret Sharing.\n \n \n \n\n\n \n Li, Q.; and Christensen, M. G.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@InProceedings{8903166,\n  author = {Q. Li and M. G. Christensen},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Privacy-Preserving Asynchronous Averaging Algorithm based on Shamir’s Secret Sharing},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Average consensus is widely used in information fusion, and it requires information exchange between a set of nodes to achieve an agreement. Unfortunately, the information exchange may disclose the individual's private information, and this raises serious concerns for individual privacy in some applications. Hence, a privacy-preserving asynchronous averaging algorithm is proposed in this paper to maintain the privacy of each individual using Shamir's secret sharing scheme, as known from secure multiparty computation. The proposed algorithm is based on a lightweight cryptographic technique. It gives identical accuracy solution as the non-privacy concerned algorithm and achieves perfect security in clique-based networks without the use of a trusted third party. In each iteration of the algorithm, each individual's privacy in the selected clique is protected under a passive attack where the adversary controls some of the nodes. 
Finally, it also achieves robustness of up to one third transmission error.},\n  keywords = {computational complexity;cryptographic protocols;data privacy;telecommunication security;information exchange;individual privacy;privacy-preserving asynchronous averaging algorithm;Shamir's secret sharing scheme;nonprivacy concerned algorithm;clique-based networks;average consensus;information fusion;Signal processing algorithms;Privacy;Information exchange;Differential privacy;Encryption;Protocols;Distributed average consensus;Shamir’s secret sharing;privacy-preserving;active attack;secure multiparty computation;Distributed average consensus, Shamir;s secret sharing, privacy-preserving, active attack, secure multiparty computation},\n  doi = {10.23919/EUSIPCO.2019.8903166},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Average consensus is widely used in information fusion, and it requires information exchange between a set of nodes to reach an agreement. Unfortunately, this information exchange may disclose an individual's private information, which raises serious privacy concerns in some applications. Hence, a privacy-preserving asynchronous averaging algorithm is proposed in this paper to maintain the privacy of each individual using Shamir's secret sharing scheme, as known from secure multiparty computation. The proposed algorithm is based on a lightweight cryptographic technique. It attains the same accuracy as the non-privacy-preserving algorithm and achieves perfect security in clique-based networks without the use of a trusted third party. In each iteration of the algorithm, each individual's privacy in the selected clique is protected under a passive attack in which the adversary controls some of the nodes. Finally, it is also robust to transmission errors affecting up to one third of the transmissions.\n
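The core primitive — Shamir-sharing the private values so that only their sum is ever reconstructed — can be sketched directly. The three-node clique, the threshold t = 2, and the Mersenne prime below are illustrative choices; a real deployment adds the asynchronous protocol machinery the paper describes.

```python
import random

P = 2 ** 31 - 1   # prime field size (illustrative choice)

def share(secret, t, nodes):
    # Shamir sharing: a random degree-(t-1) polynomial with constant term
    # `secret`, evaluated at each node's id over the field GF(P)
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
            for x in nodes}

def reconstruct(shares):
    # Lagrange interpolation at 0 recovers the polynomial's constant term
    total = 0
    for xi, yi in shares.items():
        li = 1
        for xj in shares:
            if xj != xi:
                li = li * xj % P * pow(xj - xi, -1, P) % P
        total = (total + yi * li) % P
    return total

# three nodes average private values without revealing them: each node shares
# its value, the nodes sum the shares they hold, and only the SUM of the
# secrets is ever reconstructed
random.seed(0)
nodes = [1, 2, 3]
secrets = {1: 10, 2: 20, 3: 60}
all_shares = {n: share(secrets[n], t=2, nodes=nodes) for n in nodes}
summed = {x: sum(all_shares[n][x] for n in nodes) % P for x in nodes}
total = reconstruct(summed)
average = total / len(nodes)
```

No single node's secret is recoverable from fewer than t shares, yet the reconstructed sum (and hence the average) is exact — the "identical accuracy" property claimed in the abstract. (`pow(x, -1, P)` for the modular inverse requires Python 3.8+.)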
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the Recursions of Robust COMET Algorithm for Convexly Structured Shape Matrix.\n \n \n \n \n\n\n \n Mériaux, B.; Ren, C.; Breloy, A.; El Korso, M. N.; Forster, P.; and Ovarlez, J.-P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903167,\n  author = {B. Mériaux and C. Ren and A. Breloy and M. N. {El Korso} and P. Forste and J. . -P. Ovarlez},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {On the Recursions of Robust COMET Algorithm for Convexly Structured Shape Matrix},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper addresses robust estimation of structured shape (normalized covariance) matrices. Shape matrices most often own a particular structure depending on the application of interest and taking this structure into account improves estimation accuracy. In the framework of robust estimation, we introduce a recursive robust shape matrix estimation technique based on Tyler'sM -estimate for convexly structured shape matrices. We prove that the proposed estimator is consistent, asymptotically efficient and Gaussian distributed and we notice that it reaches its asymptotic regime faster as the number of recursions increases. Finally, in the particular wide spreaded case of Hermitian persymmetric structure, we study the convergence of the recursions of the proposed algorithm.},\n  keywords = {covariance matrices;estimation theory;Gaussian distribution;convexly structured shape matrices;recursions increases;Hermitian persymmetric structure;robust COMET algorithm;convexly structured shape matrix;normalized covariance;recursive robust shape matrix estimation technique;Gaussian distribution;Shape;Estimation;Covariance matrices;Signal processing algorithms;Convergence;Symmetric matrices;Europe;Robust shape matrix estimation;elliptical distributions;Tyler's M-estimator;structured estimation},\n  doi = {10.23919/EUSIPCO.2019.8903167},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532611.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses robust estimation of structured shape (normalized covariance) matrices. Shape matrices most often possess a particular structure depending on the application of interest, and taking this structure into account improves estimation accuracy. In the framework of robust estimation, we introduce a recursive robust shape matrix estimation technique based on Tyler's M-estimate for convexly structured shape matrices. We prove that the proposed estimator is consistent, asymptotically efficient and Gaussian distributed, and we observe that it reaches its asymptotic regime faster as the number of recursions increases. Finally, in the particularly widespread case of Hermitian persymmetric structure, we study the convergence of the recursions of the proposed algorithm.\n
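For reference, the unstructured fixed-point recursion behind Tyler's M-estimator looks as follows; the paper's contribution is imposing a convex structure constraint inside a COMET-type fit, which this plain sketch (with invented dimensions and a Student-t sample) does not include.

```python
import numpy as np

rng = np.random.default_rng(6)

def tyler(X, iters=50):
    # fixed-point iteration for Tyler's M-estimator of the shape matrix:
    # S <- (m/n) * sum_i x_i x_i^T / (x_i^T S^-1 x_i), trace-normalised
    n, m = X.shape
    S = np.eye(m)
    for _ in range(iters):
        Si = np.linalg.inv(S)
        w = m / np.einsum('ij,jk,ik->i', X, Si, X)   # m / (x^T S^-1 x)
        S = (X * w[:, None]).T @ X / n
        S *= m / np.trace(S)                          # fix the scale ambiguity
    return S

# heavy-tailed samples with a known shape matrix (Student-t via scale mixing)
true_S = np.array([[2.0, 0.5], [0.5, 1.0]])
true_S *= 2 / np.trace(true_S)
L = np.linalg.cholesky(true_S)
g = rng.normal(size=(5000, 2)) @ L.T
tau = rng.chisquare(3, size=5000) / 3
X = g / np.sqrt(tau)[:, None]     # t-distributed, same shape matrix

S_hat = tyler(X)
err = np.linalg.norm(S_hat - true_S)
```

Despite the heavy tails, the estimate lands close to the true (trace-normalised) shape matrix; the structured recursion of the paper plugs a convex projection step into this loop.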
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n IEST: Interpolation-Enhanced Shearlet Transform for Light Field Reconstruction Using Adaptive Separable Convolution.\n \n \n \n \n\n\n \n Gao, Y.; Koch, R.; Bregovic, R.; and Gotchev, A.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"IEST:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903168,\n  author = {Y. Gao and R. Koch and R. Bregovic and A. Gotchev},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {IEST: Interpolation-Enhanced Shearlet Transform for Light Field Reconstruction Using Adaptive Separable Convolution},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The performance of a light field reconstruction algorithm is typically affected by the disparity range of the input Sparsely-Sampled Light Field (SSLF). This paper finds that (i) one of the state-of-the-art video frame interpolation methods, i.e. adaptive Separable Convolution (SepConv), is especially effective for the light field reconstruction on a SSLF with a small disparity range (<; 10 pixels); (ii) one of the state-of-the-art light field reconstruction methods, i.e. Shearlet Transformation (ST), is especially effective in reconstructing a light field from a SSLF with a moderate disparity range (10-20 pixels) or a large disparity range (> 20 pixels). Therefore, to make full use of both methods to solve the challenging light field reconstruction problem on SSLFs with moderate and large disparity ranges, a novel method, referred to as Interpolation-Enhanced Shearlet Transform (IEST), is proposed by incorporating these two approaches in a coarse-to-fine manner. Specifically, ST is employed to give a coarse estimation for the target light field, which is then refined by SepConv to improve the reconstruction quality of parallax views involving small disparity ranges. 
Experimental results show that IEST outperforms the other state-of-the-art light field reconstruction methods on nine challenging horizontalparallax evaluation SSLF datasets of different real-world scenes with moderate and large disparity ranges.},\n  keywords = {image reconstruction;image sampling;interpolation;transforms;video signal processing;IEST;Interpolation-Enhanced Shearlet Transform;adaptive separable convolution;light field reconstruction algorithm;sparsely-sampled light field;SSLF;moderate disparity range;reconstruction quality;light field reconstruction methods;video frame interpolation methods;light field reconstruction problem;Image reconstruction;Interpolation;Convolution;Transforms;Kernel;Cameras;Reconstruction algorithms;Light Field Reconstruction;Parallax View Generation;Adaptive Separable Convolution;Shearlet Transform;Interpolation-Enhanced Shearlet Transform},\n  doi = {10.23919/EUSIPCO.2019.8903168},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533732.pdf},\n}\n\n
\n
\n\n\n
\n The performance of a light field reconstruction algorithm is typically affected by the disparity range of the input Sparsely-Sampled Light Field (SSLF). This paper finds that (i) one of the state-of-the-art video frame interpolation methods, i.e. adaptive Separable Convolution (SepConv), is especially effective for light field reconstruction on an SSLF with a small disparity range (< 10 pixels); (ii) one of the state-of-the-art light field reconstruction methods, i.e. Shearlet Transform (ST), is especially effective in reconstructing a light field from an SSLF with a moderate disparity range (10-20 pixels) or a large disparity range (> 20 pixels). Therefore, to make full use of both methods to solve the challenging light field reconstruction problem on SSLFs with moderate and large disparity ranges, a novel method, referred to as Interpolation-Enhanced Shearlet Transform (IEST), is proposed by incorporating these two approaches in a coarse-to-fine manner. Specifically, ST is employed to give a coarse estimate of the target light field, which is then refined by SepConv to improve the reconstruction quality of parallax views involving small disparity ranges. Experimental results show that IEST outperforms the other state-of-the-art light field reconstruction methods on nine challenging horizontal-parallax evaluation SSLF datasets of different real-world scenes with moderate and large disparity ranges.\n
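The SepConv synthesis step used in the refinement stage can be sketched independently of the CNN that normally predicts its kernels: for each output pixel, a vertical and a horizontal 1-D kernel are applied to a patch of each input view and the two results are summed. The kernels below are hand-picked deltas (so the sketch reduces to frame averaging); in SepConv proper they are estimated per pixel by a network.

```python
import numpy as np

def sepconv_interp(f1, f2, kv1, kh1, kv2, kh2):
    # adaptive-separable-convolution synthesis of an intermediate view:
    # out[y,x] = kv1^T * patch1 * kh1 + kv2^T * patch2 * kh2
    r = len(kv1) // 2
    h, w = f1.shape
    out = np.zeros((h, w))
    p1 = np.pad(f1, r, mode='edge')
    p2 = np.pad(f2, r, mode='edge')
    for y in range(h):
        for x in range(w):
            patch1 = p1[y:y + 2 * r + 1, x:x + 2 * r + 1]
            patch2 = p2[y:y + 2 * r + 1, x:x + 2 * r + 1]
            out[y, x] = kv1 @ patch1 @ kh1 + kv2 @ patch2 @ kh2
    return out

# with centred delta kernels weighted 1/2 each, the output is the view average
f1 = np.arange(16.0).reshape(4, 4)
f2 = f1 + 4.0
delta = np.zeros(3); delta[1] = 1.0
mid = sepconv_interp(f1, f2, 0.5 * delta, delta, 0.5 * delta, delta)
```

Shifted (rather than centred) delta kernels realise per-pixel displacements, which is how learned separable kernels encode small disparities.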
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Deep L1-PCA of Time-Variant Data with Application to Brain Connectivity Measurements.\n \n \n \n\n\n \n Orrú, G.; Cattai, T.; Colonnese, S.; Scarano, G.; Fallani, F. D. V.; Markopoulos, P.; and Pados, D.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903169,\n  author = {G. Orrú and T. Cattai and S. Colonnese and G. Scarano and F. D. V. Fallani and P. Markopoulos and D. Pados},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep L1-PCA of Time-Variant Data with Application to Brain Connectivity Measurements},\n  year = {2019},\n  pages = {1-5},\n  abstract = {L1-Principal Component Analysis (L1-PCA) is a powerful computational tool to identify relevant components in data affected by noise, outliers, partial disruption and so on. Relevant efforts have been made to adapt its powerful summarization capacity to time variant data, e.g. in tracking the evolution of L1-PCA components. Here, we analyze a layered version of L1-PCA, to which we refer to as Deep L1-PCA. Deep L1-PCA is obtained by recursive application of two stages: estimation of L1-PCA basis and extraction of the first rank projector. The Deep L1-PCA is applied to repeated EEG connectivity measures and it proves relevant for identifying outliers, changes, and stable components. Moreover, at each layer, an in-depth analysis of the mean square error between the data applied at the input layer and the output projector is provided. The Deep L1-PCA allows to cope with outliers of different temporal extent as well as to extract the relevant common component at a reduced computational cost.},\n  keywords = {biomedical measurement;electroencephalography;principal component analysis;mean square error;EEG connectivity;brain connectivity measurements;deep L1-PCA;L1-principal component analysis;L1-PCA components;time-variant data;Coherence;Europe;Signal processing;Electroencephalography;Frequency measurement;Principal component analysis;L1-norm;PCA;outliers;first rank component extraction;tensor-based representation of biomedical data},\n  doi = {10.23919/EUSIPCO.2019.8903169},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n L1-Principal Component Analysis (L1-PCA) is a powerful computational tool to identify relevant components in data affected by noise, outliers, partial disruption and so on. Relevant efforts have been made to adapt its powerful summarization capacity to time-variant data, e.g. in tracking the evolution of L1-PCA components. Here, we analyze a layered version of L1-PCA, which we refer to as Deep L1-PCA. Deep L1-PCA is obtained by recursive application of two stages: estimation of the L1-PCA basis and extraction of the first rank projector. Deep L1-PCA is applied to repeated EEG connectivity measures, and it proves relevant for identifying outliers, changes, and stable components. Moreover, at each layer, an in-depth analysis of the mean square error between the data applied at the input layer and the output projector is provided. Deep L1-PCA copes with outliers of different temporal extent and extracts the relevant common component at a reduced computational cost.\n
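The L1-PCA ingredient can be illustrated with the exact rank-one solution via exhaustive binary search (tractable only at toy sample counts): the L1-optimal direction is w = Xb/||Xb|| for the sign vector b maximising ||Xb||₂. The data set with a single gross outlier is invented to show the robustness contrast with the ordinary (L2) principal direction.

```python
import numpy as np
from itertools import product

def l1_pca_rank1(X):
    # exact rank-1 L1-PCA: search all sign vectors b in {-1,+1}^N and return
    # w = X b / ||X b|| for the b maximising ||X b||_2 (exponential in N)
    d, N = X.shape
    best, w = -1.0, None
    for signs in product([-1.0, 1.0], repeat=N):
        b = np.array(signs)
        v = X @ b
        nv = np.linalg.norm(v)
        if nv > best:
            best, w = nv, v / nv
    return w

# five clean samples along e1 plus one gross outlier along e2
u = np.array([1.0, 0.0])
X = np.column_stack([u * s for s in (3.0, -2.0, 2.5, -3.0, 2.0)]
                    + [np.array([0.0, 6.0])])
w1 = l1_pca_rank1(X)                 # L1 principal direction
w2 = np.linalg.svd(X)[0][:, 0]       # L2 principal direction
```

The outlier enters the L2 criterion quadratically and captures the top singular direction outright, while the L1 solution stays dominated by the clean e1 direction — the robustness that motivates stacking L1-PCA layers.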
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n LTE-based Wireless Channel Modeling on High-Speed Railway at 465MHz.\n \n \n \n \n\n\n \n Niu, Y.; Ding, J.; Fei, D.; Zhong, Z.; and Liu, Y.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"LTE-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903171,\n  author = {Y. Niu and J. Ding and D. Fei and Z. Zhong and Y. Liu},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {LTE-based Wireless Channel Modeling on High-Speed Railway at 465MHz},\n  year = {2019},\n  pages = {1-5},\n  abstract = {A practical and novel research of radio propagation channel modeling is mainly presented in this paper. This is the first comprehensive result of path loss, large-scale fading and small-scale fading under long-term evolution frequency division duplex railway testing network (LTE-R) whose downlink center frequency is set at 465MHz, bandwidth is 5MHz. Besides, a detailed description of the measurement system as well as the measurement scenario is introduced. Our measurement campaigns were carried out under multiple-input multiple-output with top train speed of 340kmffi. Based on the measurement data, the empirical path loss model is studied. Probability density functions of large-scale fading and small-scale fading are discussed. Channel characteristics such as path loss attenuation exponent, standard deviation of shadowing and Rician K-factor are shown. 
The statistical channel models are established to characterize the high-speed railway (HSR) scenarios which can contribute to evaluating and significantly promoting the network performance.},\n  keywords = {cellular radio;frequency division multiplexing;Long Term Evolution;probability;radiofrequency interference;radiowave propagation;railway communication;Rician channels;wireless channel modeling;radio propagation channel modeling;large-scale fading;small-scale fading;long-term evolution frequency division duplex railway testing network;downlink center frequency;measurement system;measurement scenario;measurement campaigns;train speed;measurement data;empirical path loss model;probability density functions;channel characteristics;statistical channel models;high-speed railway scenarios;frequency 465.0 MHz;frequency 5.0 MHz;Fading channels;Antenna measurements;Wireless communication;Rail transportation;Loss measurement;Probability density function;Rician channels;LTE-R;measurement;channel modeling;characteristic;HSR},\n  doi = {10.23919/EUSIPCO.2019.8903171},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531144.pdf},\n}\n\n
\n
\n\n\n
\n A practical study of radio propagation channel modeling is presented in this paper. It provides the first comprehensive results on path loss, large-scale fading, and small-scale fading in a long-term evolution frequency division duplex railway testing network (LTE-R) whose downlink center frequency is 465 MHz with a bandwidth of 5 MHz. In addition, a detailed description of the measurement system and the measurement scenario is given. Our measurement campaigns were carried out in a multiple-input multiple-output configuration at a top train speed of 340 km/h. Based on the measurement data, the empirical path loss model is studied. Probability density functions of large-scale fading and small-scale fading are discussed. Channel characteristics such as the path loss attenuation exponent, the standard deviation of shadowing, and the Rician K-factor are presented. The statistical channel models are established to characterize high-speed railway (HSR) scenarios, which can contribute to evaluating and significantly improving network performance.\n
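The empirical path-loss modelling step mentioned above typically amounts to a least-squares fit of the log-distance model PL(d) = PL0 + 10·n·log10(d/d0) with lognormal shadowing; the following sketch is our own illustration of that fit, not the authors' code:

```python
import numpy as np

def fit_log_distance_model(d, pl_db, d0=1.0):
    # Least-squares fit of PL(d) = PL0 + 10*n*log10(d/d0) + X_sigma,
    # where X_sigma is zero-mean lognormal shadowing (in dB).
    x = 10.0 * np.log10(d / d0)
    A = np.column_stack([np.ones_like(x), x])
    (pl0, n), *_ = np.linalg.lstsq(A, pl_db, rcond=None)
    shadowing = pl_db - (pl0 + n * x)
    return pl0, n, shadowing.std()
```

Here `n` estimates the path-loss attenuation exponent and the residual standard deviation estimates the shadowing spread, two of the channel characteristics reported in the paper.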
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n HVS based perceptual pre-processing for video coding.\n \n \n \n \n\n\n \n Bhat, M.; Thiesse, J. -.; and Callet, P. L.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"HVSPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903172,\n  author = {M. Bhat and J. -M. Thiesse and P. L. Callet},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {HVS based perceptual pre-processing for video coding},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper proposes a perceptually optimized preprocessing technique using state of the art Human Visual System (HVS) model suitable for video compression to reduce bitrate at the same quality. Visual masking models to accurately account HVS has been considered. Frequencies which are visually indistinguishable and need not be encoded are removed within visibility threshold. The scheme is optimized for multiple viewing distances to consider real-world scenarios. Extensive subjective and objective evaluation has been conducted to evaluate proposed pre-processing. Investigation shows significant bit-rate savings compared to a professional real time HEVC video encoder.},\n  keywords = {data compression;video coding;visual masking models;perceptually optimized preprocessing technique;HVS based perceptual pre-processing;multiple viewing distances;video compression;human visual system model;video coding;professional real time HEVC video encoder;Visualization;Encoding;Channel models;Streaming media;Sensitivity;Adaptation models;Visual systems;Pre-processing;Contrast sensitivity function;Subjective test;Human Visual system;Visual masking},\n  doi = {10.23919/EUSIPCO.2019.8903172},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533716.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a perceptually optimized pre-processing technique, based on a state-of-the-art Human Visual System (HVS) model, suitable for video compression to reduce bitrate at the same quality. Visual masking models that accurately account for the HVS have been considered. Frequencies that are visually indistinguishable, and therefore need not be encoded, are removed within the visibility threshold. The scheme is optimized for multiple viewing distances to cover real-world scenarios. Extensive subjective and objective evaluation has been conducted to assess the proposed pre-processing. The investigation shows significant bit-rate savings compared to a professional real-time HEVC video encoder.\n
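A crude sketch of the "drop visually indistinguishable frequencies" idea: zero out block-DCT coefficients below a visibility threshold. A real implementation would derive per-frequency thresholds from a contrast sensitivity function and masking model; the single scalar threshold here is a placeholder of our own:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows = frequencies).
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def perceptual_prefilter(img, thresh):
    # Zero 8x8 DCT coefficients whose magnitude falls below a visibility
    # threshold (a crude stand-in for a real CSF/masking model).
    D = dct_matrix()
    out = img.copy().astype(float)
    for i in range(0, img.shape[0] - 7, 8):
        for j in range(0, img.shape[1] - 7, 8):
            blk = D @ out[i:i+8, j:j+8] @ D.T
            blk[np.abs(blk) < thresh] = 0.0   # visually negligible frequencies
            out[i:i+8, j:j+8] = D.T @ blk @ D
    return out
```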
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Localized No-Reference Blurriness Measure for Omnidirectional Images and Video.\n \n \n \n \n\n\n \n Fassold, H.; and Wechtitsch, S.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903173,\n  author = {H. Fassold and S. Wechtitsch},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Localized No-Reference Blurriness Measure for Omnidirectional Images and Video},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Blurriness is a defect commonly occurring in conventional video but also in omnidirectional video. In this work, we propose a novel no-reference blurriness measure for images captured with omnidirectional video cameras. These images present unique challenges for quality measures due to their size and due to the equirectangular projection which is commonly employed for them. We base upon a state of the art algorithm and adapt it for the specifics of omnidirectional images. Furthermore, we extend it with a coarse-scale blurriness map for measuring spatially varying blur. We present a novel ground truth dataset which was generated by adding spatially varying gaussian blur of different magnitude in a viewport-centric way. Experiments with the proposed algorithm on this dataset show a strong correlation of the localized blurriness measure with the ground truth.},\n  keywords = {feature extraction;image enhancement;image resolution;image restoration;solid modelling;video cameras;video signal processing;omnidirectional video cameras;quality measures;equirectangular projection;omnidirectional images;coarse-scale blurriness map;localized blurriness measure;localized no-reference blurriness measure;omnidirectional video;Image edge detection;Signal processing algorithms;Cameras;Distortion;Visualization;Distortion measurement;image quality measure;no-reference blur assessment;omnidirectional image;360°;video;VR},\n  doi = {10.23919/EUSIPCO.2019.8903173},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528039.pdf},\n}\n\n
\n
\n\n\n
\n Blurriness is a defect commonly occurring in conventional video, but also in omnidirectional video. In this work, we propose a novel no-reference blurriness measure for images captured with omnidirectional video cameras. These images present unique challenges for quality measures due to their size and the equirectangular projection commonly employed for them. We build upon a state-of-the-art algorithm and adapt it to the specifics of omnidirectional images. Furthermore, we extend it with a coarse-scale blurriness map for measuring spatially varying blur. We present a novel ground-truth dataset, generated by adding spatially varying Gaussian blur of different magnitudes in a viewport-centric way. Experiments with the proposed algorithm on this dataset show a strong correlation of the localized blurriness measure with the ground truth.\n
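The coarse-scale blurriness-map idea can be illustrated with a block-wise sharpness statistic; here we use the inverse of the local Laplacian variance (the block size and the 1/(1+v) mapping are our own choices, and a viewport-aware measure for equirectangular content would additionally reweight by latitude):

```python
import numpy as np

def blurriness_map(img, block=32):
    # Coarse blurriness map: high values = low local high-frequency energy.
    # 2-D Laplacian via finite differences (wrap-around borders).
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    H, W = img.shape
    h, w = H // block, W // block
    m = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            v = lap[i*block:(i+1)*block, j*block:(j+1)*block].var()
            m[i, j] = 1.0 / (1.0 + v)   # 1 = perfectly flat/blurry block
    return m
```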
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Perception of non-native phoneme contrasts in 8-13 months infants: tensor-based analysis of EEG signals.\n \n \n \n \n\n\n \n Aghabeig, M.; Bałaj, B.; Dreszer, J.; Lewandowska, M.; Milner, R.; Pawlaczyk, N.; Piotrowski, T.; Szmytke, M.; and Duch, W.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"PerceptionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903174,\n  author = {M. Aghabeig and B. Bałaj and J. Dreszer and M. Lewandowska and R. Milner and N. Pawlaczyk and T. Piotrowski and M. Szmytke and W. Duch},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Perception of non-native phoneme contrasts in 8-13 months infants: tensor-based analysis of EEG signals},\n  year = {2019},\n  pages = {1-5},\n  abstract = {Result of analysis of EEG responses of infants between 8-13 months of age to the syllable speech sounds are presented. We conducted an ERP experiment with an oddball paradigm consisting of two types of deviant stimuli (easy and hard) and standard stimulus. A nonnegative Tucker tensor decomposition (NTD) was used to characterize differences in processing of stimuli using a time-frequency-spatial (multi-domain) features. We extracted the multi-domain features for a reliable representation of the underlying infant brain activity to analyze the processing of standard and deviant stimuli. The extracted features reveal differences between standard and deviant stimuli and may be interpreted in terms of mismatch negativity (MMN) and acoustic change complex (ACC) evoked potentials. 
Moreover, these results serve as a proof-of-concept for application of tensor decomposition-based analyses for challenging infant EEG data.},\n  keywords = {auditory evoked potentials;electroencephalography;medical signal processing;neurophysiology;paediatrics;speech processing;tensors;ERP experiment;oddball paradigm;nonnegative Tucker tensor decomposition;multidomain features;tensor decomposition-based analyses;infant EEG data;nonnative phoneme;tensor-based analysis;syllable speech sounds;infant brain activity;time-frequency-spatial features;mismatch negativity;acoustic change complex evoked potentials;time 8.0 month to 13.0 month;Tensors;Standards;Electroencephalography;Feature extraction;Matrix decomposition;Time-frequency analysis;Electrodes;infant EEG;event-related potentials;time-frequency-spatial features;nonnegative Tucker tensor decomposition},\n  doi = {10.23919/EUSIPCO.2019.8903174},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570532687.pdf},\n}\n\n
\n
\n\n\n
\n Results of the analysis of EEG responses of infants between 8 and 13 months of age to syllable speech sounds are presented. We conducted an ERP experiment with an oddball paradigm consisting of two types of deviant stimuli (easy and hard) and a standard stimulus. A nonnegative Tucker tensor decomposition (NTD) was used to characterize differences in the processing of stimuli using time-frequency-spatial (multi-domain) features. We extracted the multi-domain features for a reliable representation of the underlying infant brain activity in order to analyze the processing of standard and deviant stimuli. The extracted features reveal differences between standard and deviant stimuli and may be interpreted in terms of mismatch negativity (MMN) and acoustic change complex (ACC) evoked potentials. Moreover, these results serve as a proof of concept for the application of tensor decomposition-based analyses to challenging infant EEG data.\n
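A compact illustration of a nonnegative Tucker decomposition for a 3-way (e.g. time x frequency x channel) array, using simple multiplicative updates; this is a generic textbook-style sketch, not the pipeline used in the paper:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding (C-order over the remaining modes).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def ntd(X, ranks, iters=300, eps=1e-9):
    # Nonnegative Tucker decomposition X ~ G x1 A x2 B x3 C
    # via NMF-style multiplicative updates on each factor and the core.
    rng = np.random.default_rng(0)
    (I, J, K), (R1, R2, R3) = X.shape, ranks
    A, B, C = rng.random((I, R1)), rng.random((J, R2)), rng.random((K, R3))
    G = rng.random((R1, R2, R3))
    for _ in range(iters):
        M = unfold(G, 0) @ np.kron(B, C).T        # X(1) ~ A M
        A *= (unfold(X, 0) @ M.T) / (A @ M @ M.T + eps)
        M = unfold(G, 1) @ np.kron(A, C).T        # X(2) ~ B M
        B *= (unfold(X, 1) @ M.T) / (B @ M @ M.T + eps)
        M = unfold(G, 2) @ np.kron(A, B).T        # X(3) ~ C M
        C *= (unfold(X, 2) @ M.T) / (C @ M @ M.T + eps)
        W = np.kron(A, np.kron(B, C))             # vec(X) ~ W vec(G)
        g = G.reshape(-1)
        g *= (W.T @ X.reshape(-1)) / (W.T @ W @ g + eps)
        G = g.reshape(R1, R2, R3)
    return G, A, B, C
```

The columns of the factors give the per-mode signatures (e.g. temporal, spectral, and spatial), and the core links them.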
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spatial and Hierarchical Riemannian Dimensionality Reduction and Dictionary Learning for Segmenting Multichannel Images.\n \n \n \n \n\n\n \n Fallah, F.; Armanious, K.; Yang, B.; and Bamberg, F.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SpatialPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903175,\n  author = {F. Fallah and K. Armanious and B. Yang and F. Bamberg},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Spatial and Hierarchical Riemannian Dimensionality Reduction and Dictionary Learning for Segmenting Multichannel Images},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In this paper, we proposed an automated method for segmenting objects of weak boundaries and similar intensities on volumetric multichannel images. This method relied on a multiresolution classifier that tackled class overlaps by using the Riemannian geometry of the RCDs of the multiscale patches of every multichannel image and reducing the dimensionality of these RCDs through a novel method that incorporated the intra-and inter-class neighborhoods of the RCDs in the Riemannian space and the spatial and hierarchical relationships between their corresponding patches. The reduced dimensional RCDs were then used to learn resolution-specific dictionaries for coding and classifications. To speed up the optimizations and to avoid convergence to local extrema, the dictionaries and the codes got initialized by a novel scheme that used the Riemannian geometry of the RCDs. 
This method was evaluated on the challenging task of segmenting cardiac adipose tissues on fat-water MR images.},\n  keywords = {biological tissues;biomedical MRI;cardiology;image classification;image coding;image resolution;image segmentation;medical image processing;cardiac adipose tissues;reduced dimensional RCDs;interclass neighborhoods;spatial Riemannian dimensionality reduction;hierarchical Riemannian dimensionality reduction;fat-water MR images;resolution-specific dictionaries;hierarchical relationships;spatial relationships;Riemannian space;multiscale patches;Riemannian geometry;class overlaps;multiresolution classifier;volumetric multichannel images;automated method;dictionary learning;Training;Image segmentation;Image resolution;Dictionaries;Manifolds;Dimensionality reduction;Feature extraction;Riemannian Manifolds;Nonlinear Dimensionality Reduction;Dictionary Learning;Locality Constrained Coding;Segmenting Multichannel Images},\n  doi = {10.23919/EUSIPCO.2019.8903175},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531329.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we proposed an automated method for segmenting objects with weak boundaries and similar intensities in volumetric multichannel images. This method relied on a multiresolution classifier that tackled class overlaps by using the Riemannian geometry of the RCDs of the multiscale patches of every multichannel image and by reducing the dimensionality of these RCDs through a novel method that incorporated the intra- and inter-class neighborhoods of the RCDs in the Riemannian space and the spatial and hierarchical relationships between their corresponding patches. The reduced-dimensional RCDs were then used to learn resolution-specific dictionaries for coding and classification. To speed up the optimizations and to avoid convergence to local extrema, the dictionaries and the codes were initialized by a novel scheme that used the Riemannian geometry of the RCDs. This method was evaluated on the challenging task of segmenting cardiac adipose tissues in fat-water MR images.\n
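For concreteness, the basic objects involved are region covariance descriptors (RCDs), i.e. symmetric positive-definite matrices, compared under a Riemannian metric; a minimal sketch using the log-Euclidean distance (the paper's actual geometry and regularization may differ, and the 1e-6 jitter is our own):

```python
import numpy as np

def region_covariance(patch_features):
    # RCD: covariance of per-pixel feature vectors (features x pixels),
    # regularized to stay positive definite.
    F = patch_features - patch_features.mean(axis=1, keepdims=True)
    return (F @ F.T) / (F.shape[1] - 1) + 1e-6 * np.eye(F.shape[0])

def logm_spd(S):
    # Matrix logarithm of an SPD matrix via its eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(S1, S2):
    # Log-Euclidean Riemannian distance between SPD matrices.
    return np.linalg.norm(logm_spd(S1) - logm_spd(S2))
```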
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-scale Aggregation of Phase Information for Complexity Reduction of CNN Based DOA Estimation.\n \n \n \n \n\n\n \n Chakrabarty, S.; and Habets, E. A. P.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-scalePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903176,\n  author = {S. Chakrabarty and E. A. P. Habets},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-scale Aggregation of Phase Information for Complexity Reduction of CNN Based DOA Estimation},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In a recent work on direction-of-arrival (DOA) estimation of multiple speakers with convolutional neural networks (CNNs), the phase component of short-time Fourier transform (STFT) coefficients of the microphone signal is given as input and small filters are used to learn the phase relations between neighboring microphones. Due to the chosen filter size, M -1 convolution layers are required to achieve the best performance for a microphone array with M microphones. For arrays with large number of microphones, this requirement leads to a high computational cost making the method practically infeasible. In this work, we propose to expand the receptive field of the filters to reduce the computational cost of our previously proposed method. To realize this expansion, we use systematic dilations of the filters in each of the convolution layers. Different systematic dilation strategies for a specific microphone array are explored. 
Experimental analysis of the different strategies, shows that an aggressive expansion strategy results in a considerable reduction in computational cost while a relatively gradual expansion of the receptive field exhibits the best DOA estimation performance along with reduction in the computational cost.},\n  keywords = {convolutional neural nets;direction-of-arrival estimation;filtering theory;Fourier transforms;microphone arrays;phase information;complexity reduction;DOA estimation;direction-of-arrival estimation;multiple speakers;convolutional neural networks;CNN;microphone signal;filter size;multiscale aggregation;systematic dilation strategies;short-time Fourier transform;Convolution;Direction-of-arrival estimation;Estimation;Microphone arrays;Computer architecture;Task analysis;CNN;source localization;DOA;multi-scale aggregation},\n  doi = {10.23919/EUSIPCO.2019.8903176},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533338.pdf},\n}\n\n
\n
\n\n\n
\n In a recent work on direction-of-arrival (DOA) estimation of multiple speakers with convolutional neural networks (CNNs), the phase component of the short-time Fourier transform (STFT) coefficients of the microphone signals is given as input, and small filters are used to learn the phase relations between neighboring microphones. Due to the chosen filter size, M-1 convolution layers are required to achieve the best performance for a microphone array with M microphones. For arrays with a large number of microphones, this requirement leads to a high computational cost, making the method practically infeasible. In this work, we propose to expand the receptive field of the filters to reduce the computational cost of our previously proposed method. To realize this expansion, we use systematic dilations of the filters in each of the convolution layers. Different systematic dilation strategies for a specific microphone array are explored. Experimental analysis of the different strategies shows that an aggressive expansion strategy yields a considerable reduction in computational cost, while a relatively gradual expansion of the receptive field exhibits the best DOA estimation performance along with a reduction in computational cost.\n
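The receptive-field arithmetic behind these dilation strategies is simple to check: for stride-1 stacked convolutions, each layer with kernel size k and dilation d adds d·(k-1) to the receptive field, so small filters with growing dilations can cover an M-microphone aperture in far fewer than M-1 layers. A quick sketch (our own illustration):

```python
def receptive_field(kernel_sizes, dilations):
    # Receptive field (in input positions) of stacked stride-1 conv layers.
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += d * (k - 1)
    return rf

# Undilated 2-tap filters: 7 layers needed to span 8 microphones.
print(receptive_field([2] * 7, [1] * 7))    # -> 8
# Dilations doubling per layer: 3 layers span the same aperture.
print(receptive_field([2] * 3, [1, 2, 4]))  # -> 8
```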
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Probabilistic Tensor Train Decomposition.\n \n \n \n \n\n\n \n Hinrich, J. L.; and Mørup, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"ProbabilisticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903177,\n  author = {J. L. Hinrich and M. Mørup},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Probabilistic Tensor Train Decomposition},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The tensor train decomposition (TTD) has become an attractive decomposition approach due to its ease of inference by use of the singular value decomposition and flexible yet compact representations enabling efficient computations and reduced memory usage using the TTD representation for further analyses. Unfortunately, the level of complexity to use and the order in which modes should be decomposed using the TTD is unclear. We advance TTD to a fully probabilistic TTD (PTTD) using variational Bayesian inference to account for parameter uncertainty and noise. In particular, we exploit that the PTTD enables model comparisons by use of the evidence lower bound (ELBO) of the variational approximation. On synthetic data with ground truth structure and a real 3-way fluorescence spectroscopy dataset, we demonstrate how the ELBO admits quantification of model specification not only in terms of numbers of components for each factor in the TTD, but also a suitable order of the modes in which the TTD should be employed. 
The proposed PTTD provides a principled framework for the characterization of model uncertainty, complexity, and model- and mode-order when compressing tensor data using the TTD.},\n  keywords = {approximation theory;Bayes methods;inference mechanisms;singular value decomposition;tensors;variational techniques;reduced memory usage;TTD representation;PTTD;variational Bayesian inference;tensor data;probabilistic tensor train decomposition;attractive decomposition approach;singular value decomposition;probabilistic TTD;Tensors;Probabilistic logic;Mathematical model;Bayes methods;Data models;Matrix decomposition;Computational modeling;Bayesian inference;tensor train decomposition;matrix product state;multi-modal data},\n  doi = {10.23919/EUSIPCO.2019.8903177},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530640.pdf},\n}\n\n
\n
\n\n\n
\n The tensor train decomposition (TTD) has become an attractive decomposition approach due to its ease of inference via the singular value decomposition and its flexible yet compact representations, which enable efficient computations and reduced memory usage when the TTD representation is used for further analyses. Unfortunately, the level of complexity to use, and the order in which modes should be decomposed using the TTD, are unclear. We advance the TTD to a fully probabilistic TTD (PTTD) using variational Bayesian inference to account for parameter uncertainty and noise. In particular, we exploit the fact that the PTTD enables model comparisons via the evidence lower bound (ELBO) of the variational approximation. On synthetic data with ground-truth structure and a real 3-way fluorescence spectroscopy dataset, we demonstrate how the ELBO admits quantification of model specification, not only in terms of the number of components for each factor in the TTD, but also a suitable order of the modes in which the TTD should be employed. The proposed PTTD provides a principled framework for the characterization of model uncertainty, complexity, and model- and mode-order when compressing tensor data using the TTD.\n
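For reference, the plain (non-probabilistic) TTD that the paper builds on can be computed by the standard TT-SVD sweep of sequential truncated SVDs; a compact sketch (illustrative, with a simple max-rank truncation rule of our own choosing):

```python
import numpy as np

def tt_svd(X, max_rank):
    # TT-SVD: sequential truncated SVDs yield one 3-way core per mode.
    dims = X.shape
    cores, r_prev = [], 1
    C = X.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]      # carry the remainder to the next mode
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    # Contract the train back into a full tensor.
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=(-1, 0))
    return T.reshape([c.shape[1] for c in cores])
```

With `max_rank` large enough, the reconstruction is exact; lowering it trades accuracy for compression.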
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Novel Particle Filter for High-Dimensional Systems Using Penalized Perturbations.\n \n \n \n \n\n\n \n El-Laham, Y.; Krayem, Z.; Maghakian, J.; and Bugallo, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903178,\n  author = {Y. El-Laham and Z. Krayem and J. Maghakian and M. Bugallo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {A Novel Particle Filter for High-Dimensional Systems Using Penalized Perturbations},\n  year = {2019},\n  pages = {1-5},\n  abstract = {In order to efficiently perform inference on high-dimensional nonlinear non-Gaussian state-space models using particle filtering, it is critical that particles are generated from the optimal proposal distribution. However, finding a closed-form to the optimal proposal proves to be difficult in practice, as many application problems do not satisfy the requirement of conjugate state and observation equations. In this paper, we overcome this challenge by designing a novel method that introduces conjugate artificial noise into the system and optimally perturbs the particles in a way that balances a bias-variance tradeoff. Our method is validated through extensive numerical simulations applied to a gene regulatory network problem, and results show better performance than that of state-of-the-art methods, especially in cases where the state noise is heavy-tailed.},\n  keywords = {nonlinear filters;particle filtering (numerical methods);state-space methods;particle filter;penalized perturbations;nonGaussian state-space models;particle filtering;optimal proposal distribution;observation equations;conjugate artificial noise;bias-variance tradeoff;gene regulatory network problem;Perturbation methods;Mathematical model;Covariance matrices;Proposals;Linear programming;State-space methods;State estimation},\n  doi = {10.23919/EUSIPCO.2019.8903178},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533892.pdf},\n}\n\n
\n
\n\n\n
\n In order to efficiently perform inference on high-dimensional nonlinear non-Gaussian state-space models using particle filtering, it is critical that particles are generated from the optimal proposal distribution. However, finding a closed form for the optimal proposal proves difficult in practice, as many application problems do not satisfy the requirement of conjugate state and observation equations. In this paper, we overcome this challenge by designing a novel method that introduces conjugate artificial noise into the system and optimally perturbs the particles in a way that balances a bias-variance tradeoff. Our method is validated through extensive numerical simulations applied to a gene regulatory network problem, and the results show better performance than state-of-the-art methods, especially in cases where the state noise is heavy-tailed.\n
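For context, the baseline that such methods improve upon is the bootstrap particle filter, which proposes from the transition prior instead of the (intractable) optimal proposal; a minimal sketch on a toy scalar model (the model and all parameters are our own, purely illustrative):

```python
import numpy as np

def bootstrap_pf(y, N=500, a=0.9, sq=1.0, sr=0.5, seed=0):
    # Bootstrap particle filter for x_t = a*x_{t-1} + v_t, y_t = x_t + w_t,
    # with v_t ~ N(0, sq^2) and w_t ~ N(0, sr^2).
    rng = np.random.default_rng(seed)
    x = rng.normal(0, 1, N)
    est = []
    for yt in y:
        x = a * x + rng.normal(0, sq, N)      # propagate (prior proposal)
        logw = -0.5 * ((yt - x) / sr) ** 2    # Gaussian likelihood (unnormalized)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est.append(w @ x)                     # posterior-mean estimate
        x = x[rng.choice(N, N, p=w)]          # multinomial resampling
    return np.array(est)
```

In high dimensions this prior proposal degenerates quickly, which is precisely the regime the penalized-perturbation approach targets.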
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficiency of the bio-inspired Leaky Integrate-and-Fire neuron for signal coding.\n \n \n \n \n\n\n \n Doutsi, E.; Fillatre, L.; and Antonini, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"EfficiencyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903179,\n  author = {E. Doutsi and L. Fillatre and M. Antonini},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficiency of the bio-inspired Leaky Integrate-and-Fire neuron for signal coding},\n  year = {2019},\n  pages = {1-5},\n  abstract = {The goal of this paper is to investigate whether purely neuro-mimetic architectures are more efficient for signal compression than architectures that combine neuroscience and state-of-the-art models. We are motivated to produce spikes, using the LIF model, in order to compress images. Seeking solutions to improve the efficiency of the LIF in terms of the memory cost, we compare two different quantization approaches; the Neuroinspired Quantization (NQ) and the Conventional Quantization (CQ). We present that when the LIF model and the NQ appear in the same architecture, the performance of the compression system is higher compared to an architecture that consists of the LIF model and the CQ. The main reason of this occurrence is the dynamic properties embedded in the neuro-mimetic models. As a consequence, we first study which are the dynamic properties of the recently released (NQ) which is an intuitive way of counting the number of spikes. Moreover, we show that some parameters of the NQ (i.e. the observation window and the resistance) strongly influence its behavior that ranges from non-uniform to uniform. 
As a result, the NQ is more flexible than the CQ when it is applied to real data while for the same bit rate it ensures higher reconstruction quality.},\n  keywords = {brain models;data compression;feedforward neural nets;image coding;image reconstruction;medical image processing;neurophysiology;quantisation (signal);signal coding;neuro-mimetic architectures;signal compression;combine neuroscience;LIF model;NQ;conventional quantization;CQ;compression system;dynamic properties;neuro-mimetic models;neuroinspired quantization;bio-inspired leaky integrate-and-fire neuron;Quantization (signal);Delays;Image reconstruction;Neurons;Computer architecture;Microsoft Windows;Neuro-inspired quantization;uniform scalar quantizer;Leaky Integrate-and-Fire (LIF);spikes;entropy},\n  doi = {10.23919/EUSIPCO.2019.8903179},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530160.pdf},\n}\n\n
\n
\n\n\n
\n The goal of this paper is to investigate whether purely neuro-mimetic architectures are more efficient for signal compression than architectures that combine neuroscience with state-of-the-art models. We are motivated to produce spikes, using the LIF model, in order to compress images. Seeking to improve the efficiency of the LIF in terms of memory cost, we compare two different quantization approaches: the Neuro-inspired Quantization (NQ) and the Conventional Quantization (CQ). We show that when the LIF model and the NQ appear in the same architecture, the performance of the compression system is higher than that of an architecture consisting of the LIF model and the CQ. The main reason for this is the dynamic properties embedded in the neuro-mimetic models. Consequently, we first study the dynamic properties of the recently released NQ, which is an intuitive way of counting the number of spikes. Moreover, we show that some parameters of the NQ (i.e., the observation window and the resistance) strongly influence its behavior, which ranges from non-uniform to uniform. As a result, the NQ is more flexible than the CQ when applied to real data, while for the same bit rate it ensures higher reconstruction quality.\n
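The LIF dynamics referred to above integrate the input with a leak and emit a spike whenever the membrane potential crosses a threshold; a minimal Euler-discretized sketch (all constants here are illustrative, not the paper's settings):

```python
def lif_spikes(current, dt=1e-3, tau=20e-3, R=1.0, v_th=1.0, v_reset=0.0):
    # Leaky Integrate-and-Fire: tau * dV/dt = -V + R*I(t);
    # emit a spike and reset whenever V crosses v_th.
    v, spikes = 0.0, []
    for t, i_t in enumerate(current):
        v += dt * (-v + R * i_t) / tau
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes
```

Stronger (or longer-observed) inputs produce more spikes, which is what makes the spike count usable as a quantized code.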
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Scheduling Moldable Parallel Streaming Tasks on Heterogeneous Platforms with Frequency Scaling.\n \n \n \n \n\n\n \n Litzinger, S.; Keller, J.; and Kessler, C.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"SchedulingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903180,\n  author = {S. Litzinger and J. Keller and C. Kessler},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Scheduling Moldable Parallel Streaming Tasks on Heterogeneous Platforms with Frequency Scaling},\n  year = {2019},\n  pages = {1-5},\n  abstract = {We extend static scheduling of parallelizable tasks to machines with multiple core types, taking differences in performance and power consumption due to task type into account. Next to energy minimization for given deadline, i.e. for given throughput requirement, we consider makespan minimization for given energy or average power budgets. We evaluate our approach by comparing schedules of synthetic task sets for big.LITTLE with other schedulers from literature. We achieve an improvement of up to 33%.},\n  keywords = {minimisation;power aware computing;power consumption;processor scheduling;scheduling moldable parallel streaming tasks;heterogeneous platforms;frequency scaling;static scheduling;parallelizable tasks;multiple core types;power consumption;energy minimization;synthetic task sets;power budgets;Task analysis;Processor scheduling;Power demand;Runtime;Multicore processing;Scheduling;Throughput;static scheduling;energy-efficient execution;streaming tasks;heterogeneous platform},\n  doi = {10.23919/EUSIPCO.2019.8903180},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570528910.pdf},\n}\n\n
\n
\n\n\n
\n We extend static scheduling of parallelizable tasks to machines with multiple core types, taking into account differences in performance and power consumption due to task type. In addition to energy minimization for a given deadline, i.e., for a given throughput requirement, we consider makespan minimization for a given energy or average power budget. We evaluate our approach by comparing schedules of synthetic task sets for big.LITTLE with those of other schedulers from the literature. We achieve an improvement of up to 33%.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n DEFACTO: Image and Face Manipulation Dataset.\n \n \n \n \n\n\n \n MAHFOUDI, G.; TAJINI, B.; RETRAINT, F.; MORAIN-NICOLIER, F.; DUGELAY, J. L.; and PIC, M.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"DEFACTO:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903181,\n  author = {G. MAHFOUDI and B. TAJINI and F. RETRAINT and F. MORAIN-NICOLIER and J. L. DUGELAY and M. PIC},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {DEFACTO: Image and Face Manipulation Dataset},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper presents a novel dataset for image and face manipulation detection and localization called DEFACTO. The dataset was automatically generated using Microsoft common object in context database (MSCOCO) to produce semantically meaningful forgeries. Four categories of forgeries have been generated. Splicing forgeries which consist of inserting an external element into an image, copy-move forgeries where an element within an image is duplicated, object removal forgeries where objects are removed from images and lastly morphing where two images are warped and blended together. Over 200000 images have been generated and each image is accompanied by several annotations allowing precise localization of the forgery and information about the tampering process.},\n  keywords = {face recognition;image forensics;image morphing;object detection;MSCOCO;splicing forgeries;copy-move forgeries;object removal forgeries;DEFACTO;face manipulation;image manipulation;Microsoft common object in context database;tampering process;Forgery;Splicing;Faces;Annotations;Image segmentation;Forensics;Image color analysis;Image Forensics;Copy-Move;Splicing;Inpainting;Object-Removal;Face Morphing;Face-Swapping},\n  doi = {10.23919/EUSIPCO.2019.8903181},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570533790.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents DEFACTO, a novel dataset for image and face manipulation detection and localization. The dataset was automatically generated from the Microsoft Common Objects in Context database (MSCOCO) to produce semantically meaningful forgeries. Four categories of forgeries have been generated: splicing forgeries, which consist of inserting an external element into an image; copy-move forgeries, where an element within an image is duplicated; object-removal forgeries, where objects are removed from images; and morphing, where two images are warped and blended together. Over 200,000 images have been generated, and each image is accompanied by several annotations allowing precise localization of the forgery and information about the tampering process.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Power-Efficient Secure Beamforming in Cognitive Satellite-Terrestrial Networks.\n \n \n \n \n\n\n \n Lu, W.; An, K.; Yan, X.; and Liang, T.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Power-EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903182,\n  author = {W. Lu and K. An and X. Yan and T. Liang},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Power-Efficient Secure Beamforming in Cognitive Satellite-Terrestrial Networks},\n  year = {2019},\n  pages = {1-5},\n  abstract = {This paper proposes a power-efficient secure beamforming (BF) algorithm for cognitive satellite-terrestrial wireless networks, where the satellite network termed as the primary network under the intercept of an eavesdropper shares the spectrum of the primary network with the terrestrial secondary network. Specifically, we propose a optimal BF scheme with the objective of minimizing the transmit power of the terrestrial base station (BS), while meeting the secrecy rate constraint of the primary user and the communication rate constraint of the secondary user. Then, we use constraint transformation and semidefinite relaxation (SDR) method to convert the nonconvex optimization problem into the convex optimization problem. Thus, the optimal BF weight vector can be solved by semidefinite program (SDP) method. Finally, the results of computer simulation verify the effectiveness and superiority of the proposed BF algorithm.},\n  keywords = {amplify and forward communication;array signal processing;cognitive radio;concave programming;convex programming;satellite communication;telecommunication security;vectors;cognitive satellite-terrestrial networks;power-efficient secure beamforming algorithm;satellite network;primary network;eavesdropper shares;terrestrial secondary network;optimal BF scheme;transmit power;terrestrial base station;secrecy rate constraint;primary user;communication rate constraint;secondary user;constraint transformation;semidefinite relaxation method;nonconvex optimization problem;convex optimization problem;optimal BF weight vector;BF algorithm;Satellite broadcasting;Optimization;Security;Receivers;Satellites;Wireless communication;Physical layer security;satellite communication;power-efficient;convex optimization;secrecy rate},\n  doi = {10.23919/EUSIPCO.2019.8903182},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570529499.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a power-efficient secure beamforming (BF) algorithm for cognitive satellite-terrestrial wireless networks, where the satellite network, termed the primary network and subject to the intercept of an eavesdropper, shares its spectrum with the terrestrial secondary network. Specifically, we propose an optimal BF scheme with the objective of minimizing the transmit power of the terrestrial base station (BS), while meeting the secrecy rate constraint of the primary user and the communication rate constraint of the secondary user. Then, we use constraint transformation and the semidefinite relaxation (SDR) method to convert the nonconvex optimization problem into a convex optimization problem. Thus, the optimal BF weight vector can be solved by the semidefinite programming (SDP) method. Finally, computer simulation results verify the effectiveness and superiority of the proposed BF algorithm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Perfusion-based Brain Connectivity: PASL vs pCASL.\n \n \n \n \n\n\n \n Blasi, B. D.; Barnes, A.; Storti, S. F.; Koepp, M.; Menegaz, G.; Vita, E. D.; and Galazzo, I. B.\n\n\n \n\n\n\n In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-4, Sep. 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Perfusion-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8903183,\n  author = {B. D. Blasi and A. Barnes and S. F. Storti and M. Koepp and G. Menegaz and E. D. Vita and I. B. Galazzo},\n  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},\n  title = {Perfusion-based Brain Connectivity: PASL vs pCASL},\n  year = {2019},\n  pages = {1-4},\n  abstract = {Arterial Spin Labelling (ASL) is a magnetic resonance imaging technique which provides a more direct measure of neural activity as compared to blood-oxygenation-level-dependent (BOLD) contrast. While it has been used for years for perfusion quantification, ASL has recently been adopted for functional connectivity (FC) analyses. However, the impact of the different ASL schemes on connectivity estimates remains to be fully investigated. In this work, pulsed and pseudo continuous ASL (PASL/pCASL) were compared in terms of cerebral blood flow (CBF) and FC measures. In line with literature, higher CBF and increased spatial signal-to-noise ratio were reported for pCASL, as compared to PASL. In terms of FC, pCASL was able to more reliably recover the main networks and showed higher correlations between brain areas. These preliminary results suggest pCASL to provide reliable and stable results, not only for CBF estimation but also for FC analyses.},\n  keywords = {biomedical MRI;blood;blood vessels;brain;haemodynamics;haemorheology;medical image processing;neurophysiology;perfusion-based brain connectivity;magnetic resonance imaging technique;pulsed arterial spin labelling;direct measure;neural activity;blood-oxygenation-level-dependent contrast;BOLD;perfusion quantification;functional connectivity analyses;connectivity estimates;pseudocontinuous ASL;spatial signal-to-noise ratio;brain areas;cerebral blood flow estimation;ASL schemes;ASL;PASL;pCASL;CBF;Functional Connectivity},\n  doi = {10.23919/EUSIPCO.2019.8903183},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570530212.pdf},\n}\n\n
\n
\n\n\n
\n Arterial Spin Labelling (ASL) is a magnetic resonance imaging technique which provides a more direct measure of neural activity than blood-oxygenation-level-dependent (BOLD) contrast. While it has been used for years for perfusion quantification, ASL has recently been adopted for functional connectivity (FC) analyses. However, the impact of the different ASL schemes on connectivity estimates remains to be fully investigated. In this work, pulsed and pseudo-continuous ASL (PASL/pCASL) were compared in terms of cerebral blood flow (CBF) and FC measures. In line with the literature, higher CBF and increased spatial signal-to-noise ratio were reported for pCASL compared to PASL. In terms of FC, pCASL more reliably recovered the main networks and showed higher correlations between brain areas. These preliminary results suggest that pCASL provides reliable and stable results, not only for CBF estimation but also for FC analyses.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n\n\n \n\n \n \n \n \n\n
\n"}; document.write(bibbase_data.data);