EUSIPCO 2016 proceedings bibliography, generated by bibbase.org from https://raw.githubusercontent.com/Roznn/EUSIPCO/main/eusipco2016url.bib.

2016 (491 papers)

Chen, B.; Qin, Z.; Zheng, N.; and Príncipe, J. C. Kernel adaptive filtering subject to equality function constraints. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1-5, Aug 2016.

@InProceedings{7760198,
  author = {B. Chen and Z. Qin and N. Zheng and J. C. Príncipe},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Kernel adaptive filtering subject to equality function constraints},
  year = {2016},
  pages = {1-5},
  abstract = {Kernel adaptive filters (KAFs) are powerful tools for online nonlinear system modeling, which are direct extensions of traditional linear adaptive filters in kernel space, with growing linear-in-the-parameters (LIP) structure. However, like most other nonlinear adaptive filters, the KAFs are “black box” models where no prior information about the unknown nonlinear system is utilized. If some prior information is available, the “grey box” models may achieve improved performance. In this work, we consider the kernel adaptive filtering with prior information in terms of equality function constraints. A novel Mercer kernel, called the constrained Mercer kernel (CMK), is proposed. With this new kernel, we develop the kernel least mean square subject to equality function constraints (KLMS-EFC), which can satisfy the constraints perfectly while achieving significant performance improvement.},
  keywords = {adaptive filters;least mean squares methods;nonlinear filters;kernel adaptive filtering;equality function constraints;online nonlinear system modeling;kernel space;linear-in-the-parameters structure;LIP structure;nonlinear adaptive filters;grey box;prior information;constrained Mercer kernel;kernel least mean square;Kernel;Signal processing algorithms;Adaptation models;Testing;Adaptive filters;Adaptive equalizers;Dictionaries;Kernel adaptive filtering;kernel least mean square;equality function constraints},
  doi = {10.1109/EUSIPCO.2016.7760198},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255893.pdf},
}
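
For orientation, the baseline this paper extends, the kernel least mean square (KLMS), admits a very compact implementation. Below is a minimal NumPy sketch of that unconstrained baseline, assuming a Gaussian kernel and a fixed step size; the constrained Mercer kernel and the KLMS-EFC update are specific to the paper and are not reproduced here.

import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian Mercer kernel between two input vectors
    d = np.asarray(x) - np.asarray(y)
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def klms(inputs, desired, step=0.5, sigma=1.0):
    # Baseline KLMS: the filter grows one kernel unit per sample,
    # i.e. the growing linear-in-the-parameters (LIP) structure above.
    centers, weights = [], []
    predictions = np.zeros(len(desired))
    for n, (x, d) in enumerate(zip(inputs, desired)):
        y = sum(a * gaussian_kernel(x, c, sigma)
                for a, c in zip(weights, centers))   # kernel expansion
        predictions[n] = y
        e = d - y                                    # instantaneous error
        centers.append(x)                            # store new center
        weights.append(step * e)                     # LMS-style coefficient
    return predictions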

Claser, R.; Nascimento, V. H.; and Zakharov, Y. V. A low-complexity RLS-DCD algorithm for Volterra system identification. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 6-10, Aug 2016.

@InProceedings{7760199,
  author = {R. Claser and V. H. Nascimento and Y. V. Zakharov},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A low-complexity RLS-DCD algorithm for Volterra system identification},
  year = {2016},
  pages = {6-10},
  abstract = {Adaptive filters for Volterra system identification must deal with two difficulties: large filter length M (resulting in high computational complexity and low convergence rate) and high correlation in the input sequence. The second problem is minimized by using the recursive least-squares algorithm (RLS); however, its large computational complexity (O(M²)) might be prohibitive in some applications. We propose here a low-complexity RLS algorithm, based on the dichotomous coordinate descent algorithm (DCD), showing that in some situations the computational complexity is reduced to O(M). The new algorithm is compared to the standard RLS, normalized least-mean squares (NLMS) and affine projections (AP) algorithms.},
  keywords = {adaptive filters;computational complexity;correlation methods;least squares approximations;nonlinear filters;recursive filters;low-complexity RLS-DCD algorithm;Volterra system identification;adaptive filters;filter length;computational complexity;convergence rate;sequence correlation;recursive least-squares algorithm;dichotomous coordinate descent algorithm;Signal processing algorithms;Delay lines;Kernel;Computational complexity;Adaptation models;Signal processing},
  doi = {10.1109/EUSIPCO.2016.7760199},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256286.pdf},
}
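
The O(M²) baseline the paper starts from is the standard exponentially weighted RLS update. A minimal NumPy sketch of that baseline follows (the dichotomous coordinate descent variant that achieves O(M) is the paper's contribution and is not shown; filter length, forgetting factor, and regularization are illustrative).

import numpy as np

def rls(u, d, M=8, lam=0.999, delta=100.0):
    # Standard exponentially weighted RLS: O(M^2) work per sample.
    w = np.zeros(M)                         # filter coefficients
    P = delta * np.eye(M)                   # inverse correlation estimate
    err = np.zeros(len(d))
    for n in range(M, len(d)):
        x = u[n - M:n][::-1]                # tapped-delay-line regressor
        k = P @ x / (lam + x @ P @ x)       # gain vector
        err[n] = d[n] - w @ x               # a priori error
        w = w + k * err[n]                  # coefficient update
        P = (P - np.outer(k, x @ P)) / lam  # Riccati update (the O(M^2) step)
    return w, err

For a Volterra model, the regressor x would be augmented with products of delayed input samples, which is precisely what makes M, and hence the O(M²) cost, grow quickly.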

Kinoshita, S.; and Kajikawa, Y. Integrated direct sub-band adaptive Volterra filter and its application to identification of loudspeaker nonlinearity. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 11-15, Aug 2016.

@InProceedings{7760200,
  author = {S. Kinoshita and Y. Kajikawa},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Integrated direct sub-band adaptive Volterra filter and its application to identification of loudspeaker nonlinearity},
  year = {2016},
  pages = {11-15},
  abstract = {In this paper, we propose a novel realization of the sub-band adaptive Volterra filter, which consists of an input signal transformation block and only one adaptive Volterra filter. The proposed realization can focus on the major frequency band, in which a target nonlinear system has dominant components, by changing the number of taps in each sub-band in order to simultaneously realize high computational efficiency and high identification performance. The proposed realization of the sub-band adaptive Volterra filter is applied to the identification of electro-dynamic loudspeaker systems and the effectiveness is demonstrated through some simulations. Simulation results show that the proposed realization can significantly improve the estimation accuracy.},
  keywords = {acoustic signal processing;adaptive filters;loudspeakers;nonlinear filters;direct subband adaptive Volterra filter;loudspeaker nonlinearity identification;signal transformation block;target nonlinear system;computational efficiency;electrodynamic loudspeaker system identification;estimation accuracy improvement;Computational complexity;Loudspeakers;Symmetric matrices;Device-to-device communication;Kernel;Nonlinear distortion},
  doi = {10.1109/EUSIPCO.2016.7760200},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255900.pdf},
}
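
A second-order Volterra filter is linear in an expanded regressor built from delayed input samples and their pairwise products, so any linear adaptive algorithm can be run on top of the expansion. A minimal sketch of that expansion (illustrative only; the sub-band realization with per-band tap counts proposed in the paper is not reproduced):

import numpy as np
from itertools import combinations_with_replacement

def volterra2_regressor(u, n, M):
    # Linear + quadratic Volterra regressor at time n with memory M.
    x = u[n - M + 1:n + 1][::-1]            # delay line (requires n >= M - 1)
    quad = [x[i] * x[j]
            for i, j in combinations_with_replacement(range(M), 2)]
    return np.concatenate([x, quad])        # length M + M*(M+1)/2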

Patel, V.; and George, N. V. Design of dynamic linear-in-the-parameters nonlinear filters for active noise control. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 16-20, Aug 2016.

@InProceedings{7760201,
  author = {V. Patel and N. V. George},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Design of dynamic linear-in-the-parameters nonlinear filters for active noise control},
  year = {2016},
  pages = {16-20},
  abstract = {Traditional active noise control (ANC) systems, which use a fixed tap length adaptive filter as the controller, may lead to non-optimal noise mitigation. In addition, the conventional filtered-x least mean square algorithm based ANC schemes fail to effectively perform noise cancellation in the presence of nonlinearities in the ANC environment. In order to overcome these limitations of traditional ANC techniques, in this paper, we propose a class of dynamic nonlinear ANC systems, which adapt themselves to the noise cancellation scenario. The dynamic behaviour has been achieved by developing variable tap length and variable learning rate adaptive algorithms for functional link artificial neural network (FLANN) and generalized FLANN (GFLANN) based ANC systems. The proposed ANC schemes have been shown through a simulation study to provide an optimal convergence behaviour. This improvement has been achieved by providing a balance between the number of filter coefficients and the mean square error.},
  keywords = {active noise control;adaptive filters;mean square error methods;neural nets;nonlinear filters;mean square error;filter coefficient;GFLANN;generalized functional link artificial neural network;variable learning rate adaptive algorithm;dynamic nonlinear ANC system;noise cancellation;filtered-x least mean square algorithm;nonoptimal noise mitigation;fixed tap length adaptive filter;active noise control;dynamic linear-in-the-parameters nonlinear filter design;Signal processing algorithms;Noise cancellation;Europe;Heuristic algorithms;Algorithm design and analysis;Adaptive algorithms;Active noise control;functional link artificial neural network;GFLANN},
  doi = {10.1109/EUSIPCO.2016.7760201},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250698.pdf},
}
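
FLANN-type nonlinear controllers are linear filters acting on a fixed trigonometric expansion of the input, which is what makes them linear-in-the-parameters. A minimal sketch of that expansion, assuming an order-P trigonometric basis (the variable tap-length and variable learning-rate adaptation proposed in the paper, and the filtered-x weight update used in ANC, are not shown):

import numpy as np

def flann_expand(x, order=2):
    # Trigonometric functional expansion of an input frame x.
    x = np.asarray(x, dtype=float)
    feats = [x]
    for p in range(1, order + 1):
        feats.append(np.sin(np.pi * p * x))
        feats.append(np.cos(np.pi * p * x))
    return np.concatenate(feats)            # length (2*order + 1) * len(x)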

Novak, A.; Simon, L.; and Lotton, P. Extension of Generalized Hammerstein model to non-polynomial inputs. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 21-25, Aug 2016.

@InProceedings{7760202,
  author = {A. Novak and L. Simon and P. Lotton},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Extension of Generalized Hammerstein model to non-polynomial inputs},
  year = {2016},
  pages = {21-25},
  abstract = {The Generalized Hammerstein model has been successfully used during the last few years in many physical applications to describe the behavior of a nonlinear system under test. The main advantage of such a nonlinear model is its capability to efficiently model nonlinear systems while keeping the computational cost low. On the other hand, this model cannot predict complicated nonlinear behaviors such as hysteretic ones. In this paper, we propose an extension of the Generalized Hammerstein model to a model with non-polynomial nonlinear inputs that allows modeling more complicated nonlinear systems. A simulation provided in this paper shows a good agreement between the model and the hysteretic nonlinear system under test.},
  keywords = {filtering theory;hysteresis;nonlinear functions;nonlinear systems;generalized Hammerstein model;nonpolynomial nonlinear inputs;nonlinear system modeling;hysteretic nonlinear system;nonpolynomial nonlinear function;linear filters;Hysteresis;Nonlinear systems;Computational modeling;Mathematical model;Harmonic analysis;Europe},
  doi = {10.1109/EUSIPCO.2016.7760202},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252218.pdf},
}
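
A generalized Hammerstein model passes powers of the input through parallel linear filters and sums the branch outputs; the paper's extension replaces the powers with non-polynomial basis functions. A minimal sketch of the polynomial version, assuming all branch kernels share one length:

import numpy as np

def generalized_hammerstein(x, kernels):
    # y[n] = sum_k (h_k * x**k)[n], k = 1..K (full convolution).
    x = np.asarray(x, dtype=float)
    y = np.zeros(len(x) + len(kernels[0]) - 1)
    for k, h in enumerate(kernels, start=1):
        y += np.convolve(x ** k, h)         # branch k: static power, then filter
    return y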

Mathioudakis, D.; Sotiropoulos, D.; and Tsihrintzis, G. A. A mathematical analysis of the Genetic-AIRS classification algorithm. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 26-30, Aug 2016.

@InProceedings{7760203,
  author = {D. Mathioudakis and D. Sotiropoulos and G. A. Tsihrintzis},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A mathematical analysis of the Genetic-AIRS classification algorithm},
  year = {2016},
  pages = {26-30},
  abstract = {This paper presents the inception and the basic concepts of a hybrid classification algorithm called Genetic-AIRS [1]. Genetic-AIRS is a combination of the Artificial Immune Resource System (AIRS) algorithm with evolutionary computation techniques. An analysis is presented to determine the final algorithm architecture and parameters. The paper also includes an experimental evaluation of Genetic-AIRS vs. AIRS on various publicly available datasets.},
  keywords = {artificial immune systems;genetic algorithms;pattern classification;genetic-AIRS classification algorithm;artificial immune resource system algorithm;evolutionary computation techniques;Immune system;Detectors;Training;Signal processing algorithms;Genetic algorithms;Algorithm design and analysis;Artificial immune system;Genetic algorithm;Evolutionary computation;Machine learning;Classification},
  doi = {10.1109/EUSIPCO.2016.7760203},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252103.pdf},
}

Alickovic, E.; Lunner, T.; and Gustafsson, F. A system identification approach to determining listening attention from EEG signals. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 31-35, Aug 2016.

@InProceedings{7760204,
  author = {E. Alickovic and T. Lunner and F. Gustafsson},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A system identification approach to determining listening attention from EEG signals},
  year = {2016},
  pages = {31-35},
  abstract = {We still have very little knowledge about how our brains decouple different sound sources, which is known as solving the cocktail party problem. Several approaches, including ERP, time-frequency analysis and, more recently, regression and stimulus reconstruction approaches, have been suggested for solving this problem. In this work, we study the problem of correlating EEG signals to different sets of sound sources with the goal of identifying the single source to which the listener is attending. Here, we propose a method for finding the number of parameters needed in a regression model to avoid overlearning, which is necessary for determining the attended sound source with high confidence in order to solve the cocktail party problem.},
  keywords = {bioelectric phenomena;electroencephalography;regression analysis;signal reconstruction;time-frequency analysis;determining listening attention;EEG signals;sound sources;ERP;time-frequency analysis;stimulus reconstruction approaches;regression model;cocktail party problem;Electroencephalography;Brain modeling;Finite impulse response filters;Computational modeling;Real-time systems;Mathematical model;Europe;attention;cocktail party;linear regression (LR);finite impulse response (FIR);multivariable model;sound;EEG},
  doi = {10.1109/EUSIPCO.2016.7760204},
  issn = {2076-1465},
  month = {Aug},
}

Haghighi, S. J.; and Hatzinakos, D. 40-Hz ASSR depth of anaesthesia index. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 36-40, Aug 2016.

@InProceedings{7760205,
  author = {S. J. Haghighi and D. Hatzinakos},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {40-Hz ASSR depth of anaesthesia index},
  year = {2016},
  pages = {36-40},
  abstract = {A novel method for defining an index based on multi-level clustering of the 40-Hz auditory steady state response is presented in this paper. The index is a measure of depth of anaesthesia which can help monitor depth of anaesthesia more closely and accurately. Multi-level expectation maximization (EM) is used for clustering the 40-Hz auditory steady state response signals recorded from human subjects. The clustering information is used to define the depth of anaesthesia index. Rather than extracting the maximum amplitude and frequency at each cycle as clustering features, principal components analysis (PCA) is used for analyzing all samples of the cycles and projecting the data into a lower dimension space. Both dimension reduction and clustering schemes are unsupervised methods, hence the algorithm does not need initial data labeling or a training phase.},
  keywords = {auditory evoked potentials;biomedical engineering;expectation-maximisation algorithm;pattern clustering;ASSR depth of anaesthesia index;multilevel clustering;auditory steady state response;multilevel expectation maximization;principal components analysis;frequency 40 Hz;Anesthesia;Indexes;Feature extraction;Electroencephalography;Principal component analysis;Clustering algorithms;Monitoring},
  doi = {10.1109/EUSIPCO.2016.7760205},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252431.pdf},
}
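
The processing chain described here, projecting response cycles to a low-dimensional space with PCA and clustering them with EM, maps directly onto standard tools. A hedged scikit-learn sketch follows (the multi-level clustering scheme and the anaesthesia index derived from it are paper-specific; function and parameter names are illustrative):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def cluster_assr_cycles(epochs, n_components=3, n_clusters=2):
    # epochs: (n_cycles, n_samples) array of recorded 40-Hz ASSR cycles.
    z = PCA(n_components=n_components).fit_transform(epochs)  # unsupervised projection
    labels = GaussianMixture(n_components=n_clusters).fit_predict(z)  # EM clustering
    return labels   # cluster membership; no data labeling or training phase needed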

Abdullah, Z.; Tsimenidis, C. C.; and Johnston, M. Tabu search vs. bio-inspired algorithms for antenna selection in spatially correlated massive MIMO uplink channels. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 41-45, Aug 2016.

@InProceedings{7760206,
  author = {Z. Abdullah and C. C. Tsimenidis and M. Johnston},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Tabu search vs. bio-inspired algorithms for antenna selection in spatially correlated massive MIMO uplink channels},
  year = {2016},
  pages = {41-45},
  abstract = {Massive Multiple Input Multiple Output (MIMO) systems can significantly improve the system performance and capacity by using a large number of antenna elements at the base station (BS). To reduce the system complexity and hardware cost, low complexity antenna selection techniques can be used to choose the best antenna subset while keeping the system performance at a certain required level. In this paper, Tabu Search (TS) and three bio-inspired optimization algorithms were used for antenna selection in Massive MIMO systems. The three bio-inspired algorithms were: Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Artificial Bee Colony (ABC). Simulations showed promising results for the TS, which achieved higher capacity (as did GA) than PSO and ABC, and much shorter CPU time than any of the bio-inspired techniques.},
  keywords = {antenna arrays;genetic algorithms;MIMO communication;particle swarm optimisation;search problems;wireless channels;tabu search;spatially correlated massive MIMO uplink channel;massive multiple input multiple output system;base station;BS;system complexity reduction;hardware cost reduction;low complexity antenna selection technique;TS;bioinspired optimization algorithm;particle swarm optimization;PSO;genetic algorithm;GA;artificial bee colony;ABC;Antennas;Biological cells;MIMO;Signal processing algorithms;Genetic algorithms;Complexity theory;Optimization;Massive MIMO;Antenna selection;Bio-inspired algorithms;Particle Swarm Optimization (PSO);Genetic algorithm (GA);Artificial Bee colony (ABC);Tabu search (TS)},
  doi = {10.1109/EUSIPCO.2016.7760206},
  issn = {2076-1465},
  month = {Aug},
}
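
As a point of reference, antenna selection by tabu search can be sketched as a swap-based local search with a tabu list and an aspiration rule. The toy version below uses an illustrative uplink capacity objective and illustrative parameters; it is not the paper's exact setup.

import numpy as np

def capacity(Hs, snr):
    # log2 det(I + snr * Hs^H Hs) for the selected rows Hs (Ns antennas x K users).
    K = Hs.shape[1]
    sign, logdet = np.linalg.slogdet(np.eye(K) + snr * Hs.conj().T @ Hs)
    return logdet / np.log(2.0)

def tabu_antenna_select(H, Ns, snr=10.0, iters=50, tenure=5, seed=0):
    # H: (M x K) channel matrix; choose Ns of the M antennas (rows).
    M = H.shape[0]
    rng = np.random.default_rng(seed)
    S = list(rng.choice(M, Ns, replace=False))
    best, best_val = list(S), capacity(H[S], snr)
    tabu = {}                               # (removed, added) -> expiry iteration
    for t in range(iters):
        moves = sorted(
            ((capacity(H[[j if a == i else a for a in S]], snr), i, j)
             for i in S for j in set(range(M)) - set(S)),
            key=lambda m: m[0], reverse=True)
        for val, i, j in moves:
            if tabu.get((i, j), -1) < t or val > best_val:  # aspiration rule
                S = [j if a == i else a for a in S]
                tabu[(j, i)] = t + tenure   # forbid the reverse swap for a while
                if val > best_val:
                    best, best_val = list(S), val
                break
    return best, best_val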

Nikolay, F.; and Pesavento, M. Learning directed-acyclic-graphs from large-scale double-knockout experiments. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 46-50, Aug 2016.

@InProceedings{7760207,
  author = {F. Nikolay and M. Pesavento},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Learning directed-acyclic-graphs from large-scale double-knockout experiments},
  year = {2016},
  pages = {46-50},
  abstract = {In this paper we consider the problem of learning the genetic-interaction-map, i.e., the topology of a directed acyclic graph (DAG) of genetic interactions from noisy double knockout (DK) data. Based on a set of well established biological interaction models we detect and classify the interactions between genes. Furthermore, we propose a novel linear integer optimization framework called Genetic-Interactions-Detector (GENIE) to identify the complex biological dependencies between genes and to compute the DAG topology that matches the DK measurements best, where we make use of the well-known branch-and-bound (BB) principle. Finally, we show via numeric simulations that the GENIE framework clearly outperforms the conventional techniques.},
  keywords = {Big Data;biology computing;directed graphs;genetics;learning (artificial intelligence);tree searching;directed acyclic graph;DAG;double-knockout data;DK data;genetic-interaction-map learning;genetic interaction detection;genetic interaction classification;biological interaction model;genetic-interactions-detector;GENIE;branch-and-bound principle;BB principle;Big Data;Genetics;Biological system modeling;Optimization;Topology;Europe;Signal processing;Computational modeling;Genetic interactions analysis;large scale gene networks;discrete optimization;big data},
  doi = {10.1109/EUSIPCO.2016.7760207},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256263.pdf},
}

Du, W.; Gorce, J.; Risset, T.; Lauzier, M.; and Fraboulet, A. Compressive data aggregation on mobile wireless sensor networks for sensing in bike races. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 51-55, Aug 2016.

@InProceedings{7760208,
  author = {W. Du and J. Gorce and T. Risset and M. Lauzier and A. Fraboulet},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Compressive data aggregation on mobile wireless sensor networks for sensing in bike races},
  year = {2016},
  pages = {51-55},
  abstract = {This paper presents an efficient approach for collecting data in mobile wireless sensor networks which is specifically designed to gather real-time information of bikers in a bike race. The approach employs the recent HIKOB sensors for tracking the GPS position of each bike and the problem herein addressed is to transmit this information to a collector for visualization or other processing. Our approach exploits the inherent correlation between biker motions and aggregates GPS data at sensors using compressive sensing (CS) techniques. We enforce, instead of the standard signal sparsity, a spatial sparsity prior on biker motion because of the grouping behavior (peloton) in bike races. The spatial sparsity is modeled by a graphical model and the CS-based data aggregation problem is solved using linear programming. Our approach, integrated in a multi-round opportunistic routing protocol, is validated on data generated by a bike race simulator using trajectories of motorbikes obtained from a real race, the Paris-Tours 2013.},
  keywords = {compressed sensing;Global Positioning System;linear programming;mobile radio;routing protocols;wireless sensor networks;compressive data aggregation;mobile wireless sensor networks;bike races;real-time information;HIKOB sensors;GPS position;biker motions;compressive sensing technique;standard signal sparsity;graphical model;CS-based data aggregation problem;linear programming;multiround opportunistic routing protocol;bike race simulator;motorbikes;Paris-Tours 2013;Sensors;Wireless sensor networks;Packet loss;Global Positioning System;Data collection;Routing},
  doi = {10.1109/EUSIPCO.2016.7760208},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570249051.pdf},
}

Abrol, V.; Sharma, P.; and Sao, A. K. Making sense of randomness: Fast signal recovery from compressive samples. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 56-60, Aug 2016.

@InProceedings{7760209,
  author = {V. Abrol and P. Sharma and A. K. Sao},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Making sense of randomness: Fast signal recovery from compressive samples},
  year = {2016},
  pages = {56-60},
  abstract = {In the compressed sensing (CS) framework, a signal is sampled below the Nyquist rate, and the acquired samples are generally random in nature. Thus, for efficient estimation of the actual signal, the sensing matrix must preserve the relative distances among the underlying sparse vectors. Provided this condition is fulfilled, we show that CS samples will also preserve the envelope of the actual signal. Exploiting this envelope preserving property of CS samples, we propose a new fast method which is able to extract prototype signals from compressive samples for efficient sparse representation and recovery of signals. These prototype signals are orthogonal intrinsic mode functions (IMFs) extracted from CS samples using empirical mode decomposition (EMD), which is one of the popular methods to capture the envelope of a signal. The extracted IMFs are used to seed the dictionary without even comprehending the original signal or the sensing matrix. Moreover, one can update the dictionary on-line as new CS samples are available. In particular, to recover the first L signals (∈ R^n) at the decoder, one can seed the dictionary in just O(nL log n) operations, which is far less than existing approaches. The efficiency of the proposed approach is demonstrated experimentally for recovery of speech signals.},
  keywords = {compressed sensing;signal reconstruction;signal representation;speech processing;vectors;speech signal recovery;EMD;empirical mode decomposition;orthogonal IMF;orthogonal intrinsic mode functions;sparse representation;signal envelope preservation;sparse vectors;sensing matrix;signal estimation;Nyquist rate;CS framework;compressed sensing;fast signal recovery;Dictionaries;Speech;Sensors;Sparse matrices;Image coding;Prototypes;Buildings;Compressed sensing;dictionary learning;empirical mode decomposition;speech processing},
  doi = {10.1109/EUSIPCO.2016.7760209},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250527.pdf},
}

Elvander, F.; Adalbjörnsson, S. I.; and Jakobsson, A. Robust non-negative least squares using sparsity. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 61-65, Aug 2016.

@InProceedings{7760210,
  author = {F. Elvander and S. I. Adalbjörnsson and A. Jakobsson},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Robust non-negative least squares using sparsity},
  year = {2016},
  pages = {61-65},
  abstract = {Sparse, non-negative signals occur in many applications. To recover such signals, estimation posed as a non-negative least squares problem has proven to be fruitful. Efficient algorithms with high accuracy have been proposed, but many of them either assume perfect knowledge of the dictionary generating the signal, or attempt to explain deviations from this dictionary by attributing them to components that for some reason are missing from the dictionary. In this work, we propose a robust non-negative least squares algorithm that allows the generating dictionary to differ from the assumed dictionary, introducing uncertainty in the setup. The proposed algorithm enables an improved modeling of the measurements, and may be efficiently implemented using a proposed ADMM implementation. Numerical examples illustrate the improved performance as compared to the standard non-negative LASSO estimator.},
  keywords = {compressed sensing;least squares approximations;robust nonnegative least square problem;sparse nonnegative signal;generating dictionary;ADMM implementation;Dictionaries;Signal to noise ratio;Robustness;Optimization;Europe;Estimation;robust non-negative least squares;ADMM},
  doi = {10.1109/EUSIPCO.2016.7760210},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250741.pdf},
}
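
The non-robust baseline referenced here, non-negative least squares, is available off the shelf in SciPy. A minimal sketch on synthetic data (the robust formulation with dictionary uncertainty and its ADMM solver are the paper's contribution and are not reproduced):

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((20, 5)))        # assumed (known) dictionary
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])    # sparse, non-negative signal
b = A @ x_true + 0.01 * rng.standard_normal(20)

x_hat, residual = nnls(A, b)    # min ||A x - b||_2  subject to  x >= 0
print(x_hat)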

Steffens, C.; Yang, Y.; and Pesavento, M. Multidimensional sparse recovery for MIMO channel parameter estimation. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 66-70, Aug 2016.

@InProceedings{7760211,
  author = {C. Steffens and Y. Yang and M. Pesavento},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Multidimensional sparse recovery for MIMO channel parameter estimation},
  year = {2016},
  pages = {66-70},
  abstract = {Multipath propagation is a common phenomenon in wireless communication. Knowledge of propagation path parameters such as complex channel gain, propagation delay or angle-of-arrival provides valuable information on the user position and facilitates channel response estimation. A major challenge in channel parameter estimation lies in its multidimensional nature, which leads to large-scale estimation problems which are difficult to solve. Current approaches of sparse recovery for multidimensional parameter estimation aim at simultaneously estimating all channel parameters by solving one large-scale estimation problem. In contrast, we propose a sparse recovery method which relies on decomposing the multidimensional problem into successive one-dimensional parameter estimation problems, which are much easier to solve and less sensitive to off-grid effects, while providing proper parameter pairing. Our proposed decomposition relies on convex optimization in terms of nuclear norm minimization and we present an efficient implementation in terms of the recently developed STELA algorithm.},
  keywords = {channel estimation;convex programming;direction-of-arrival estimation;MIMO communication;minimisation;multipath channels;radiowave propagation;wireless channels;multidimensional sparse recovery;MIMO channel parameter estimation;multipath propagation;wireless communication;propagation path parameters;complex channel gain;propagation delay;angle-of-arrival;channel response estimation;multidimensional parameter estimation;off-grid effects;parameter pairing;convex optimization;nuclear norm minimization;STELA algorithm;Estimation;Channel estimation;Parameter estimation;Sparse matrices;Linear antenna arrays;Propagation delay;MIMO Channel Parameters;Multidimensional Parameter Estimation;Sparse Recovery;Nuclear Norm;STELA},
  doi = {10.1109/EUSIPCO.2016.7760211},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252132.pdf},
}

Sparrer, S.; and Fischer, R. F. H. Enhanced iterative hard thresholding for the estimation of discrete-valued sparse signals. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 71-75, Aug 2016.

@InProceedings{7760212,
  author = {S. Sparrer and R. F. H. Fischer},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Enhanced iterative hard thresholding for the estimation of discrete-valued sparse signals},
  year = {2016},
  pages = {71-75},
  abstract = {In classical Compressed Sensing, real-valued sparse vectors have to be estimated from an underdetermined system of linear equations. However, in many applications such as sensor networks, the elements of the vector to be estimated are discrete-valued or from a finite set. Hence, specialized algorithms which perform the reconstruction with respect to this additional knowledge are required. Starting from the well-known iterative hard thresholding algorithm, a new algorithm is developed. To this end, knowledge from communications engineering is transferred to Compressed Sensing, resulting in a powerful though low-complexity algorithm. Numerical results demonstrate the benefit of the proposed algorithm.},
  keywords = {compressed sensing;iterative methods;set theory;signal reconstruction;discrete-valued sparse signal estimation;compressed sensing;real-valued sparse vector;linear equation;finite set;signal reconstruction;iterative hard thresholding algorithm;communication engineering},
  doi = {10.1109/EUSIPCO.2016.7760212},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252137.pdf},
}
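
The starting point, plain iterative hard thresholding, is short enough to state in full. A minimal NumPy sketch follows; step size and iteration count are illustrative, and the discrete-alphabet enhancement is the paper's contribution and is not reproduced (a discrete-valued variant would, roughly speaking, additionally map the surviving entries toward the nearest alphabet element).

import numpy as np

def iht(A, y, k, iters=100, mu=None):
    # Recover a k-sparse x from y = A x by gradient steps + hard thresholding.
    m, n = A.shape
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2    # step from the spectral norm
    x = np.zeros(n)
    for _ in range(iters):
        g = x + mu * (A.T @ (y - A @ x))        # gradient step
        g[np.argsort(np.abs(g))[:-k]] = 0.0     # keep only the k largest entries
        x = g
    return x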

Sutour, C.; Aujol, J.; and Deledalle, C. Automatic estimation of the noise level function for adaptive blind denoising. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 76-80, Aug 2016.

@InProceedings{7760213,
  author = {C. Sutour and J. Aujol and C. Deledalle},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Automatic estimation of the noise level function for adaptive blind denoising},
  year = {2016},
  pages = {76-80},
  abstract = {Image denoising is a fundamental problem in image processing and many powerful algorithms have been developed. However, they often rely on the knowledge of the noise distribution and its parameters. We propose a fully blind denoising method that first estimates the noise level function and then uses this estimation for automatic denoising. First we perform the nonparametric detection of homogeneous image regions in order to compute a scatterplot of the noise statistics, then we estimate the noise level function with the least absolute deviation estimator. The noise level function parameters are then directly re-injected into an adaptive denoising algorithm based on the non-local means with no prior model fitting. Results show the performance of the noise estimation and denoising methods, and we provide a robust blind denoising tool.},
  keywords = {estimation theory;image denoising;adaptive blind denoising;automatic estimation;image denoising;noise distribution;automatic denoising;nonparametric detection;homogeneous image regions;noise statistics;least absolute deviation estimator;noise level function parameters;nonlocal means;noise estimation;robust blind denoising tool;Noise reduction;Estimation;Noise level;Signal processing algorithms;Noise measurement;Correlation;Europe},
  doi = {10.1109/EUSIPCO.2016.7760213},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250456.pdf},
}

Seiler, J.; and Kaup, A. Distributed parallel image signal extrapolation framework using Message Passing Interface. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 81-85, Aug 2016.

@InProceedings{7760214,
  author = {J. Seiler and A. Kaup},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Distributed parallel image signal extrapolation framework using Message Passing Interface},
  year = {2016},
  pages = {81-85},
  abstract = {This paper introduces a framework for distributed parallel image signal extrapolation. Since high-quality image signal processing often comes along with a high computational complexity, a parallel execution is desirable. The proposed framework allows for the application of existing image signal extrapolation algorithms without the need to modify them for a parallel processing. The unaltered application of existing algorithms is achieved by dividing input images into overlapping tiles which are distributed to compute nodes via Message Passing Interface. In order to keep the computational overhead low, a novel image tiling algorithm is proposed. Using this algorithm, a nearly optimum tiling is possible at a very small processing time. For showing the efficacy of the framework, it is used for parallelizing a high-complexity extrapolation algorithm. Simulation results show that the proposed framework has no negative impact on extrapolation quality while at the same time offering good scaling behavior on compute clusters.},
  keywords = {computational complexity;extrapolation;image processing;message passing;distributed parallel image signal extrapolation framework;message passing interface;high-quality image signal processing;computational complexity;parallel execution;overlapping tiles;computational overhead;image tiling algorithm;high-complexity extrapolation algorithm;extrapolation quality;Extrapolation;Signal processing algorithms;Clustering algorithms;Signal processing;Image reconstruction;Message passing;Central Processing Unit;Parallelization;Tiling;Message Passing Interface;Image Signal Extrapolation},
  doi = {10.1109/EUSIPCO.2016.7760214},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250574.pdf},
}

Halimi, A.; Altmann, Y.; McCarthy, A.; Ren, X.; Tobin, R.; Buller, G. S.; and McLaughlin, S. Restoration of intensity and depth images constructed using sparse single-photon data. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 86-90, Aug 2016.

@InProceedings{7760215,
  author = {A. Halimi and Y. Altmann and A. McCarthy and X. Ren and R. Tobin and G. S. Buller and S. McLaughlin},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Restoration of intensity and depth images constructed using sparse single-photon data},
  year = {2016},
  pages = {86-90},
  abstract = {This paper presents a new algorithm for the joint restoration of depth and intensity images constructed from the time-correlated single-photon counting (TCSPC) measurement in the limit of very few photon counts [1]. Under some justified approximations, the restoration problem (regularized likelihood) reduces to a convex formulation with respect to the parameters of interest. The first advantage of this formulation is that it only processes the corrupted depth and intensity images obtained from preliminary estimation, without the need for the use of full TCSPC waveforms. The second advantage is its flexibility in being able to use different convex regularization terms such as total variation (TV) and sparsity of the discrete cosine transform (DCT) coefficients. The estimation problems are efficiently solved using the alternating direction method of multipliers (ADMM), which presents good convergence properties and thus a reduced computational cost. Results on single photon depth data from field trials show the benefit of the proposed strategy, which improves the quality of the estimated depth and intensity images.},
  keywords = {convex programming;image restoration;optical information processing;photon counting;joint depth and intensity image restoration;field trial;reduced computational cost;ADMM;alternating direction method of multipliers;convex formulation;TCSPC measurement;time-correlated single-photon counting measurement;sparse single-photon data;Signal processing algorithms;Photonics;Cost function;Image restoration;Estimation;Discrete cosine transforms;Laser radar;Lidar waveform;Poisson statistics;image restoration;ADMM;total variation regularization},
  doi = {10.1109/EUSIPCO.2016.7760215},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251151.pdf},
}
@InProceedings{7760216,
  author = {R. Fujimoto and T. Fujisawa and M. Ikehara},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Sub-pixel shift estimation of image based on the least squares approximation in phase region},
  year = {2016},
  pages = {91-95},
  abstract = {This paper proposes a novel method to estimate non-integer shift of images based on least squares approximation in the phase region. Conventional methods based on Phase Only Correlation (POC) take correlation between an image and its shifted image, and then estimate the non-integer shift by fitting the model equation. The problem with using POC is that the true peak of the POC function may not match the estimated peak of the fitted model equation. This causes error in non-integer shift estimation. By calculating directly in the phase region, the proposed method allows the estimation of decimal shift through least squares approximation. Also, by utilizing the characteristics of the natural image, the proposed method limits the adoption range for least squares approximation. By these improvements, the proposed method improves the estimation and achieves high accuracy.},
  keywords = {image resolution;least squares approximations;image resolution;decimal shift estimation;noninteger shift estimation;fitted model equation;model equation;noninteger shift;POC function;phase only correlation function;image sub-pixel shift estimation;phase region;least squares approximation;Mathematical model;Estimation;Least squares approximation;Frequency-domain analysis;Signal processing;Discrete Fourier transforms;Europe},
  doi = {10.1109/EUSIPCO.2016.7760216},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251532.pdf},
}
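
The core idea, least squares directly on phase, can be sketched in one dimension: for a shift delta the cross-spectrum phase is linear in frequency, angle(X1(k)X2*(k)) = 2*pi*k*delta/N, so delta falls out of a weighted line fit. The band limit and energy weighting below are assumptions standing in for the paper's adoption-range restriction.

```python
import numpy as np

def subpixel_shift_1d(x1, x2, band=0.25):
    """Fit delta in x2[n] ~ x1[n - delta] from the cross-spectrum phase,
    using only low frequencies (assumes |delta| small enough that the
    phase does not wrap within the fitted band)."""
    N = len(x1)
    X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
    k = np.arange(1, int(band * N))            # restricted fitting range
    phase = np.angle(X1[k] * np.conj(X2[k]))   # = 2*pi*k*delta/N ideally
    w = np.abs(X1[k] * X2[k])                  # weight by spectral energy
    a = 2 * np.pi * k / N
    return np.sum(w * a * phase) / np.sum(w * a * a)

t = np.arange(256)
x1 = np.exp(-0.01 * (t - 100.0) ** 2)          # smooth test pulse
shift = np.exp(-2j * np.pi * np.fft.fftfreq(256) * 0.37)
x2 = np.fft.ifft(np.fft.fft(x1) * shift).real  # shift by 0.37 samples
print(subpixel_shift_1d(x1, x2))               # ~0.37
```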
@InProceedings{7760217,
  author = {A. Iosifidis and M. Gabbouj},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Graph-regularized multi-class support vector machines for face and action recognition},
  year = {2016},
  pages = {96-100},
  abstract = {In this paper, we formulate a variant of the Support Vector Machine classifier that exploits graph-based discrimination criteria within a multi-class optimization process. We employ two kNN graphs in order to describe intra-class and between-class data relationships. These graph structures are combined in order to form a regularizer which is used in order to regularize the multi-class SVM optimization problem. The derived multiclass classifier is compared with the standard SVM classifier and SVM formulations exploiting geometric class information on six publicly available databases designed for human action recognition in the wild and facial image classification problems, where its effectiveness is shown.},
  keywords = {face recognition;graph theory;image classification;optimisation;support vector machines;facial image classification problem;human action recognition;multiclass SVM optimization problem;kNN graph;graph-based discrimination criteria;face recognition;graph-regularized multiclass support vector machine classifier;Support vector machines;Optimization;Training data;Kernel;Standards;Signal processing;Europe},
  doi = {10.1109/EUSIPCO.2016.7760217},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251675.pdf},
}
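
The graph ingredient is straightforward to sketch; the SVM solver it would regularize is omitted. Below, two class-aware kNN graphs yield Laplacians whose weighted difference could serve as the regularizer; the Euclidean metric, k, and the combination weight are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def knn_laplacian(X, k, mask):
    """Laplacian of a kNN graph restricted to pairs allowed by `mask`."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Dm = np.where(mask, D, np.inf)
    np.fill_diagonal(Dm, np.inf)
    W = np.zeros_like(D)
    for i in range(len(X)):
        W[i, np.argsort(Dm[i])[:k]] = 1.0     # connect k nearest allowed
    W = np.maximum(W, W.T)                    # symmetrize
    return np.diag(W.sum(axis=1)) - W

rng = np.random.default_rng(0)
X, y = rng.standard_normal((30, 4)), rng.integers(0, 3, 30)
same = y[:, None] == y[None, :]               # intra- vs between-class pairs
R = knn_laplacian(X, 3, same) - 0.1 * knn_laplacian(X, 3, ~same)
```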
@InProceedings{7760218,
  author = {M. Shahnawaz and L. Bianchi and A. Sarti and S. Tubaro},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Analyzing notch patterns of head related transfer functions in CIPIC and SYMARE databases},
  year = {2016},
  pages = {101-105},
  abstract = {The sensation of elevation in binaural audio is known to be strongly correlated to spectral peaks and notches in HRTFs, introduced by pinna reflections. In this work we provide an analysis methodology that helps us to explore the relationship between notch frequencies and elevation angles in the median plane. In particular, we extract the portion of the HRTF due to the presence of the pinna and we use it to extract the notch frequencies for all the subjects and for all the considered directions. The extracted notch frequencies are then clustered using the K-means algorithm to reveal the relationship between notch frequencies and elevation angles. We present the results of the proposed analysis methodology for all the subjects in the CIPIC and SYMARE HRTFs databases.},
  keywords = {audio signal processing;feature extraction;hearing;transfer functions;K-means algorithm;notch frequency extraction;median plane;pinna reflection;binaural audio;SYMARE database;CIPIC database;HRTF notch pattern analysis;head related transfer function;Databases;Ear;Transfer functions;Clustering algorithms;Signal processing algorithms;Acoustic measurements;Torso;Binaural audio;Elevation perception;Head Related Transfer Function (HRTF);k-means},
  doi = {10.1109/EUSIPCO.2016.7760218},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251577.pdf},
}
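
The two algorithmic steps, notch picking and clustering, are sketched below. The windowed local-minimum test is a simplifying assumption (the paper extracts notches from the pinna-related HRTF component), and the K-means here is a plain 1-D implementation.

```python
import numpy as np

def notch_freqs(mag_db, freqs, win=10, depth=3.0):
    """Frequencies where the magnitude dips at least `depth` dB below the
    local median within +/-win bins (a crude notch detector)."""
    hits = [i for i in range(win, len(mag_db) - win)
            if mag_db[i] == mag_db[i - win:i + win + 1].min()
            and np.median(mag_db[i - win:i + win + 1]) - mag_db[i] >= depth]
    return freqs[hits]

def kmeans_1d(x, k, iters=50, seed=0):
    c = np.random.default_rng(seed).choice(x, k, replace=False)
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        c = np.array([x[lab == j].mean() if np.any(lab == j) else c[j]
                      for j in range(k)])
    return c, lab

f = np.linspace(0, 20000, 400)
m = 3 * np.sin(f / 900) - 12 * np.exp(-((f - 8000) / 400) ** 2)
print(notch_freqs(m, f))                       # one notch near 8 kHz
```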
@InProceedings{7760219,
  author = {A. Ito},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Multiple description vector quantizer design based on redundant representation of central code},
  year = {2016},
  pages = {106-109},
  abstract = {A design method of a multiple description vector quantizer (VQ) is proposed. VQ is widely used for data compression, transmission and other processing. Here, we assume transmission channels with data erasure such as a packet-based network. Multiple description coding is a coding method used to achieve “graceful degradation” when transmitting signals through lossy channels. The proposed method is inspired by the vector quantizer design of Poggi et al., which combines VQ design based on the self-organizing map (SOM) and the multiple description scalar quantizer (MDSQ). The method also uses the SOM-based VQ; the difference is that the proposed method combines a bit-error-tolerant VQ designed by SOM and a novel scheme for cell arrangement of SOM based on Redundant Representation of Central Code (RRCC). The method is not only easy to design for any bit rate but is also more robust against data erasure compared with the conventional VQ.},
  keywords = {codes;self-organising feature maps;telecommunication computing;vector quantisation;RRCC;redundant representation of central code;cell arrangement;bit-error-tolerant VQ;MDSQ;multiple description scalar quantizer;SOM;self-organizing map;Poggi;lossy channels;packet-based network;data erasure;transmission channels;data processing;data transmission;data compression;multiple description vector quantizer design;Lattices;Decoding;Training;Encoding;Receivers;Europe;Signal processing},
  doi = {10.1109/EUSIPCO.2016.7760219},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251884.pdf},
}
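
The bit-error tolerance comes from the SOM: after training, topologically neighbouring cells hold similar codevectors, so an index corrupted into a nearby cell decodes to a nearby vector. A minimal 1-D SOM trainer is sketched below; the RRCC cell arrangement of the paper is not reproduced, and all schedule constants are assumptions.

```python
import numpy as np

def train_som_1d(data, n_cells=16, iters=2000, lr0=0.5, sigma0=4.0):
    rng = np.random.default_rng(0)
    W = rng.standard_normal((n_cells, data.shape[1]))
    pos = np.arange(n_cells)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best matching unit
        decay = 1.0 - t / iters
        h = np.exp(-(pos - bmu) ** 2 / (2 * (sigma0 * decay + 0.5) ** 2))
        W += lr0 * decay * h[:, None] * (x - W)         # pull neighbourhood
    return W

codebook = train_som_1d(np.random.default_rng(1).standard_normal((1000, 2)))
```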
@InProceedings{7760220,
  author = {F. Sedighin and M. Babaie-Zadeh and B. Rivet and C. Jutten},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Two multimodal approaches for single microphone source separation},
  year = {2016},
  pages = {110-114},
  abstract = {In this paper, the problem of single microphone source separation via Nonnegative Matrix Factorization (NMF) by exploiting video information is addressed. Respective audio and video modalities coming from a single human speech usually have similar time changes. It means that changes in one of them usually correspond to changes in the other one. So it is expected that activation coefficient matrices of their NMF decomposition are similar. Based on this similarity, in this paper the activation coefficient matrix of the video modality is used as an initialization for audio source separation via NMF. In addition, the mentioned similarity is used for post-processing and for clustering the rows of the activation coefficient matrix which resulted from randomly initialized NMF. Simulation results confirm the effectiveness of the proposed multimodal approaches in single microphone source separation.},
  keywords = {audio signal processing;matrix decomposition;microphones;source separation;randomly-initialized NMF;audio source separation;activation coefficient matrix;NMF decomposition;activation coefficient matrices;human speech;video modality;audio modality;video information;NMF;nonnegative matrix factorization;single-microphone source separation;multimodal approach;Source separation;Microphones;Matrix decomposition;Clustering algorithms;Speech;Signal processing algorithms;Lips;Single microphone source separation;Nonnegative matrix factorization;Multimodal source separation},
  doi = {10.1109/EUSIPCO.2016.7760220},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251892.pdf},
}
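
The way the video modality enters is as a warm start: NMF on the audio representation is initialized with the activation matrix estimated from video. A sketch with standard multiplicative updates (Euclidean cost, an assumed choice) is below; here H0 is random purely for illustration.

```python
import numpy as np

def nmf_mu(V, W0, H0, iters=200, eps=1e-9):
    """Multiplicative-update NMF, V ~ W @ H, warm-started at (W0, H0);
    in the paper's scheme H0 would come from the video decomposition."""
    W, H = W0.copy(), H0.copy()
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(0)
V = rng.random((64, 200))                # stand-in magnitude spectrogram
W, H = nmf_mu(V, rng.random((64, 8)), rng.random((8, 200)))
```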
@InProceedings{7760221,
  author = {E. Fotiadou and N. Nikolaidis and A. Tefas},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A comparative study of representations for folk dances recognition in video},
  year = {2016},
  pages = {115-119},
  abstract = {Dance traditions constitute a significant aspect of cultural heritage around the world. The organization, semantic analysis, and retrieval of dance-related multimedia content (i.e., music, video) in databases is, therefore, crucial to their preservation. In this paper we explore the problem of folk dances recognition from video recordings, focusing on Greek folk dances, using different representations for the data. To this end we have employed the well-known Bag of Words model, in combination with dense trajectories, as well as with streaklines descriptors. Furthermore, we have adopted a representation based on Linear Dynamic Systems, including a novel variant that uses dense trajectories descriptors instead of pixel intensities. The performance of the aforementioned representations is evaluated and compared, in a classification scenario involving 13 different dance classes.},
  keywords = {image classification;image representation;video recording;video retrieval;video signal processing;folk dance recognition;semantic analysis;dance-related multimedia content retrieval;video recording;Greek folk dance;bag of words model;linear dynamic system;dense trajectory descriptor;classification scenario;Trajectory;Histograms;Training;Support vector machines;Europe;Kernel;dance recognition;Bag of Words model;Linear Dynamic Systems;dense trajectories;streaklines},
  doi = {10.1109/EUSIPCO.2016.7760221},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256032.pdf},
}
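
Several of the compared representations share the Bag of Words encoding step: hard-assign each local descriptor to its nearest codeword and histogram the counts. A sketch follows; learning the codebook (k-means over training descriptors) is omitted, and the feature dimensions are illustrative.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Normalized histogram of nearest-codeword assignments."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    h = np.bincount(d2.argmin(axis=1), minlength=len(codebook)).astype(float)
    return h / h.sum()

rng = np.random.default_rng(0)
desc = rng.standard_normal((500, 16))    # e.g. dense-trajectory features
hist = bow_histogram(desc, rng.standard_normal((64, 16)))
```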
@InProceedings{7760222,
  author = {K. T. Bagci and A. M. Tekalp},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Managed video services over multi-domain software defined networks},
  year = {2016},
  pages = {120-124},
  abstract = {We introduce a framework for provisioning end-to-end (E2E) managed video services over a multi-domain SDN, where different domains may be operated by different network providers. The proposed framework enables efficient dynamic management of network resources for network providers and ability to request the desired level of quality of experience (QoE) for end users. In the proposed fully-distributed E2E service framework, controllers of different domains negotiate with each other for the service level parameters of specific flows. The main contributions of this paper are a framework to provide E2E video services over multi-domain SDN, where functions that manage E2E services can collaborate with functions that manage network resources of respective domains, and a procedure for optimization of service parameters within each domain. The proposed framework and procedure have been verified over a newly developed large-scale multi-domain SDN emulation environment.},
  keywords = {quality of experience;resource allocation;software defined networking;telecommunication services;video communication;QoE;quality of experience;network resources;network providers;E2E managed video services;end-to-end managed video services;multidomain SDN;software defined networks;Logic gates;Network topology;Quality of service;Switches;Process control;Monitoring},
  doi = {10.1109/EUSIPCO.2016.7760222},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256216.pdf},
}
@InProceedings{7760223,
  author = {H. Zhuang and J. Li and W. Geng and X. Dai and Z. Zhang and A. V. Vasilakos},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Duplexer design and implementation for self-interference cancellation in full-duplex communications},
  year = {2016},
  pages = {125-129},
  abstract = {The full-duplex (FD) based devices are capable of concurrently transmitting and receiving signals with a single frequency band. However, a severe self-interference (SI) due to the large difference between the power of the devices' own transmission and that of the signal of interest may be imposed on the FD based devices, thus significantly eroding the received signal-to-interference-plus-noise ratio (SINR). To implement the FD devices, the SI power must be sufficiently suppressed to provide a high-enough received SINR for satisfying the decoding requirement. In this paper, the design and implementation of the duplexer for facilitating SI cancellation in FD based devices are investigated, with a new type of duplexer (i.e. an improved directional coupler) designed and verified. It is shown that the SI suppression capability may be up to 36 dB by using the proposed design, which is much higher than that attainable in the commonly designed ferrite circulator.},
  keywords = {circulators;decoding;interference suppression;multiplexing;duplexer design;self-interference cancellation;full-duplex communications;full-duplex based devices;signal-to-interference-plus-noise ratio;SINR;FD based devices;SI cancellation;SI suppression capability;ferrite circulator;Interference cancellation;Directional couplers;Microstrip;Receiving antennas;Transmitting antennas;Couplings;Circulators;Full-duplex Wireless Communications;Self-Interference Cancellation;Duplexer;Directional Coupler},
  doi = {10.1109/EUSIPCO.2016.7760223},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570245863.pdf},
}
@InProceedings{7760224,
  author = {F. Zabini and A. Calisti and D. Dardari and A. Conti},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Random sampling via sensor networks: Estimation accuracy vs. energy consumption},
  year = {2016},
  pages = {130-134},
  abstract = {The estimation of spatial processes from sparse sensing nodes is fundamental for many applications, including environmental monitoring and crowd-sourcing. In this paper, we analyze the impact of measurement errors on the estimation of a finite-energy signal sampled by a set of sensors randomly deployed in a finite d-dimensional space according to homogeneous Poisson Point Process. The optimal linear space invariant interpolator is derived. Based on such an interpolator, analytical expressions of both the estimated signal energy spectral density and the normalized estimation mean square error are obtained. An asymptotic analysis for high sensors density with respect to the signal bandwidth is given for scenarios subjected to estimation energy constraint. The normalized estimation mean square error is derived for large wireless sensor networks with constraints on the capacity-per-volume and on battery duration.},
  keywords = {interpolation;mean square error methods;power consumption;signal sampling;stochastic processes;telecommunication power management;wireless sensor networks;wireless sensor networks;random sampling;estimation accuracy;energy consumption;spatial process;sparse sensing nodes;environmental monitoring;crowd-sourcing;measurement errors;finite-energy signal sampling;finite d-dimensional space;homogeneous Poisson point process;optimal linear space invariant interpolator;signal energy spectral density;normalized estimation mean square error;asymptotic analysis;Electrostatic discharges;Estimation;Large scale integration;Wireless sensor networks;Distortion;Measurement errors;Bandwidth},
  doi = {10.1109/EUSIPCO.2016.7760224},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256346.pdf},
}
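
The sampling model is easy to simulate: sensor positions drawn from a homogeneous Poisson point process and noisy samples of a finite-energy signal. The kernel smoother below is a naive stand-in for the optimal linear space-invariant interpolator derived in the paper; density, noise level, and bandwidth are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, T = 20.0, 10.0                        # sensor density, region length
n = rng.poisson(lam * T)                   # homogeneous PPP on [0, T]
s = np.sort(rng.uniform(0, T, n))          # sensor positions
f = lambda u: np.sinc(2.0 * (u - T / 2))   # finite-energy test signal
y = f(s) + 0.05 * rng.standard_normal(n)   # measurement errors

t = np.linspace(0, T, 1000)
K = np.exp(-((t[:, None] - s[None, :]) ** 2) / (2 * 0.05 ** 2))
est = (K @ y) / np.maximum(K.sum(axis=1), 1e-12)   # kernel smoother
print(np.mean((est - f(t)) ** 2))          # empirical estimation MSE
```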
@InProceedings{7760225,
  author = {S. R. Chetupalli and A. Gopalakrishnan and T. V. Sreenivas},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Feature selection and model optimization for semi-supervised speaker spotting},
  year = {2016},
  pages = {135-139},
  abstract = {We explore, experimentally, feature selection and optimization of stochastic model parameters for the problem of speaker spotting. Based on an initially identified segment of speech of a speaker, an iterative model refinement method is developed along with a latent variable mixture model so that segments of the same speaker are identified in a long speech record. It is found that a GMM with moderate number of mixtures is better suited for the task than a large number mixture model as used in speaker identification. Similarly, a PCA based low-dimensional projection of MFCC based feature vector provides better performance. We show that about 6 seconds of initially identified speaker data is sufficient to achieve > 90% performance of speaker segment identification.},
  keywords = {iterative methods;mixture models;speaker recognition;feature selection;model optimization;semisupervised speaker spotting;stochastic model parameter;speaker speech segmentation;iterative model refinement method;latent variable mixture model;long-speech record;PCA-based low-dimensional projection;MFCC-based feature vector;speaker segment identification;Adaptation models;Speech;Training;Data models;Mel frequency cepstral coefficient;Optimization;Principal component analysis;Speaker spotting;Speaker verification;Speaker diarization;Gaussian mixture model (GMM);Mel-Frequency Cepstral Coefficients (MFCCs)},
  doi = {10.1109/EUSIPCO.2016.7760225},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256557.pdf},
}
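
The feature step is ordinary PCA; a sketch via the SVD is below. Feeding the projected MFCC frames to a moderate-size GMM, as the paper recommends, is omitted, and the dimensions used are illustrative.

```python
import numpy as np

def pca_project(F, d):
    """Rows of F are feature vectors (e.g. MFCC frames); returns the
    top-d projection plus the basis and mean needed for new data."""
    mu = F.mean(axis=0)
    _, _, Vt = np.linalg.svd(F - mu, full_matrices=False)
    return (F - mu) @ Vt[:d].T, Vt[:d], mu

rng = np.random.default_rng(0)
P, basis, mu = pca_project(rng.standard_normal((300, 13)), d=5)
```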
@InProceedings{7760226,
  author = {I. Santamaria and L. L. Scharf and D. Cochran and J. Via},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Passive detection of rank-one signals with a multiantenna reference channel},
  year = {2016},
  pages = {140-144},
  abstract = {In this work we consider a two-channel passive detection problem, in which there is a surveillance array where the presence/absence of a target signal is to be detected, and a reference array that provides a noise-contaminated version of the target signal. We assume that the transmitted signal is an unknown rank-one signal, and that the noises are uncorrelated between the two channels, but each one having an unknown and arbitrary spatial covariance matrix. We show that the generalized likelihood ratio test (GLRT) for this problem rejects the null hypothesis when the largest canonical correlation of the sample coherence matrix between the surveillance and the reference channels exceeds a threshold. Further, based on recent results from random matrix theory, we provide an approximation for the null distribution of the test statistic.},
  keywords = {approximation theory;covariance matrices;passive radar;radar detection;radar signal processing;search radar;null distribution;random matrix theory;approximation theory;reference channel;surveillance channel;sample coherence matrix;canonical correlation;GLRT;generalized likelihood ratio test;arbitrary spatial covariance matrix;unknown spatial covariance matrix;noise-contaminated version;surveillance array;two-channel passive detection problem;multiantenna reference channel;rank-one signal passive detection;Covariance matrices;Surveillance;Correlation;Maximum likelihood estimation;Europe;Signal processing;Coherence;Passive detection;generalized likelihood ratio test;reduced-rank;canonical correlations;random matrix theory},
  doi = {10.1109/EUSIPCO.2016.7760226},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570243281.pdf},
}
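
The test statistic is the largest canonical correlation between the two channels; a sketch is below (rows are array snapshots; real-valued data for simplicity). The detection threshold, which the paper approximates with random-matrix results, is not reproduced.

```python
import numpy as np

def largest_canonical_corr(X, Y):
    """Largest singular value of the sample coherence matrix
    Sxx^{-1/2} Sxy Syy^{-1/2} between surveillance X and reference Y."""
    X, Y = X - X.mean(axis=0), Y - Y.mean(axis=0)
    def isqrt(S):                         # inverse matrix square root
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    C = isqrt(X.T @ X) @ (X.T @ Y) @ isqrt(Y.T @ Y)
    return np.linalg.svd(C, compute_uv=False)[0]

rng = np.random.default_rng(0)
print(largest_canonical_corr(rng.standard_normal((500, 4)),
                             rng.standard_normal((500, 3))))  # small under H0
```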
@InProceedings{7760227,
  author = {Y. V. Zakharov and V. H. Nascimento},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Sparse sliding-window RLS adaptive filter with dynamic regularization},
  year = {2016},
  pages = {145-149},
  abstract = {Recently, a sliding-window RLS (SRLS) adaptive algorithm based on dynamic regularization, involving a time- and tap-varying diagonal loading (VDL), has been proposed, which is equivalent to the proportionate affine projection algorithm (PAPA) used for sparse identification. The complexity of this SRLS algorithm is significantly lower than the PAPA complexity. However, its identification performance (the same as the PAPA performance) does not approach that of the oracle SRLS algorithm. We propose here a new version of the SRLS-VDL algorithm that does achieve next-to-oracle performance. We arrive at this algorithm by minimizing the least squares cost function with a penalty. We also propose a modified penalty that takes into account both the sparsity of the unknown system and additive noise. Numerical results with speech signals in an acoustic echo cancellation scenario show that the proposed algorithm outperforms other sparse estimation techniques and its performance is close to the oracle performance.},
  keywords = {acoustic signal processing;adaptive filters;echo suppression;least squares approximations;minimisation;recursive filters;speech processing;sparse sliding-window RLS adaptive filter;dynamic regularization;time-varying diagonal loading;tap-varying diagonal loading;proportionate affine projection algorithm;PAPA;sparse identification;SRLS-VDL algorithm;least square cost function minimization;acoustic echo cancellation scenario;speech signal;Signal processing algorithms;Heuristic algorithms;Complexity theory;Loading;Cost function;Europe;Signal processing},
  doi = {10.1109/EUSIPCO.2016.7760227},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570245965.pdf},
}
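
The variable-diagonal-loading idea can be conveyed with a batch stand-in: solve one window's least squares with tap-dependent loading inversely proportional to the current tap magnitudes, which imitates an l1-type penalty (small taps get loaded heavily and shrink; large taps are barely touched). The paper's low-complexity recursive update is not reproduced, and all constants are assumptions.

```python
import numpy as np

def window_ls_vdl(X, y, delta=1e-2, eps=1e-3, inner=5):
    """Iteratively reweighted, diagonally loaded LS for one data window."""
    w = np.zeros(X.shape[1])
    for _ in range(inner):
        D = np.diag(delta / (np.abs(w) + eps))   # tap-varying loading
        w = np.linalg.solve(X.T @ X + D, X.T @ y)
    return w

rng = np.random.default_rng(0)
h = np.zeros(32); h[3], h[17] = 1.0, -0.5        # sparse system
X = rng.standard_normal((100, 32))
print(np.round(window_ls_vdl(X, X @ h + 0.01 * rng.standard_normal(100)), 2))
```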
@InProceedings{7760228,
  author = {S. Maanan and B. Dumitrescu and C. D. Giurcăneanu},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Renormalized maximum likelihood for multivariate autoregressive models},
  year = {2016},
  pages = {150-154},
  abstract = {Renormalized maximum likelihood (RNML) is a powerful concept from information theory. We show how it can be used to derive a criterion for selecting the order of vector autoregressive (VAR) processes. We prove that RNML criterion is strongly consistent. We also demonstrate empirically its good performance for examples of VAR which have been considered in recent literature because they possess a particular type of sparsity. In our experiments, we pay a special attention to models for which the inverse spectral density matrix (ISDM) has a specific sparsity pattern. The interest on these models is motivated by the relationship between sparse structure of ISDM and the problem of inferring the conditional independence graph for multivariate time series.},
  keywords = {autoregressive processes;graph theory;maximum likelihood estimation;spectral analysis;conditional independence graph;ISDM;inverse spectral density matrix;vector autoregressive process;RNML;multivariate autoregressive models;renormalized maximum likelihood;Time series analysis;Covariance matrices;Maximum likelihood estimation;Reactive power;Correlation;Upper bound;Signal processing;Renormalized maximum likelihood;vector autoregressive model;order selection;maximum entropy;convex optimization},
  doi = {10.1109/EUSIPCO.2016.7760228},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250910.pdf},
}
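
Order selection with any such criterion has the shape sketched below: fit a VAR(p) by least squares for each candidate order and minimize a penalized fit term. A BIC-style penalty is used here purely as a placeholder; the paper's RNML criterion replaces it with the renormalized-ML code length.

```python
import numpy as np

def var_residual_cov(Y, p):
    """LS fit of a VAR(p) to rows of Y (T x m); residual covariance."""
    T = len(Y)
    Z = np.hstack([Y[p - i - 1:T - i - 1] for i in range(p)])  # lagged blocks
    E = Y[p:] - Z @ np.linalg.lstsq(Z, Y[p:], rcond=None)[0]
    return E.T @ E / (T - p)

def select_order(Y, pmax=6):
    T, m = Y.shape
    crit = [np.log(np.linalg.det(var_residual_cov(Y, p)))
            + p * m * m * np.log(T - p) / (T - p)    # placeholder penalty
            for p in range(1, pmax + 1)]
    return int(np.argmin(crit)) + 1

rng = np.random.default_rng(0)
Y = np.zeros((400, 2))
for t in range(2, 400):                              # a stable VAR(2)
    Y[t] = 0.5 * Y[t - 1] - 0.3 * Y[t - 2] + rng.standard_normal(2)
print(select_order(Y))                               # typically 2
```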
@InProceedings{7760229,
  author = {M. A. Colominas and G. Schlotthauer},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Empirical mode decomposition in a time-scale framework},
  year = {2016},
  pages = {155-159},
  abstract = {The analysis of multicomponent signals, made of a small number of amplitude modulated - frequency modulated components that are overlapped in time and frequency, has gained considerable attention in the past years. These signals are often analyzed via Continuous Wavelet Transform (CWT) looking for ridges the components generate on it. In this approach one ridge is equivalent for one mode. The Empirical Mode Decomposition (EMD) is a data-driven method which can separate a signal into components ideally made of several ridges. Unfortunately EMD is defined as an algorithm output, with no analytical definition. It is our purpose to merge the data-driven nature of EMD with the CWT, performing an adaptive signal decomposition in a time-scale framework. We give here a new mode definition, and develop a new mode extraction algorithm. Two artificial signals are analyzed, and results are compared with those of synchrosqueezing ridge-based decomposition, showing advantages for our proposal.},
  keywords = {adaptive signal processing;wavelet transforms;empirical mode decomposition;time-scale framework;multicomponent signals;frequency modulated components;continuous wavelet transform;CWT;EMD;adaptive signal decomposition;mode extraction algorithm;Continuous wavelet transforms;Empirical mode decomposition;Frequency modulation;Time-frequency analysis;Wavelet analysis},
  doi = {10.1109/EUSIPCO.2016.7760229},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251222.pdf},
}
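
For orientation, classical EMD extracts each mode by "sifting": subtract the mean of the upper and lower spline envelopes through the local extrema, and repeat. One sifting step is sketched below with naive boundary handling; the paper's contribution is precisely to replace this procedural definition with a mode definition in the CWT time-scale plane.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One EMD sifting step: x minus the mean of its extremal envelopes."""
    t = np.arange(len(x))
    mx = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] >= x[i + 1]]
    mn = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] <= x[i + 1]]
    if len(mx) < 4 or len(mn) < 4:
        return x                          # too few extrema for envelopes
    upper = CubicSpline(mx, x[mx])(t)     # spline through the maxima
    lower = CubicSpline(mn, x[mn])(t)     # spline through the minima
    return x - (upper + lower) / 2.0

t = np.arange(512)
detail = sift_once(np.sin(0.3 * t) + 0.4 * np.sin(1.3 * t))
```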
@InProceedings{7760230,
  author = {A. Napolitano},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On cyclic spectrum estimation with estimated cycle frequency},
  year = {2016},
  pages = {160-164},
  abstract = {The problem of cyclic spectrum estimation for almost-cyclostationary processes with unknown cycle frequencies is addressed. This problem arises in spectrum sensing and source location algorithms in the presence of relative motion between transmitter and receiver. Sufficient conditions on the process and the cycle frequency estimator are derived such that frequency-smoothed cyclic periodograms with estimated cycle frequencies are mean-square consistent and asymptotically jointly complex normal. Under the same conditions, the asymptotic complex normal law is shown to coincide with the normal law of the case of known cycle frequencies. Monte Carlo simulations corroborate the effectiveness of the theoretical results.},
  keywords = {frequency estimation;Monte Carlo methods;radio spectrum management;signal detection;cyclic spectrum estimation;cycle frequency estimation;almost-cyclostationary processes;spectrum sensing;source location algorithms;frequency smoothed cyclic periodogram;mean-square consistent;asymptotic complex normal law;Monte Carlo simulation;Frequency estimation;Signal processing algorithms;Fourier transforms;Correlation;Europe;Spectral analysis;Cyclostationarity;Asymptotic Normality;Doppler effect},
  doi = {10.1109/EUSIPCO.2016.7760230},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251548.pdf},
}
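
The estimator itself is compact in discrete form; what the paper analyzes is its behaviour when the cycle frequency must itself be estimated. A sketch at a given cycle-frequency bin, with rectangular smoothing (an assumed window choice), follows.

```python
import numpy as np

def cyclic_periodogram(x, alpha_bin, width=33):
    """Frequency-smoothed cyclic periodogram at cycle frequency
    alpha = alpha_bin / N cycles/sample: smooth X(f) X*(f - alpha)."""
    N = len(x)
    X = np.fft.fft(x)
    P = X * np.conj(np.roll(X, alpha_bin)) / N   # roll gives X[k - alpha_bin]
    return np.convolve(P, np.ones(width) / width, mode='same')

n = np.arange(4096)               # AM signal: a cyclostationary test input
x = (1 + 0.8 * np.cos(2 * np.pi * 0.02 * n)) * np.cos(2 * np.pi * 0.2 * n)
S = cyclic_periodogram(x, alpha_bin=round(0.02 * 4096))
```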
@InProceedings{7760231,
  author = {K. Ozols and R. Shavelis},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Amplitude adaptive ASDM without envelope encoding},
  year = {2016},
  pages = {165-169},
  abstract = {In this paper a method of encoding the signals by using Amplitude Adaptive Asynchronous Sigma-Delta modulator (AA-ASDM) scheme without an additional envelope encoding of the signal is proposed. According to AA-ASDM, the time-varying envelope function of the input signal is used in the feedback loop to reduce the switching rate of the output trigger and thus the power consumption of the circuit. In previous work, the signal and its envelope function were encoded and transmitted separately, thus resulting in inefficiency, since two signals instead of one were required to be transmitted. In order to solve this inefficiency, in this paper it is proposed to select a time-varying envelope function which does not require additional encoding and transmission, while still being able to recover the original signal from the obtained time sequence. The proposed method is particularly advantageous for signals with wide dynamic range.},
  keywords = {encoding;sigma-delta modulation;signal processing;amplitude adaptive asynchronous sigma-delta modulator scheme;amplitude adaptive ASDM;envelope encoding;time varying envelope function;Power demand;Sigma-delta modulation;Modulation;Encoding;Switching circuits;Energy consumption;Switches},
  doi = {10.1109/EUSIPCO.2016.7760231},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251872.pdf},
}
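
To fix ideas, a basic (non-adaptive) ASDM encoder is simulated below: an integrator driven by x - b*z and a Schmitt trigger whose flip times are the code. Replacing the fixed bias b by a time-varying envelope is exactly the amplitude-adaptive step the paper refines; all constants are illustrative, and stable operation assumes b > max|x|.

```python
import numpy as np

def asdm_encode(x, dt, b=1.2, delta=0.05):
    """Return the trigger crossing times that encode the input signal."""
    y, z, times = 0.0, 1.0, []
    for n, xn in enumerate(x):
        y += (xn - b * z) * dt                       # integrator
        if (y >= delta and z < 0) or (y <= -delta and z > 0):
            z = -z                                   # Schmitt trigger flips
            times.append(n * dt)
    return np.array(times)

t = np.arange(0, 1.0, 1e-4)
ticks = asdm_encode(np.sin(2 * np.pi * 5 * t), 1e-4)
```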
@InProceedings{7760232,
  author = {D. B. Haddad and M. R. Petraglia and A. Petraglia},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A unified approach for sparsity-aware and maximum correntropy adaptive filters},
  year = {2016},
  pages = {170-174},
  abstract = {Adaptive filters that employ sparse constraints or maximum correntropy criterion (MCC) have been derived from stochastic gradient techniques. This paper provides a deterministic optimization framework which unifies the derivation of such algorithms. The proposed framework has also the ability of providing geometric insights about the adaptive filter updating. New algorithms that exploit both impulse responses sparsity and MCC are proposed, and an estimate of their steady-state MSE is advanced. Simulations show the advantages of the proposed algorithms in the identification of a sparse system with non-Gaussian additive noise.},
  keywords = {adaptive filters;compressed sensing;gradient methods;mean square error methods;stochastic programming;transient response;maximum correntropy criterion;sparse constraint;MCC;steady-state MSE estimation;maximum correntropy adaptive filter;sparsity-aware adaptive filter;stochastic gradient technique;optimization framework;impulse response sparsity;sparse system identification;nonGaussian additive noise;Signal processing algorithms;Steady-state;Algorithm design and analysis;Mathematical model;Optimization;Europe;Signal processing;Adaptive filtering;maximum correntropy criterium;sparse impulse response;mean-square analysis},
  doi = {10.1109/EUSIPCO.2016.7760232},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251895.pdf},
}
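
The correntropy criterion shows up as a one-line change to LMS: the update is weighted by a Gaussian kernel of the error, so impulsive outliers barely move the filter. A minimal MCC-LMS sketch follows (step size and kernel width are illustrative); the sparsity-aware variants the paper derives add further terms to this update.

```python
import numpy as np

def mcc_lms(x, d, n_taps=8, mu=0.05, sigma=1.0):
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]          # regressor, newest first
        e = d[n] - w @ u
        w += mu * np.exp(-e ** 2 / (2 * sigma ** 2)) * e * u  # kernel weight
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([1.0, -0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h)[:len(x)]
d += np.where(rng.random(len(x)) < 0.01,           # impulsive outliers
              10 * rng.standard_normal(len(x)), 0.0)
print(np.round(mcc_lms(x, d), 2))                  # close to h
```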
@InProceedings{7760233,
  author = {P. Giménez-Febrer and A. Pagès-Zamora and R. López-Valcarce},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Online EM-based distributed estimation in sensor networks with faulty nodes},
  year = {2016},
  pages = {175-179},
  abstract = {This paper focuses on the problem of the distributed estimation of a parameter vector based on noisy observations regularly acquired by the nodes of a wireless sensor network and assuming that some of the nodes have faulty sensors. We propose two online schemes, both centralized and distributed, based on the Expectation-Maximization (EM) algorithm. These algorithms are able to identify and disregard the faulty nodes, and provide a refined estimate of the parameters each time instant after a new set of observations is acquired. Simulation results demonstrate that the centralized versions of the proposed online algorithms attain the same estimation error as the centralized batch EM, whereas the distributed versions come very close to matching the batch EM.},
  keywords = {estimation theory;expectation-maximisation algorithm;wireless sensor networks;wireless sensor network;online EM-based distributed estimation;faulty nodes;parameter vector;noisy observations;expectation-maximization algorithm;Signal processing algorithms;Wireless sensor networks;Estimation;Approximation algorithms;Mathematical model;Europe;Signal processing;online expectation-maximization algorithms;distributed estimation;sensor networks},
  doi = {10.1109/EUSIPCO.2016.7760233},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251902.pdf},
}
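
A batch sketch of the EM idea for a scalar parameter is given below: each node is either "good" (observations centered on the parameter) or "faulty" (here assumed to emit zero-mean noise, an illustrative fault model with known variances), and the M-step weights nodes by their posterior reliability. The paper's online and distributed versions replace these batch sums with recursive and in-network updates.

```python
import numpy as np

def em_faulty(Y, iters=50, s_good=1.0, s_bad=5.0):
    """Y is (nodes x samples). Returns theta and P(node is good)."""
    N, T = Y.shape
    theta, pi = Y.mean(), 0.5
    for _ in range(iters):
        # E-step: per-node log-likelihoods under the two hypotheses
        lg = -0.5 * ((Y - theta) ** 2).sum(1) / s_good ** 2 - T * np.log(s_good)
        lb = -0.5 * (Y ** 2).sum(1) / s_bad ** 2 - T * np.log(s_bad)
        r = 1.0 / (1.0 + (1 - pi) / pi * np.exp(np.clip(lb - lg, -50, 50)))
        # M-step: reliability-weighted parameter estimate
        theta = (r[:, None] * Y).sum() / (T * r.sum())
        pi = r.mean()
    return theta, r

rng = np.random.default_rng(0)
good = rng.random(20) < 0.7
Y = np.where(good[:, None], 2.0 + rng.standard_normal((20, 50)),
             5.0 * rng.standard_normal((20, 50)))
theta, r = em_faulty(Y)
print(round(theta, 2))                     # close to the true value 2.0
```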
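A minimal batch, scalar caricature of the idea (the paper's algorithms are online and distributed): EM alternates between scoring how likely each node is to be working and re-estimating the parameter from the responsibility-weighted observations. The two-component Gaussian model below is an assumption for illustration.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

theta_true, sigma, sigma_f = 2.0, 0.5, 4.0
n_nodes, p_good = 50, 0.8
good = rng.random(n_nodes) < p_good
y = np.where(good, theta_true + sigma * rng.standard_normal(n_nodes),
                   sigma_f * rng.standard_normal(n_nodes))

theta, pi = np.median(y), 0.5          # crude initialization
for _ in range(50):
    # E-step: posterior probability that each node is working.
    lw = pi * norm.pdf(y, theta, sigma)
    lf = (1 - pi) * norm.pdf(y, 0.0, sigma_f)
    gamma = lw / (lw + lf)
    # M-step: responsibility-weighted estimate of theta and of the prior.
    theta = np.sum(gamma * y) / np.sum(gamma)
    pi = gamma.mean()

print(f"theta = {theta:.3f} (true {theta_true}), est. fraction working = {pi:.2f}")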
Peak-error-constrained sparse FIR filter design using iterative L1 optimization. Jiang, A.; Kwan, H. K.; Zhu, Y.; Liu, X.; Xu, N.; and Yao, X. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 180-184, Aug 2016.
@InProceedings{7760234,
  author = {A. Jiang and H. K. Kwan and Y. Zhu and X. Liu and N. Xu and X. Yao},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Peak-error-constrained sparse FIR filter design using iterative L1 optimization},
  year = {2016},
  pages = {180-184},
  abstract = {In this paper, a novel algorithm is presented for the design of sparse linear-phase FIR filters. Compared to traditional l1-optimization-based methods, the proposed algorithm minimizes the l1 norm of a portion (instead of all) of the nonzero coefficients. In this way, some nonzero coefficients at crucial positions are not affected by the l1 norm utilized in the objective function. The proposed algorithm employs an iterative procedure in which the index set of these crucial coefficients is updated in each iteration. Simulation results demonstrate that the proposed algorithm can achieve better design results than both greedy methods and traditional l1-optimization-based methods.},
  keywords = {FIR filters;iterative methods;minimisation;peak-error-constrained sparse FIR filter design;iterative L1 optimization;sparse linear-phase FIR filter;L1 norm minimization;nonzero coefficient;objective function;greedy method;Finite impulse response filters;Algorithm design and analysis;Signal processing algorithms;Optimization;Approximation error;Indexes;Approximation algorithms;FIR filters;iterative l1 optimization;l0 norm;linear program;sparsity},
  doi = {10.1109/EUSIPCO.2016.7760234},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251921.pdf},
}
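The partial-l1 design reduces to a linear program on a frequency grid. The sketch below follows that recipe for a type-I linear-phase lowpass filter, with a simple keep-the-largest rule for refreshing the crucial index set; the paper's actual update rule and specifications may differ, and all numbers here are illustrative.

import numpy as np
from scipy.optimize import linprog

M = 20                                   # half-order; filter length N = 2M + 1
wgrid = np.linspace(0, np.pi, 256)
pb, sb = wgrid <= 0.40 * np.pi, wgrid >= 0.55 * np.pi
grid = np.concatenate([wgrid[pb], wgrid[sb]])
d = np.concatenate([np.ones(pb.sum()), np.zeros(sb.sum())])
delta = 0.05                             # peak-error constraint
# Type-I amplitude A(w) = C @ g, with h = [g[M]/2, ..., g[1]/2, g[0], g[1]/2, ..., g[M]/2].
C = np.cos(np.outer(grid, np.arange(M + 1)))

def solve_partial_l1(K):
    """min sum_{k not in K} |g_k|  s.t.  |C g - d| <= delta  (a linear program)."""
    n = M + 1
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective acts on the t-variables
    c[n + np.asarray(K, dtype=int)] = 0.0           # crucial coefficients escape the l1 term
    A_ub = np.block([[C, np.zeros_like(C)],         #  C g <= d + delta
                     [-C, np.zeros_like(C)],        # -C g <= -(d - delta)
                     [np.eye(n), -np.eye(n)],       #  g_k - t_k <= 0
                     [-np.eye(n), -np.eye(n)]])     # -g_k - t_k <= 0
    b_ub = np.concatenate([d + delta, -(d - delta), np.zeros(2 * n)])
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    assert res.success
    return res.x[:n]

K = []
for _ in range(5):                       # iteratively refresh the "crucial" index set
    g = solve_partial_l1(K)
    K = list(np.argsort(-np.abs(g))[:6]) # keep the 6 largest coefficients unpenalized
g[np.abs(g) < 1e-4] = 0.0                # prune the remaining near-zero coefficients
print("nonzero cosine coefficients:", np.count_nonzero(g), "of", M + 1)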
Sequential low rank representation for blood glucose measurements. Wahby, K.; Demitri, N.; and Zoubir, A. M. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 185-189, Aug 2016.
@InProceedings{7760235,
  author = {K. Wahby and N. Demitri and A. M. Zoubir},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Sequential low rank representation for blood glucose measurements},
  year = {2016},
  pages = {185-189},
  abstract = {We propose using the Low Rank Representation (LRR) method for segmentation of video frames of glucose concentration measurements taken by a novel setup intended for use in a hand-held device. We propose a sequential LRR algorithm that corrects the error in the data matrix at each point in time and uses the corrected matrix as the data matrix for the next step. By fixing the error in the data, we are able to segment the data using a smaller number of frames at an early stage of the chemical reaction. Our aim is to process incoming frames taken by the camera in real time and use them in a sequential manner to segment the images and estimate the feature value of the region of interest. A comparison of standard LRR and sequential LRR is presented. We evaluate both algorithms on real data sets with respect to goodness of segmentation, as well as accuracy of the feature estimates.},
  keywords = {cameras;feature extraction;image representation;image segmentation;sugar;video signal processing;blood glucose measurement;sequential low-rank representation;video frame segmentation;hand-held device;sequential LRR algorithm;data matrix;chemical reaction;camera;feature estimation;image segment;Sugar;Image segmentation;Blood;Chemicals;Strips;Image color analysis;Clustering algorithms;Low Rank Representation;spectral clustering;image segmentation;glucose measurement;photometry},
  doi = {10.1109/EUSIPCO.2016.7760235},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251961.pdf},
}
Fast and accurate cooperative localization in wireless sensor networks. Scheidt, F.; Jin, D.; Muma, M.; and Zoubir, A. M. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 190-194, Aug 2016.
@InProceedings{7760236,
  author = {F. Scheidt and D. Jin and M. Muma and A. M. Zoubir},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Fast and accurate cooperative localization in wireless sensor networks},
  year = {2016},
  pages = {190-194},
  abstract = {Cooperative localization capability is a highly desirable characteristic of wireless sensor networks. It has attracted considerable research attention in academia and industry. The sum-product algorithm over a wireless sensor network (SPAWN) is a powerful method to cooperatively estimate the positions of many sensors (agents) using knowledge of the absolute positions of a few sensors (anchors). Drawbacks of SPAWN, however, are its high computational complexity and communication load. In this paper we address the complexity issue, reformulate it as a convolution problem and utilize the fast Fourier transform (FFT), culminating in a fast and accurate localization algorithm, which we named SPAWN-FFT. Our simulation results show SPAWN-FFT's superiority over SPAWN regarding the computational effort, while maintaining its full flexibility and localization performance.},
  keywords = {computational complexity;convolution;cooperative communication;fast Fourier transforms;wireless sensor networks;SPAWN-FFT;fast Fourier transform;convolution problem;communication load;computational complexity;positions estimation;sum-product algorithm;cooperative localization capability;wireless sensor network;Kernel;Wireless sensor networks;Complexity theory;Sensors;Convolution;Bandwidth;Interpolation;Cooperative localization;SPAWN;FFT;Kernel bandwidth;Efficient computation},
  doi = {10.1109/EUSIPCO.2016.7760236},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252104.pdf},
}
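The key observation is that a SPAWN message is the neighbor's position belief convolved with a ranging-likelihood kernel, so on a grid it can be computed with FFT-based convolution instead of a quadratic-cost direct sum. A minimal sketch, with an assumed Gaussian ranging model and illustrative geometry:

import numpy as np
from scipy.signal import fftconvolve

# Grid over a 10 m x 10 m area.
res = 0.1
xs = np.arange(0, 10, res)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Neighbor's current position belief: a blob around (3, 4).
belief = np.exp(-((X - 3) ** 2 + (Y - 4) ** 2) / (2 * 0.5 ** 2))
belief /= belief.sum()

# Ranging kernel: likelihood of the measured distance as a function of the
# displacement between the two nodes (an annulus of radius r_meas).
r_meas, sigma_r = 2.5, 0.2
half = np.arange(-4, 4 + res, res)
DX, DY = np.meshgrid(half, half, indexing="ij")
kernel = np.exp(-(np.hypot(DX, DY) - r_meas) ** 2 / (2 * sigma_r ** 2))
kernel /= kernel.sum()

# The outgoing message is belief (*) kernel; the FFT turns the per-pixel sum
# into a frequency-domain multiplication, giving the claimed speedup.
message = fftconvolve(belief, kernel, mode="same")
print(message.shape, message.sum())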
Impact of noise correlation on multimodality. Chlaily, S.; Amblard, P.; Michel, O.; and Jutten, C. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 195-199, Aug 2016.
@InProceedings{7760237,
  author = {S. Chlaily and P. Amblard and O. Michel and C. Jutten},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Impact of noise correlation on multimodality},
  year = {2016},
  pages = {195-199},
  abstract = {In this paper, we consider the problem of estimating an unknown random scalar observed by two modalities. We study two scenarios using mutual information and mean square error. In the first scenario, we consider that the noise correlation is known and examine its impact on the information content of the two modalities. In the second scenario we quantify the information loss when the considered value of the noise correlation is wrong. It is shown that the noise correlation usually enhances the estimation accuracy and increases information. However, the performance declines if the noise correlation is misspecified, and the two modalities may jointly convey less information than a single modality.},
  keywords = {estimation theory;interference (signal);mean square error methods;signal processing;noise correlation;unknown random scalar estimation;mutual information;mean square error;estimation accuracy;Correlation;Uncertainty;Probability density function;Signal processing;Mutual information;Europe;Random variables},
  doi = {10.1109/EUSIPCO.2016.7760237},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252173.pdf},
}
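For the first scenario (known noise correlation), the jointly Gaussian case admits closed forms: the posterior variance of the scalar is the MMSE, and the mutual information is half the log ratio of prior to posterior variance. A small numerical check, with assumed prior and noise variances:

import numpy as np

sig_theta, s1, s2 = 1.0, 0.5, 0.7     # prior std and the two modality noise stds
ones = np.ones(2)

for rho in (-0.8, -0.4, 0.0, 0.4, 0.8):
    Sn = np.array([[s1**2, rho * s1 * s2],
                   [rho * s1 * s2, s2**2]])
    # Gaussian posterior variance of theta given both modalities (the MMSE),
    # and the corresponding mutual information I(theta; y1, y2).
    mmse = 1.0 / (1.0 / sig_theta**2 + ones @ np.linalg.solve(Sn, ones))
    mi = 0.5 * np.log(sig_theta**2 / mmse)
    print(f"rho = {rho:+.1f}   MMSE = {mmse:.4f}   MI = {mi:.3f} nats")

Running this shows the abstract's first claim numerically: a strong (here negative) noise correlation reduces the MMSE and increases the mutual information relative to the uncorrelated case.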
Separable autoregressive moving average graph-temporal filters. Isufi, E.; Loukas, A.; Simonetto, A.; and Leus, G. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 200-204, Aug 2016.
@InProceedings{7760238,
  author = {E. Isufi and A. Loukas and A. Simonetto and G. Leus},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Separable autoregressive moving average graph-temporal filters},
  year = {2016},
  pages = {200-204},
  abstract = {Despite their widespread use for the analysis of graph data, current graph filters are designed for graph signals that do not change over time, and thus they cannot simultaneously process time and graph frequency content in an adequate manner. This work presents ARMA2D, an autoregressive moving average graph-temporal filter that jointly captures the signal variations over the graph and over time. By its unique nature, this filter is able to achieve a separable 2-dimensional frequency response, making it possible to approximate the filtering specifications along both the graph and temporal frequency domains. Numerical results show that the proposed solution outperforms state-of-the-art graph filters when the graph signal is time-varying.},
  keywords = {autoregressive moving average processes;filtering theory;frequency response;temporal frequency domains;separable 2-dimensional frequency response;signal variations;ARMA2D;graph frequency content;graph signals;graph data;separable autoregressive moving average graph-temporal filters;Frequency response;Laplace equations;Stability analysis;Signal processing;Frequency-domain analysis;Time-domain analysis;Eigenvalues and eigenfunctions;signal processing over graphs;graph filters;separable graph-temporal filters;distributed signal processing},
  doi = {10.1109/EUSIPCO.2016.7760238},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252217.pdf},
}
Evaluating dissimilarities between two moving-average models: A comparative study between Jeffrey's divergence and Rao distance. Legrand, L.; and Grivel, É. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 205-209, Aug 2016.
@InProceedings{7760239,
  author = {L. Legrand and É. Grivel},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Evaluating dissimilarities between two moving-average models: A comparative study between Jeffrey's divergence and Rao distance},
  year = {2016},
  pages = {205-209},
  abstract = {Autoregressive (AR) and moving-average (MA) models are regularly used in signal processing. Previous work has addressed dissimilarity measures between AR models by using a Riemannian distance, the Jeffrey's divergence (JD), and spectral distances such as the Itakura-Saito divergence. In this paper, we compare the Rao distance and the JD for MA models, in particular for 1st-order MA models, for which an analytical expression of the inverse of the covariance matrix is available. We analyze the advantages of using the Rao distance. Finally, simulations compare both dissimilarity measures as functions of the MA parameters and of the number of available data.},
  keywords = {autoregressive moving average processes;covariance matrices;signal processing;covariance matrix inverse;1st-order MA model;spectral distance;Riemannian distance;signal processing;AR model;autoregressive model;Rao distance;Jeffrey divergence;moving-average model;Computational modeling;Analytical models;Correlation;Signal processing;Manganese;Europe;Covariance matrices;Jeffrey's divergence;Rao distance;moving-average models},
  doi = {10.1109/EUSIPCO.2016.7760239},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252230.pdf},
}
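For an MA(1) process x_t = e_t + b e_{t-1}, the covariance matrix of n consecutive samples is tridiagonal Toeplitz, so the JD between two models has a direct numerical form; the Rao distance lacks a simple general closed form and is omitted here. A sketch, using the JD = KL(p||q) + KL(q||p) convention (some authors include a factor 1/2):

import numpy as np
from scipy.linalg import toeplitz

def ma1_cov(b, sigma2, n):
    """Covariance matrix of n consecutive samples of x_t = e_t + b e_{t-1}."""
    gamma = np.zeros(n)
    gamma[0] = sigma2 * (1 + b**2)   # lag-0 autocovariance
    gamma[1] = sigma2 * b            # lag-1 autocovariance; zero beyond lag 1
    return toeplitz(gamma)

def jeffreys(S1, S2):
    """Symmetrized KL divergence between zero-mean Gaussians (log-dets cancel)."""
    n = S1.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(S1, S2))
                  + np.trace(np.linalg.solve(S2, S1))) - n

n = 64
S1 = ma1_cov(0.5, 1.0, n)
S2 = ma1_cov(0.8, 1.0, n)
print("JD per sample:", jeffreys(S1, S2) / n)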
Approximate joint diagonalization within the Riemannian geometry framework. Bouchard, F.; Korczowski, L.; Malick, J.; and Congedo, M. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 210-214, Aug 2016.
@InProceedings{7760240,
  author = {F. Bouchard and L. Korczowski and J. Malick and M. Congedo},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Approximate joint diagonalization within the Riemannian geometry framework},
  year = {2016},
  pages = {210-214},
  abstract = {We consider the approximate joint diagonalization (AJD) problem related to the well-known blind source separation (BSS) problem within the Riemannian geometry framework. We define a new manifold, named the special polar manifold, equivalent to the set of full-rank matrices whose Gram matrix has unit determinant. The Riemannian trust-region optimization algorithm allows us to define a new method to solve the AJD problem. This method is compared to the previously published NoJOB and UWEDGE algorithms by means of simulations and shows comparable performance. This Riemannian optimization approach thus shows promising results. Since it is also very flexible, it can easily be extended to block AJD or joint BSS.},
  keywords = {blind source separation;matrix algebra;optimisation;AJD problem;Riemannian trust-region optimization algorithm;Gram matrix unit determinant;full rank matrix set;polar manifold equivalent;BSS problem;blind source separation problem;approximate joint diagonalization problem;Riemannian geometry framework;Manifolds;Optimization;Geometry;Symmetric matrices;Measurement;Blind source separation;Signal processing algorithms;approximate joint diagonalization;blind source separation;Riemannian geometry;Riemannian optimization;special polar manifold},
  doi = {10.1109/EUSIPCO.2016.7760240},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252272.pdf},
}
Randomized methods for higher-order subspace separation. da Costa, M. N.; Lopes, R. R.; and Romano, J. M. T. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 215-219, Aug 2016.
@InProceedings{7760241,
  author = {M. N. {da Costa} and R. R. Lopes and J. M. T. Romano},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Randomized methods for higher-order subspace separation},
  year = {2016},
  pages = {215-219},
  abstract = {This paper presents an algorithm for signal subspace separation in the context of multidimensional data. The proposal is an extension of the randomized Singular Value Decomposition (SVD) to higher-order tensors. From a set derived from random sampling, we construct an orthogonal basis associated with the range of each mode-space of the input data tensor. Multilinear projection of the input data onto each mode-space then transforms the data to a low-dimensional representation. Finally, we compute the Higher-Order Singular Value Decomposition (HOSVD) of the reduced tensor. Furthermore, we propose an algorithm for computing the randomized HOSVD based on the row-extraction technique. The results reveal a relevant improvement from the standpoint of computational complexity.},
  keywords = {computational complexity;feature extraction;randomised algorithms;signal sampling;singular value decomposition;tensors;higher-order subspace separation;randomized singular value decomposition method;randomized HOSVD;signal subspace separation;randomized SVD extension;higher-order tensor;random sampling;orthogonal basis;multilinear projection;low-dimensional representation;higher-order singular value decomposition;row-extraction technique;computational complexity;Tensile stress;Matrix decomposition;Singular value decomposition;Europe;Signal processing;Signal processing algorithms;Computational complexity;higher-order singular value decomposition;randomized algorithm;signal subspace method;tensor decomposition;dimension reduction;row-extraction technique},
  doi = {10.1109/EUSIPCO.2016.7760241},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252275.pdf},
}
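A sketch of the random-projection variant described in the abstract (the row-extraction variant is not shown): a randomized range finder per mode, multilinear projection onto the estimated mode-spaces, then an exact SVD of the small core's unfoldings. Shapes, ranks, and noise level are illustrative.

import numpy as np

rng = np.random.default_rng(2)

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def randomized_hosvd(X, ranks, oversample=5):
    """Randomized range finder per mode, then an HOSVD of the reduced core."""
    Qs = []
    for mode, r in enumerate(ranks):
        A = unfold(X, mode)
        Y = A @ rng.standard_normal((A.shape[1], r + oversample))
        Q, _ = np.linalg.qr(Y)           # orthonormal basis for the mode-space range
        Qs.append(Q)
    # Project X onto the estimated mode-spaces to get a small core tensor ...
    core = X
    for mode, Q in enumerate(Qs):
        core = np.moveaxis(np.tensordot(Q.T, core, axes=(1, mode)), 0, mode)
    # ... and recover the mode factors by (exact) SVD on the small unfoldings.
    Us = []
    for mode, (Q, r) in enumerate(zip(Qs, ranks)):
        U, _, _ = np.linalg.svd(unfold(core, mode), full_matrices=False)
        Us.append(Q @ U[:, :r])
    return Us

# Low-multilinear-rank tensor plus noise.
shape, ranks = (60, 50, 40), (4, 3, 2)
G = rng.standard_normal(ranks)
X = np.einsum('abc,ia,jb,kc->ijk', G,
              rng.standard_normal((shape[0], ranks[0])),
              rng.standard_normal((shape[1], ranks[1])),
              rng.standard_normal((shape[2], ranks[2])))
X += 0.01 * rng.standard_normal(shape)

Us = randomized_hosvd(X, ranks)
Xhat = X
for mode, U in enumerate(Us):            # project back onto the estimated subspaces
    Xhat = np.moveaxis(np.tensordot(U @ U.T, Xhat, axes=(1, mode)), 0, mode)
print("relative reconstruction error:", np.linalg.norm(Xhat - X) / np.linalg.norm(X))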
An unsupervised approach to glottal inverse filtering. Ghosh, S.; Laksana, E.; Morency, L.; and Scherer, S. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 220-224, Aug 2016.
@InProceedings{7760242,
  author = {S. Ghosh and E. Laksana and L. Morency and S. Scherer},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {An unsupervised approach to glottal inverse filtering},
  year = {2016},
  pages = {220-224},
  abstract = {The extraction of the glottal volume velocity waveform from voiced speech is a well-known example of a sparse signal recovery problem. Prior approaches have mostly used well-engineered speech processing or convex L1-optimization methods to solve the inverse filtering problem. In this paper, we describe a novel approach to modeling the human vocal tract using an unsupervised dictionary learning framework. We make the assumption of an all-pole model of the vocal tract, and derive an L1 regularized least squares loss function for the all-pole approximation. To evaluate the quality of the extracted glottal volume velocity waveform, we conduct experiments on real-life speech datasets, which include vowels and multi-speaker phonetically balanced utterances. We find that the unsupervised model learns meaningful dictionaries of vocal tracts, and the proposed data-driven unsupervised framework achieves a performance comparable to the IAIF (Iterative Adaptive Inverse Filtering) glottal flow extraction approach.},
  keywords = {approximation theory;convex programming;feature extraction;filtering theory;least squares approximations;speech processing;unsupervised learning;glottal volume velocity waveform extraction;voiced speech;sparse signal recovery problem;speech processing;convex L1-optimization methods;inverse filtering problem;human vocal tract;unsupervised dictionary learning framework;L1 regularized least squares loss function;all-pole approximation;real-life speech datasets;multi-speaker phonetically balanced utterances;meaningful dictionaries;vocal tracts;data-driven unsupervised framework;IAIF;iterative adaptive inverse filtering glottal flow extraction approach;Speech;Dictionaries;Training;Speech processing;Adaptation models;Filtering;Estimation},
  doi = {10.1109/EUSIPCO.2016.7760242},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252319.pdf},
}
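The all-pole model with an l1-penalized (sparse) residual can be approximated by iteratively reweighted least squares. The sketch below is that generic sparse-linear-prediction idea, not the paper's dictionary-learning method, run on synthetic voiced speech so that the l1 residual exposes the sparse glottal pulses.

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)

# Synthetic voiced speech: sparse glottal pulses through an all-pole vocal tract.
poles = [0.97 * np.exp(1j * 0.20 * np.pi),
         0.95 * np.exp(1j * 0.45 * np.pi),
         0.93 * np.exp(1j * 0.70 * np.pi)]
a_true = np.real(np.poly(poles + [np.conj(p) for p in poles]))
exc = np.zeros(800)
exc[::80] = 1.0                            # pitch period of 80 samples
s = lfilter([1.0], a_true, exc) + 1e-3 * rng.standard_normal(800)

# Sparse linear prediction: min_a ||s - X a||_1 via iteratively reweighted LS;
# an l1 residual fits a sparse excitation better than an l2 one.
p = 6
X = np.column_stack([np.concatenate([np.zeros(k), s[:-k]]) for k in range(1, p + 1)])
a = np.linalg.lstsq(X, s, rcond=None)[0]   # l2 initialization
for _ in range(20):
    r = s - X @ a
    sw = 1.0 / np.sqrt(np.maximum(np.abs(r), 1e-6))   # sqrt of the IRLS weights
    a = np.linalg.lstsq(X * sw[:, None], sw * s, rcond=None)[0]

residual = s - X @ a                       # estimate of the (sparse) excitation
print("largest residual spikes at samples:", sorted(np.argsort(-np.abs(residual))[:5]))

On this toy signal the residual spikes line up with the pulse instants (multiples of 80), which is the behaviour the l1 loss is meant to encourage.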
Optimal transmission policies for variance based event triggered estimation with an energy harvesting sensor. Leong, A. S.; Dey, S.; and Quevedo, D. E. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 225-229, Aug 2016.
@InProceedings{7760243,
  author = {A. S. Leong and S. Dey and D. E. Quevedo},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Optimal transmission policies for variance based event triggered estimation with an energy harvesting sensor},
  year = {2016},
  pages = {225-229},
  abstract = {This paper considers a remote state estimation problem where a sensor observes a dynamical process, and transmits local state estimates over an independent and identically distributed (i.i.d.) packet dropping channel to a remote estimator. The sensor is equipped with energy harvesting capabilities. At every discrete time instant, provided there is enough battery energy, the sensor decides whether it should transmit or not, in order to minimize the expected estimation error covariance at the remote estimator. For transmission schedules dependent only on the estimation error covariance at the remote estimator, the energy available at the sensor, and the harvested energy, we establish structural results on the optimal scheduling which show that for a given battery energy level and a given harvested energy, the optimal policy is a threshold policy on the error covariance, i.e. transmit if and only if the error covariance exceeds a certain threshold. Similarly, for a given error covariance and a given harvested energy, the optimal policy is a threshold policy on the battery level. Numerical studies confirm the qualitative behaviour predicted by our structural results.},
  keywords = {energy harvesting;optimisation;state estimation;telecommunication power management;telecommunication scheduling;wireless sensor networks;estimation error covariance;independent identically distributed packet dropping channel;dynamical process;remote state estimation problem;energy harvesting sensor;variance based event triggered estimation;optimal transmission policies;Batteries;Estimation error;Energy harvesting;Europe;Signal processing;Optimization},
  doi = {10.1109/EUSIPCO.2016.7760243},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255814.pdf},
}
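The structural result (transmit when the remote error covariance exceeds a threshold, battery permitting) is easy to exercise for a scalar system. The sketch below assumes the threshold form rather than deriving it, ignores packet drops, and uses made-up dynamics, harvest statistics, and thresholds.

import numpy as np

rng = np.random.default_rng(4)

a, q, P0 = 1.2, 1.0, 0.5        # dynamics, process noise, local estimate covariance
E, Bmax, T = 1.0, 5.0, 10000    # energy per transmission, battery capacity, horizon

def run(threshold):
    P, B, cost = P0, Bmax, 0.0
    for _ in range(T):
        B = min(Bmax, B + (rng.random() < 0.6) * 1.0)   # Bernoulli energy arrivals
        # Threshold policy: spend energy only when the remote error covariance
        # has grown past the threshold (and the battery allows it).
        if B >= E and P >= threshold:
            B -= E
            P = P0                 # remote estimator receives the local estimate
        else:
            P = a**2 * P + q       # open-loop prediction at the remote estimator
        cost += P
    return cost / T

for th in (0.0, 1.0, 2.0, 4.0, 8.0):
    print(f"threshold {th:4.1f}  ->  average error covariance {run(th):6.2f}")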
SIGIBE: Solving random bilinear equations via gradient descent with spectral initialization. Marques, A. G.; Mateos, G.; and Eldar, Y. C. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 230-234, Aug 2016.
@InProceedings{7760244,
  author = {A. G. Marques and G. Mateos and Y. C. Eldar},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {SIGIBE: Solving random bilinear equations via gradient descent with spectral initialization},
  year = {2016},
  pages = {230-234},
  abstract = {We investigate the problem of finding the real-valued vectors h, of size L, and x, of size P, from M independent measurements ym = 〈am, h〉〈bm, x〉, where am and bm are known random vectors. Recovery of the unknowns entails solving a set of bilinear equations, a challenging problem encountered in signal processing tasks such as blind deconvolution for channel equalization or image deblurring. Inspired by the Wirtinger flow approach to the related phase retrieval problem, we propose a solver that proceeds in two steps: (i) first a spectral method is used to obtain an initial guess; which is then (ii) refined using simple and scalable gradient descent iterations to minimize a natural non-convex formulation of the recovery problem. Our method - which we refer to as SIGIBE: Spectral Initialization and Gradient Iterations for Bilinear Equations - can accommodate arbitrary correlations between am and bm. Different from recent approaches to blind deconvolution using convex relaxation, SIGIBE does not require matrix lifting that could hinder the method's scalability. Numerical tests corroborate SIGIBE's effectiveness in various data settings, and show successful recovery with as few as M ≳ (L + P) measurements.},
  keywords = {deconvolution;gradient methods;vectors;convex relaxation;blind deconvolution;spectral initialization and gradient iterations for bilinear equations;recovery problem natural nonconvex formulation minimization;gradient descent iterations;related phase retrieval problem;Wirtinger flow approach;signal processing tasks;random vectors;real-valued vectors;SIGIBE;Signal processing algorithms;Deconvolution;IP networks;Signal processing;Correlation;Mathematical model;Eigenvalues and eigenfunctions;Bilinear equations;blind deconvolution;non-convex optimization;spectral initialization;correlated data},
  doi = {10.1109/EUSIPCO.2016.7760244},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251913.pdf},
}
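The two steps translate directly into code. For i.i.d. Gaussian designs, E[y_m a_m b_m^T] = h x^T, so the leading singular vectors of the empirical average provide the initializer, which plain gradient descent on the least-squares loss then refines. Step size, sizes, and iteration count below are illustrative, not the paper's tuned choices.

import numpy as np

rng = np.random.default_rng(5)

L, P, M = 20, 15, 400
h, x = rng.standard_normal(L), rng.standard_normal(P)
A = rng.standard_normal((M, L))
B = rng.standard_normal((M, P))
y = (A @ h) * (B @ x)                      # y_m = <a_m, h> <b_m, x>

# Spectral initialization: rank-1 approximation of (1/M) sum_m y_m a_m b_m^T.
D = (A * y[:, None]).T @ B / M
U, S, Vt = np.linalg.svd(D)
scale = np.sqrt(S[0])
h_hat, x_hat = scale * U[:, 0], scale * Vt[0]

# Gradient descent on the non-convex least-squares loss.
mu = 1e-4                                  # small, conservative step size
for _ in range(1500):
    r = (A @ h_hat) * (B @ x_hat) - y
    h_hat -= mu * (A.T @ (r * (B @ x_hat)))
    x_hat -= mu * (B.T @ (r * (A @ h_hat)))

# The factors are only identifiable up to a reciprocal scaling.
c = np.dot(h_hat, h) / np.dot(h, h)
print("relative errors:",
      np.linalg.norm(h_hat / c - h) / np.linalg.norm(h),
      np.linalg.norm(x_hat * c - x) / np.linalg.norm(x))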
Improving secrecy rate via cooperative jamming based on Nash Equilibrium. Zamir, N.; Ali, B.; Butt, M. F. U.; and Ng, S. X. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 235-239, Aug 2016.
@InProceedings{7760245,
  author = {N. Zamir and B. Ali and M. F. U. Butt and S. X. Ng},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Improving secrecy rate via cooperative jamming based on Nash Equilibrium},
  year = {2016},
  pages = {235-239},
  abstract = {This paper investigates a power control scheme for a cooperative cognitive communication system which employs an untrusted relay. More explicitly, a friendly jammer transmits a jamming signal enabling secure communication between the source and the destination, in the presence of an untrusted relay. In return, the source compensates the potential jammer with access to its bandwidth for a fraction of its time period. In addition, the cooperative jammer sets its jamming power through the Nash equilibrium to improve the secrecy rate. In our proposed scheme, we employ only one jammer and place it at different locations in order to analyze the achieved secrecy rate and the utility of the jammer. Additionally, we fix the positions of the source and the destination while the relay is moved to different locations.},
  keywords = {cognitive radio;cooperative communication;game theory;jamming;relay networks (telecommunication);telecommunication security;power control scheme;cooperative cognitive communication system;untrusted relay;friendly jammer;jamming signal;cooperative jammer;jamming power;Nash-Equilibrium;secrecy rate;Jamming;Relays;Europe;Signal processing;Wireless communication;Reliability;Nash equilibrium},
  doi = {10.1109/EUSIPCO.2016.7760245},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256289.pdf},
}
Fusion of electroencephalography and functional magnetic resonance imaging to explore epileptic network activity. Hunyadi, B.; Van Paesschen, W.; De Vos, M.; and Van Huffel, S. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 240-244, Aug 2016.
@InProceedings{7760246,
  author = {B. Hunyadi and W. {Van Paesschen} and M. {De Vos} and S. {Van Huffel}},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Fusion of electroencephalography and functional magnetic resonance imaging to explore epileptic network activity},
  year = {2016},
  pages = {240-244},
  abstract = {Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) are two complementary modalities capturing a mixture of various underlying neural sources. The fusion of these modalities promises the best of both worlds, i.e. a better resolution in time and space, respectively. Assuming that EEG and fMRI observations are generated by the same mixing system in both modalities, their fusion can be achieved by joint blind source separation (BSS). We solve the joint BSS problem using different variants of joint independent component analysis (jointICA) and coupled matrix-tensor factorization (CMTF). We demonstrate that EEG-fMRI fusion provides a detailed spatio-temporal characterization of an EEG-fMRI dataset recorded in epilepsy patients, leading to new insights in epileptic network behaviour.},
  keywords = {biomedical MRI;blind source separation;electroencephalography;image fusion;independent component analysis;matrix decomposition;medical image processing;spatiotemporal phenomena;tensors;EEG-fMRI dataset spatio-temporal characterization;epilepsy patient;EEG-fMRI fusion;CMTF;coupled matrix-tensor factorization;jointICA;joint independent component analysis;joint blind source separation;joint BSS problem;neural source mixture;epileptic network activity;functional magnetic resonance imaging fusion;electroencephalography fusion;Electroencephalography;Temporal lobe;Tensile stress;Epilepsy;Signal processing;Europe;Visualization},
  doi = {10.1109/EUSIPCO.2016.7760246},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255178.pdf},
}
Classification of fMRI data using dynamic time warping based functional connectivity analysis. Meszlényi, R.; Peska, L.; Gál, V.; Vidnyánszky, Z.; and Buza, K. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 245-249, Aug 2016.
@InProceedings{7760247,
  author = {R. Meszlényi and L. Peska and V. Gál and Z. Vidnyánszky and K. Buza},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Classification of fMRI data using dynamic time warping based functional connectivity analysis},
  year = {2016},
  pages = {245-249},
  abstract = {The synchronized spontaneous low-frequency fluctuations of the BOLD signal, as captured by functional MRI measurements, are known to represent the functional connections of different brain areas. The aforementioned MRI measurements result in high-dimensional time series, the dimensions of which correspond to the activity of different brain regions. Recently we have shown that the Dynamic Time Warping (DTW) distance can be used as a similarity measure between BOLD signals of brain regions as an alternative to the traditionally used correlation coefficient. We have characterized the new metric's stability in multiple measurements, and between subjects in homogeneous groups. In this paper we investigated the DTW metric's sensitivity and demonstrated that DTW-based models outperform correlation-based models in resting-state fMRI data classification tasks. Additionally, we show that functional connectivity networks resulting from DTW-based models, as compared to the correlation-based models, are more stable and sensitive to differences between healthy subjects and patient groups.},
  keywords = {biomedical MRI;brain;fluctuations;neurophysiology;signal classification;synchronisation;time series;fMRI data classification;dynamic time warping based functional connectivity analysis;synchronized spontaneous low-frequency fluctuations;functional MRI measurements;functional connections;brain areas;high-dimensional time series;brain regions;BOLD signals;DTW metric sensitivity;DTW-based models;correlation-based models;resting-state fMRI data classification tasks;functional connectivity networks;Time series analysis;Support vector machines;Time measurement;Correlation;Correlation coefficient;Brain models;fMRI;functional connectivity networks;dynamic time warping;classification},
  doi = {10.1109/EUSIPCO.2016.7760247},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255287.pdf},
}
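A plain dynamic-programming DTW is enough to see the point: a lag that depresses the correlation coefficient barely affects the DTW distance, because the warping path absorbs it. (Practical pipelines typically add a warping-window constraint; the signals below are synthetic stand-ins for two regions' BOLD series.)

import numpy as np

def dtw(a, b):
    """Plain O(len(a) * len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Each cell extends the cheapest of match / insertion / deletion.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

rng = np.random.default_rng(6)
t = np.linspace(0, 6 * np.pi, 200)
roi_a = np.sin(t) + 0.1 * rng.standard_normal(t.size)
roi_b = np.sin(t - 0.6) + 0.1 * rng.standard_normal(t.size)   # lagged copy

print("correlation coefficient:", np.corrcoef(roi_a, roi_b)[0, 1])
print("DTW distance:", dtw(roi_a, roi_b))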
Impact of perceptual learning on resting-state fMRI connectivity: A supervised classification study. Rahim, M.; Ciuciu, P.; and Bougacha, S. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 250-254, Aug 2016.
@InProceedings{7760248,
  author = {M. Rahim and P. Ciuciu and S. Bougacha},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Impact of perceptual learning on resting-state fMRI connectivity: A supervised classification study},
  year = {2016},
  pages = {250-254},
  abstract = {Perceptual learning sculpts ongoing brain activity [1]. This finding has been observed by statistically comparing the functional connectivity (FC) patterns computed from resting-state functional MRI (rs-fMRI) data recorded before and after intensive training on a visual attention task. Hence, functional connectivity serves a dynamic role in brain function, supporting the consolidation of previous experience. Following this line of research, we trained three groups of individuals on a visual discrimination task during a magneto-encephalography (MEG) experiment [2]. The same individuals were then scanned in rs-fMRI. Here, in a supervised classification framework, we demonstrate that FC metrics computed on rs-fMRI data are able to predict the type of training the participants received. On top of that, we show that the prediction accuracies based on the tangent embedding FC measure outperform those based on our recently developed multivariate wavelet-based Hurst exponent estimator [3], which also captures low frequency fluctuations in ongoing brain activity.},
  keywords = {biomedical MRI;fluctuations;learning (artificial intelligence);magnetoencephalography;medical signal processing;signal classification;wavelet transforms;perceptual learning;resting-state fMRI connectivity;brain activity;statistical comparison;functional connectivity patterns;data recording;intensive training;visual attention task;functional connectivity;visual discrimination task;magnetoencephalography;MEG;supervised classification framework;FC metrics;rs-fMRI data;multivariate wavelet-based Hurst exponent estimator;low-frequency fluctuations;Covariance matrices;Visualization;Training;Sparse matrices;Coherence;Measurement;Europe},
  doi = {10.1109/EUSIPCO.2016.7760248},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255897.pdf},
}
Seizure onset zone localization from many invasive EEG channels using directed functional connectivity. van Mierlo, P.; Coito, A.; Vulliémoz, S.; and Lie, O. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 255-259, Aug 2016.
@InProceedings{7760249,
  author = {P. {van Mierlo} and A. Coito and S. Vulliémoz and O. Lie},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Seizure onset zone localization from many invasive EEG channels using directed functional connectivity},
  year = {2016},
  pages = {255-259},
  abstract = {In this study we investigated how directed functional connectivity can be used to localize the seizure onset zone (SOZ) from ictal intracranial EEG (iEEG) recordings. First, simulations were conducted to investigate the performance of two directed functional connectivity measures, the Adaptive Directed Transfer Function (ADTF) and the Adaptive Partial Directed Coherence (APDC), in combination with two graph measures, the out-degree and the shortest path, to localize the SOZ. Afterwards, the method was applied to the seizure of an epileptic patient recorded with 113-channel iEEG, and the localization was compared with the subsequent resection that rendered the patient seizure-free. We found both in simulations and in the patient data that the ADTF combined with the out-degree and the shortest path resulted in correct SOZ localization. We can conclude that the ADTF combined with the out-degree or the shortest path is best suited to localize the SOZ from a high number of iEEG channels.},
  keywords = {electroencephalography;onset zone localization;invasive EEG channels;directed functional connectivity;functional connectivity;seizure onset zone;ictal intracranial EEG recordings;adaptive directed transfer function;adaptive partial directed coherence;113-channel iEEG;ADTF;Brain modeling;Electroencephalography;Epilepsy;Electrodes;Delays;Europe;Seizure onset zone localization;directed functional connectivity;epilepsy;intracranial EEG},
  doi = {10.1109/EUSIPCO.2016.7760249},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256126.pdf},
}
Total-activation regularized deconvolution of resting-state fMRI leads to reproducible networks with spatial overlap. Karahanoğlu, F. I.; and Van De Ville, D. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 260-264, Aug 2016.
@InProceedings{7760250,
  author = {F. I. Karahanoğlu and D. {Van De Ville}},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Total-activation regularized deconvolution of resting-state fMRI leads to reproducible networks with spatial overlap},
  year = {2016},
  pages = {260-264},
  abstract = {Spontaneous activations in resting-state fMRI have been shown to corroborate recurrent intrinsic functional networks. Recent studies have explored integration of brain function in terms of spatially overlapping networks. We have proposed a method to recover not only spatially but also temporally overlapping networks, which we named innovation-driven co-activation patterns (iCAPs). These networks are driven by the sparse innovation signals recovered from Total Activation (TA), a spatiotemporal regularization framework for fMRI deconvolution. The fMRI data is processed with TA, which uses the inverse of the hemodynamic response function - as a linear differential operator - combined with the derivative in the regularization with ℓ1-norm. As a result, sparse innovation signals are reconstructed as the deconvolved fMRI time series. Temporal clustering of the innovation signals leads to iCAPs. In this work, we investigate the reproducible iCAPs in individuals with relapsing-remitting multiple sclerosis and healthy volunteers.},
  keywords = {biomedical MRI;brain;deconvolution;diseases;haemodynamics;linear differential equations;medical signal processing;neurophysiology;pattern clustering;spatiotemporal phenomena;time series;total-activation regularized deconvolution;resting-state fMRI;reproducible networks;spontaneous activations;recurrent intrinsic functional networks;brain function integration;spatially overlapping networks;temporally overlapping networks;innovation-driven coactivation patterns;sparse innovation signals;total activation;spatiotemporal regularization framework;fMRI deconvolution;fMRI data processing;hemodynamic response function;linear differential operator;ℓ1-norm regularization;deconvolved fMRI time series;temporal clustering;innovation signals;relapsing-remitting multiple sclerosis;Transient analysis;Cost function;Clustering algorithms;Technological innovation;Europe;Signal processing;Imaging},
  doi = {10.1109/EUSIPCO.2016.7760250},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256339.pdf},
}
\n
\n\n\n
\n Spontaneous activations in resting-state fMRI have been shown to corroborate recurrent intrinsic functional networks. Recent studies have explored integration of brain function in terms of spatially overlapping networks. We have proposed a method to recover not only spatially but also temporally overlapping networks, which we named innovation-driven co-activation patterns (iCAPs). These networks are driven by the sparse innovation signals recovered from Total Activation (TA), a spatiotemporal regularization framework for fMRI deconvolution. The fMRI data is processed with TA, which uses the inverse of the hemodynamic response function - as a linear differential operator - combined with the derivative in the ℓ1-norm regularization. As a result, sparse innovation signals are reconstructed as the deconvolved fMRI time series. Temporal clustering of the innovation signals leads to iCAPs. In this work, we investigate the reproducible iCAPs in individuals with relapsing-remitting multiple sclerosis and in healthy volunteers.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fine tuning consensus optimization for distributed radio interferometric calibration.\n \n \n \n \n\n\n \n Yatawatta, S.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 265-269, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"FinePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760251,\n  author = {S. Yatawatta},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Fine tuning consensus optimization for distributed radio interferometric calibration},\n  year = {2016},\n  pages = {265-269},\n  abstract = {We recently proposed the use of consensus optimization as a viable and effective way to improve the quality of calibration of radio interferometric data. We showed that it is possible to obtain far more accurate calibration solutions and also to distribute the compute load across a network of computers by using this technique. A crucial aspect in any consensus optimization problem is the selection of the penalty parameter used in the alternating direction method of multipliers (ADMM) iterations. This affects the convergence speed as well as the accuracy. In this paper, we use the Hessian of the cost function used in calibration to appropriately select this penalty. We extend our results to a multi-directional calibration setting, where we propose to use a penalty scaled by the squared intensity of each direction.},\n  keywords = {calibration;Hessian matrices;iterative methods;radiowave interferometry;alternating direction method of multipliers iterations;ADMM iterations;Hessian matrices;radio interferometric data;calibration quality;distributed radio interferometric calibration;fine tuning consensus optimization;Calibration;Radio interferometry;Cost function;Arrays;Receivers;Time-frequency analysis;Calibration;Interferometry: Radio interferometry},\n  doi = {10.1109/EUSIPCO.2016.7760251},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250609.pdf},\n}\n\n
\n
\n\n\n
\n We recently proposed the use of consensus optimization as a viable and effective way to improve the quality of calibration of radio interferometric data. We showed that it is possible to obtain far more accurate calibration solutions and also to distribute the compute load across a network of computers by using this technique. A crucial aspect in any consensus optimization problem is the selection of the penalty parameter used in the alternating direction method of multipliers (ADMM) iterations. This affects the convergence speed as well as the accuracy. In this paper, we use the Hessian of the cost function used in calibration to appropriately select this penalty. We extend our results to a multi-directional calibration setting, where we propose to use a penalty scaled by the squared intensity of each direction.\n
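The penalty-selection idea above lends itself to a compact sketch. The snippet below is a hedged NumPy illustration assuming per-direction Hessians of the calibration cost and apparent intensities are available; the scale factor `alpha` and the use of the largest Hessian eigenvalue as the curvature proxy are illustrative choices, not the paper's exact rule.

```python
# Hedged sketch: choosing one ADMM penalty per calibration direction from
# the local curvature of the cost, scaled by the squared source intensity.
# `alpha` and the max-eigenvalue curvature proxy are illustrative.
import numpy as np

def direction_penalties(hessians, intensities, alpha=1.0):
    rho = np.empty(len(hessians))
    for k, (H, I) in enumerate(zip(hessians, intensities)):
        curvature = np.linalg.eigvalsh(H).max()  # stiffness of the local cost
        rho[k] = alpha * curvature * I ** 2      # brighter direction, larger penalty
    return rho

# Example: two directions, the second twice as intense and twice as stiff.
H = [np.eye(4), 2 * np.eye(4)]
print(direction_penalties(H, intensities=[1.0, 2.0]))   # -> [1. 8.]
```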
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Blind calibration of phased arrays using sparsity constraints on the signal model.\n \n \n \n \n\n\n \n Wijnholds, S. J.; and Chiarucci, S.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 270-274, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"BlindPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760252,\n  author = {S. J. Wijnholds and S. Chiarucci},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Blind calibration of phased arrays using sparsity constraints on the signal model},\n  year = {2016},\n  pages = {270-274},\n  abstract = {Several blind calibration methods have been proposed in a compressive sensing framework to mitigate the detrimental effects of uncertainties in the measurement matrix due to sensor gain and phase errors. Most of these methods operate on the signal domain samples of the receiving elements. This becomes computationally intractable if a large number of time samples is required, for example in low-SNR applications. In this paper, we propose an iterative blind calibration method to estimate the receiver path gains and phases as well as the observed scene from the measured array covariance matrix under the assumption that the observed scene is sparse. We successfully demonstrate the effectiveness of our method using simulated data for a 20-element uniform linear array as well as actual data from a 48-element station (subarray) of the Low Frequency Array (LOFAR) radio astronomical phased array.},\n  keywords = {antenna phased arrays;calibration;compressed sensing;covariance matrices;iterative methods;linear antenna arrays;radioastronomy;sparse matrices;LOFAR radio astronomical phased array;low frequency array radio astronomical phased array;20-element uniform linear array;array covariance matrix;receiver path gain estimation;iterative blind calibration method;low-SNR application;phase errors;sensor gain;measurement matrix;uncertainty detrimental effect mitigation;compressive sensing framework;signal model;sparsity constraint;phased array blind calibration method;Arrays;Covariance matrices;Calibration;Extraterrestrial measurements;Phased arrays;Data models;Array signal processing},\n  doi = {10.1109/EUSIPCO.2016.7760252},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255328.pdf},\n}\n\n
\n
\n\n\n
\n Several blind calibration methods have been proposed in a compressive sensing framework to mitigate the detrimental effects of uncertainties in the measurement matrix due to sensor gain and phase errors. Most of these methods operate on the signal domain samples of the receiving elements. This becomes computationally intractable if a large number of time samples is required, for example in low-SNR applications. In this paper, we propose an iterative blind calibration method to estimate the receiver path gains and phases as well as the observed scene from the measured array covariance matrix under the assumption that the observed scene is sparse. We successfully demonstrate the effectiveness of our method using simulated data for a 20-element uniform linear array as well as actual data from a 48-element station (subarray) of the Low Frequency Array (LOFAR) radio astronomical phased array.\n
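The alternating structure described here (sparse scene estimate from the covariance, then a per-element gain estimate) can be sketched generically. This is not the authors' algorithm; it is a minimal illustration assuming a direction grid A, hard thresholding as the sparsity device, and a StefCal-style gain update.

```python
# Hedged sketch of covariance-domain blind calibration: alternate between a
# hard-thresholded scene estimate on a direction grid A (n antennas x m
# directions) and a StefCal-style per-element gain update, for the model
# R ~ diag(g) (A diag(s) A^H) diag(g)^H. Illustrative, not the paper's scheme.
import numpy as np

def blind_calibrate(R, A, k_sparse=5, n_iter=50):
    n, m = A.shape
    g = np.ones(n, dtype=complex)
    s = np.zeros(m)
    for _ in range(n_iter):
        Rg = R / np.outer(g, g.conj())                    # de-apply current gains
        p = np.real(np.einsum('ij,ik,kj->j', A.conj(), Rg, A))
        p /= np.sum(np.abs(A) ** 2, axis=0) ** 2          # matched-filter powers
        s = np.where(p >= np.sort(p)[-k_sparse], np.maximum(p, 0.0), 0.0)
        M = (A * s) @ A.conj().T                          # model sky covariance
        Z = M * g[:, None]                                # Z[:, i] = g * M[:, i]
        g_new = ((Z.conj() * R).sum(0) / (np.abs(Z) ** 2).sum(0)).conj()
        g = 0.5 * (g + g_new)                             # damped update
    return g, s
```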
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Calibration of radio interferometers using a sparse DoA estimation framework.\n \n \n \n \n\n\n \n Brossard, M.; El Korso, M. N.; Pesavento, M.; Boyer, R.; and Larzabal, P.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 275-279, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"CalibrationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760253,\n  author = {M. Brossard and M. N. {El Korso} and M. Pesavento and R. Boyer and P. Larzabal},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Calibration of radio interferometers using a sparse DoA estimation framework},\n  year = {2016},\n  pages = {275-279},\n  abstract = {The calibration of modern radio interferometers is a significant challenge, specifically at low frequencies. In this perspective, we propose a novel iterative calibration algorithm, which employs the popular sparse representation framework, in the regime where the propagation conditions shift dissimilarly the directions of the sources. More precisely, our algorithm is designed to estimate the apparent directions of the calibration sources, their powers, the directional and undirectional complex gains of the array elements and their noise powers, with a reasonable computational complexity. Numerical simulations reveal that the proposed scheme is statistically efficient at low SNR and even with additional non-calibration sources at unknown directions.},\n  keywords = {antenna arrays;computational complexity;direction-of-arrival estimation;iterative methods;radiotelescopes;radio interferometers;sparse DoA estimation framework;iterative calibration algorithm;sparse representation framework;propagation condition;source direction;undirectional complex gain;directional complex gain;array elements;noise powers;computational complexity;noncalibration source;Calibration;Covariance matrices;Estimation;Signal processing algorithms;Cost function;Sensor arrays;Interferometers;Calibration;radio astronomy;radio interferometer;sensor array;Direction-of-Arrival estimation},\n  doi = {10.1109/EUSIPCO.2016.7760253},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255817.pdf},\n}\n\n
\n
\n\n\n
\n The calibration of modern radio interferometers is a significant challenge, particularly at low frequencies. To this end, we propose a novel iterative calibration algorithm, which employs the popular sparse representation framework, in the regime where the propagation conditions shift the directions of the sources dissimilarly. More precisely, our algorithm is designed to estimate the apparent directions of the calibration sources, their powers, the directional and undirectional complex gains of the array elements and their noise powers, with a reasonable computational complexity. Numerical simulations reveal that the proposed scheme is statistically efficient at low SNR, even with additional non-calibration sources at unknown directions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Relaxed concentrated MLE for robust calibration of radio interferometers.\n \n \n \n \n\n\n \n Ollier, V.; El Korso, M. N.; Boyer, R.; Larzabal, P.; and Pesavento, M.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 280-284, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"RelaxedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760254,\n  author = {V. Ollier and M. N. {El Korso} and R. Boyer and P. Larzabal and M. Pesavento},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Relaxed concentrated MLE for robust calibration of radio interferometers},\n  year = {2016},\n  pages = {280-284},\n  abstract = {In this paper, we investigate the calibration of radio interferometers in which Jones matrices are considered to model the interaction between the incident electromagnetic field and the antennas of each station. Specifically, perturbation effects are introduced along the signal path, leading to the conversion of the plane wave into an electric voltage by the receptor. In order to design a robust estimator, the noise is assumed to follow a spherically invariant random process (SIRP). The derived algorithm is based on an iterative relaxed concentrated maximum likelihood estimator (MLE), for which closed-form expressions are obtained for most of the unknown parameters.},\n  keywords = {iterative methods;matrix algebra;maximum likelihood estimation;radiowave interferometers;random processes;relaxation theory;radio interferometer robust calibration;iterative relaxed concentrated MLE;Jones matrices;incident electromagnetic field;station antenna;perturbation effects;spherically invariant random process;SIRP;iterative relaxed concentrated maximum likelihood estimator;Antennas;Maximum likelihood estimation;Calibration;Robustness;Covariance matrices;Signal processing algorithms;Antenna measurements;Calibration;Jones matrices;robustness;SIRP;relaxed concentrated maximum likelihood},\n  doi = {10.1109/EUSIPCO.2016.7760254},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255854.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we investigate the calibration of radio interferometers in which Jones matrices are considered to model the interaction between the incident electromagnetic field and the antennas of each station. Specifically, perturbation effects are introduced along the signal path, leading to the conversion of the plane wave into an electric voltage by the receiver. In order to design a robust estimator, the noise is assumed to follow a spherically invariant random process (SIRP). The derived algorithm is based on an iterative relaxed concentrated maximum likelihood estimator (MLE), for which closed-form expressions are obtained for most of the unknown parameters.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Impact of array calibration on RFI mitigation.\n \n \n \n \n\n\n \n Hellbourg, G.; Abed-Meraim, K.; and Weber, R.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 285-289, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"ImpactPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760255,\n  author = {G. Hellbourg and K. Abed-Meraim and R. Weber},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Impact of array calibration on RFI mitigation},\n  year = {2016},\n  pages = {285-289},\n  abstract = {Phased array radio telescopes allow for the filtering of Radio Frequency Interference (RFI) in the spatial domain. Spatial filters are advantageous in radio astronomy when the separation between RFI and astronomical sources cannot be made in the time or frequency domains. Consequently, the mitigation of the RFI relies on the quality of its spatial signature vector (SSV) estimation. The latter depends on the array calibration information which is investigated in this work. More precisely, by using the Cramér-Rao Bound (CRB) tool, we evaluate the astronomical source power estimation error variance in presence of RFI for different array calibration scenarios corresponding to perfectly calibrated, direction-independent uncalibrated and direction-dependent uncalibrated array cases. In addition, we consider in this study the case where only the data covariance information is available and investigate the loss of performance due to the missing data with respect to different system parameters.},\n  keywords = {calibration;interference suppression;radioastronomy;radiofrequency interference;array calibration impact;RFI mitigation;phased array radio telescope;radio frequency interference filtering;spatial domain;spatial filter;radio astronomy;astronomical source;spatial signature vector estimation;SSV estimation;Cramer-Rao bound tool;CRB tool;astronomical source power estimation error variance;Arrays;Phased arrays;Calibration;Radio astronomy;Data models;Estimation;Array signal processing;Cramér-Rao bound;RFI mitigation;Array processing;Radio astronomy},\n  doi = {10.1109/EUSIPCO.2016.7760255},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256080.pdf},\n}\n\n
\n
\n\n\n
\n Phased array radio telescopes allow for the filtering of Radio Frequency Interference (RFI) in the spatial domain. Spatial filters are advantageous in radio astronomy when the separation between RFI and astronomical sources cannot be made in the time or frequency domains. Consequently, the mitigation of the RFI relies on the quality of its spatial signature vector (SSV) estimation. The latter depends on the array calibration information, which is investigated in this work. More precisely, by using the Cramér-Rao Bound (CRB) tool, we evaluate the astronomical source power estimation error variance in the presence of RFI for different array calibration scenarios, corresponding to the perfectly calibrated, direction-independent uncalibrated and direction-dependent uncalibrated array cases. In addition, we consider the case where only the data covariance information is available and investigate the loss of performance due to the missing data with respect to different system parameters.\n
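The spatial filtering step this analysis builds on is an orthogonal projection away from the estimated SSV. A minimal sketch, assuming a perfectly known signature c and steering vector a; the paper quantifies, via the CRB, what happens when calibration errors corrupt exactly these quantities.

```python
# Hedged sketch: orthogonal-projection RFI filter and bias-corrected source
# power estimate, assuming the RFI signature c and the source steering
# vector a are known (the calibrated case the CRB analysis perturbs).
import numpy as np

def rfi_project(R, c):
    P = np.eye(len(c)) - np.outer(c, c.conj()) / np.real(np.vdot(c, c))
    return P @ R @ P.conj().T                    # covariance with RFI nulled

def source_power(R_clean, a, c):
    P = np.eye(len(c)) - np.outer(c, c.conj()) / np.real(np.vdot(c, c))
    pa = P @ a                                   # the projection also clips the source,
    num = np.real(np.vdot(pa, R_clean @ pa))
    return num / np.real(np.vdot(pa, pa)) ** 2   # hence the bias correction
```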
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sensitivity of nonlinear precoding to imperfect channel state information in G.fast.\n \n \n \n \n\n\n \n Maes, J.; Nuzman, C.; and Tsiaflakis, P.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 290-294, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"SensitivityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760256,\n  author = {J. Maes and C. Nuzman and P. Tsiaflakis},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Sensitivity of nonlinear precoding to imperfect channel state information in G.fast},\n  year = {2016},\n  pages = {290-294},\n  abstract = {Nonlinear Tomlinson-Harashima precoding has been proposed as a near-optimal interference mitigation technique for downstream transmission in G.fast systems, particularly in the transmit spectrum above 106 MHz. We propose an alternative implementation of Tomlinson-Harashima precoding and examine performance of the common and alternative implementations with respect to quantization errors and imperfect channel state information. We show that Tomlinson-Harashima precoding is more sensitive than optimized linear precoding to varying channel state information due to fluctuations in ambient conditions and sudden changes in termination impedance. We also show that Tomlinson-Harashima precoding only outperforms optimized linear precoding when the channel state information is almost perfectly known.},\n  keywords = {channel capacity;channel coding;digital subscriber lines;nonlinear codes;precoding;quantisation (signal);quantization errors;G.fast systems;downstream transmission;near-optimal interference mitigation technique;nonlinear Tomlinson-Harashima precoding;imperfect channel state information;Precoding;Quantization (signal);Channel state information;Crosstalk;Matrix decomposition;G.fast;nonlinear precoding;Tomlinson-Harashima Precoding;vectoring;sudden termination change},\n  doi = {10.1109/EUSIPCO.2016.7760256},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256207.pdf},\n}\n\n
\n
\n\n\n
\n Nonlinear Tomlinson-Harashima precoding has been proposed as a near-optimal interference mitigation technique for downstream transmission in G.fast systems, particularly in the transmit spectrum above 106 MHz. We propose an alternative implementation of Tomlinson-Harashima precoding and examine the performance of the common and alternative implementations with respect to quantization errors and imperfect channel state information. We show that Tomlinson-Harashima precoding is more sensitive than optimized linear precoding to varying channel state information caused by fluctuations in ambient conditions and sudden changes in termination impedance. We also show that Tomlinson-Harashima precoding only outperforms optimized linear precoding when the channel state information is almost perfectly known.\n
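For reference, the common (successive) form of Tomlinson-Harashima precoding is easy to state. The sketch below assumes a strictly lower-triangular feedback matrix B (e.g., from a QR decomposition of the channel) and a unit-spaced square M-QAM constellation; the paper's alternative implementation and its quantization analysis are not reproduced here.

```python
# Hedged sketch: textbook successive THP. B is strictly lower triangular;
# each symbol has already-precoded interference subtracted, then is wrapped
# back into the fundamental modulo region of a square M-QAM constellation.
import numpy as np

def thp_encode(x, B, M=16):
    tau = 2.0 * np.sqrt(M)                                # modulo base
    wrap = lambda v: (v - tau * np.round(v.real / tau)
                        - 1j * tau * np.round(v.imag / tau))
    u = np.zeros(len(x), dtype=complex)
    for k in range(len(x)):
        u[k] = wrap(x[k] - B[k, :k] @ u[:k])              # feedback subtraction
    return u
```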
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Bi-directional beamforming bit error ratio analysis for wireline backhaul networks.\n \n \n \n\n\n \n Ahmed, M. A.; Healy, C. T.; Al Rawi, A. F.; and Tsimenidis, C. C.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 295-299, Aug 2016. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760257,\n  author = {M. A. Ahmed and C. T. Healy and A. F. {Al Rawi} and C. C. Tsimenidis},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Bi-directional beamforming bit error ratio analysis for wireline backhaul networks},\n  year = {2016},\n  pages = {295-299},\n  abstract = {The next generation of digital subscriber line (DSL) standard will require the development of enabling technologies to exploit currently unused higher frequencies in the very and ultra high frequency bands over a shorter copper drop. At these higher frequencies, the indirect channels produced by the electromagnetic coupling (EMC) between pairs in a binder cable may be as strong as, or stronger than, the direct channels. In this work, we exploit the isomorphism between this wireline environment and the well-studied multipath wireless models to propose a full duplex wired MIMO system for the legacy copper connection in a point-to-point backhaul network. The proposed system achieves self-interference suppression and exploitation of the diversity offered by the EMC channels through a joint interoperable precoding scheme consisting of null space projection (NSP) and maximum ratio combining (MRC). Channel measurements for a 10 pair binder cable are used to evaluate the performance of the proposed system.},\n  keywords = {array signal processing;digital subscriber lines;diversity reception;electromagnetic coupling;error statistics;interference suppression;MIMO communication;multipath channels;precoding;radiofrequency interference;wireless channels;bidirectional beamforming bit error ratio analysis;wireline backhaul network;digital subscriber line standard;DSL standard;very high frequency band;ultra high frequency band;electromagnetic coupling;multipath wireless model;full duplex wired MIMO system;point-to-point backhaul network;self-interference suppression;diversity exploitation;EMC channel;joint interoperable precoding scheme;null space projection;NSP;maximum ratio combining;MRC;binder cable;Diversity reception;Copper;Electromagnetic compatibility;MIMO;DSL;Wires;Interference},\n  doi = {10.1109/EUSIPCO.2016.7760257},\n  issn = {2076-1465},\n  month = {Aug},\n}\n\n
\n
\n\n\n
\n The next generation of the digital subscriber line (DSL) standard will require the development of enabling technologies to exploit currently unused higher frequencies in the very and ultra high frequency bands over a shorter copper drop. At these higher frequencies, the indirect channels produced by the electromagnetic coupling (EMC) between pairs in a binder cable may be as strong as, or stronger than, the direct channels. In this work, we exploit the isomorphism between this wireline environment and the well-studied multipath wireless models to propose a full duplex wired MIMO system for the legacy copper connection in a point-to-point backhaul network. The proposed system achieves self-interference suppression and exploits the diversity offered by the EMC channels through a joint interoperable precoding scheme consisting of null space projection (NSP) and maximum ratio combining (MRC). Channel measurements for a 10-pair binder cable are used to evaluate the performance of the proposed system.\n
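The two precoding ingredients named in the abstract, null space projection and maximum ratio combining, have standard linear-algebra forms. A hedged sketch follows (SVD-based null space, generic channel names; not the paper's full interoperable scheme).

```python
# Hedged sketch: the two building blocks named above. Transmitting on an
# orthonormal basis of the interference channel's null space (NSP) leaks no
# self-interference; MRC then coherently combines the effective channel.
import numpy as np

def nsp_basis(H_int, tol=1e-10):
    U, s, Vh = np.linalg.svd(H_int)
    rank = int(np.sum(s > tol * s[0]))
    return Vh.conj().T[:, rank:]          # right singular vectors spanning the null space

def mrc_weights(h_eff):
    return h_eff.conj() / np.linalg.norm(h_eff) ** 2
```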
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Gigabit DSL: A deep-LMS approach.\n \n \n \n \n\n\n \n Zanko, A.; Bergel, I.; and Leshem, A.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 300-304, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"GigabitPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760258,\n  author = {A. Zanko and I. Bergel and A. Leshem},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Gigabit DSL: A deep-LMS approach},\n  year = {2016},\n  pages = {300-304},\n  abstract = {In this paper we present the Deep-LMS, a novel algorithm for crosstalk cancellation in DSL. The Deep-LMS crosstalk canceler uses an adaptive non-diagonal preprocessing matrix prior to a conventional LMS crosstalk canceler. The role of the preprocessing matrix is to speed-up the convergence of the conventional LMS crosstalk canceler and hence speed-up the convergence of the overall system. The update of the preprocessing matrix is inspired by deep neural networks. However, since all the operations in the Deep-LMS algorithm are linear, we are capable of providing an exact convergence speed analysis. The Deep-LMS is important for crosstalk cancellation in the novel G.fast standard, where traditional LMS converges very slowly due to the large bandwidth. Simulation results support our analysis and show significant reduction in convergence time compared to existing LMS variants.},\n  keywords = {crosstalk;digital subscriber lines;least mean squares methods;stochastic processes;Gigabit DSL;deep-LMS approach;crosstalk cancellation;adaptive non-diagonal preprocessing matrix;Crosstalk;Convergence;Signal processing algorithms;Signal to noise ratio;DSL;Algorithm design and analysis;Crosstalk canceler;DSL;LMS;G.fast},\n  doi = {10.1109/EUSIPCO.2016.7760258},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256373.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we present Deep-LMS, a novel algorithm for crosstalk cancellation in DSL. The Deep-LMS crosstalk canceler applies an adaptive non-diagonal preprocessing matrix ahead of a conventional LMS crosstalk canceler. The role of the preprocessing matrix is to speed up the convergence of the conventional LMS crosstalk canceler and hence the convergence of the overall system. The update of the preprocessing matrix is inspired by deep neural networks. However, since all the operations in the Deep-LMS algorithm are linear, we can provide an exact convergence speed analysis. Deep-LMS is important for crosstalk cancellation in the novel G.fast standard, where traditional LMS converges very slowly due to the large bandwidth. Simulation results support our analysis and show a significant reduction in convergence time compared to existing LMS variants.\n
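The basic structure here is a multichannel LMS canceler run on preprocessed inputs. The sketch below shows only that structure; the Deep-LMS update of the preprocessing matrix P itself, which is the paper's contribution, is omitted (P is held fixed).

```python
# Hedged sketch: multichannel LMS crosstalk canceler operating on inputs
# passed through a non-diagonal preprocessing matrix P. The adaptive update
# of P (the Deep-LMS contribution) is omitted; P is fixed here.
import numpy as np

def lms_canceler(X, D, P, mu=0.01):
    """X: inputs (n_in x T), D: desired outputs (n_out x T)."""
    W = np.zeros((D.shape[0], P.shape[0]), dtype=complex)
    for t in range(X.shape[1]):
        z = P @ X[:, t]                     # preprocessed regressor
        e = D[:, t] - W @ z                 # residual (uncancelled) crosstalk
        W += mu * np.outer(e, z.conj())     # standard LMS correction
    return W
```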
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Derivative based real-time spectrum coordination for DSL.\n \n \n \n \n\n\n \n Verdyck, J.; Blondia, C.; and Moonen, M.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 305-309, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"DerivativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760259,\n  author = {J. Verdyck and C. Blondia and M. Moonen},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Derivative based real-time spectrum coordination for DSL},\n  year = {2016},\n  pages = {305-309},\n  abstract = {In digital subscriber line systems, spectrum coordination is a powerful technique to improve performance. A typical spectrum coordination algorithm employs an iterative procedure to solve the rate adaptive spectrum management problem. These iterative procedures deliver a feasible result only after convergence, and are therefore mostly unable to deal with real-time computational constraints. Recently, the new paradigm of so-called real-time dynamic spectrum management has been defined. This paper presents a simple and powerful framework for real-time spectrum coordination based on bicoordinate ascent methods. This framework is then used to define a novel derivative based real-time spectrum coordination algorithm with provable convergence properties, referred to as fast derivative based iterative power difference balancing (FDB-IPDB). Simulation results show a significant improvement in performance compared to the state of the art.},\n  keywords = {convergence of numerical methods;digital subscriber lines;iterative methods;radio spectrum management;DSL system;derivative-based real-time spectrum coordination;digital subscriber line system;spectrum coordination algorithm;iterative procedure;rate adaptive spectrum management problem;real-time computational constraint;real-time dynamic spectrum management paradigm;bicoordinate ascent method;convergence properties;fast derivative based iterative power difference balancing algorithm;F-DB-IPDB algorithm;Real-time systems;Radio spectrum management;DSL;Crosstalk;Signal to noise ratio;Heuristic algorithms},\n  doi = {10.1109/EUSIPCO.2016.7760259},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256496.pdf},\n}\n\n
\n
\n\n\n
\n In digital subscriber line systems, spectrum coordination is a powerful technique to improve performance. A typical spectrum coordination algorithm employs an iterative procedure to solve the rate adaptive spectrum management problem. These iterative procedures deliver a feasible result only after convergence, and are therefore mostly unable to deal with real-time computational constraints. Recently, the new paradigm of so-called real-time dynamic spectrum management has been defined. This paper presents a simple and powerful framework for real-time spectrum coordination based on bicoordinate ascent methods. This framework is then used to define a novel derivative based real-time spectrum coordination algorithm with provable convergence properties, referred to as fast derivative based iterative power difference balancing (FDB-IPDB). Simulation results show a significant improvement in performance compared to the state of the art.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A fast converging method for common mode sensor based impulse noise cancellation for downstream VDSL.\n \n \n \n \n\n\n \n Ahuja, R.; Gang, A.; Biyani, P.; and Prasad, S.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 310-315, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760260,\n  author = {R. Ahuja and A. Gang and P. Biyani and S. Prasad},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {A fast converging method for common mode sensor based impulse noise cancellation for downstream VDSL},\n  year = {2016},\n  pages = {310-315},\n  abstract = {Impulse noise cancellation using an additional common mode sensor at the customer premises equipment (CPE) receiver is akin to an interference cancellation problem in a SIMO receiver. However, the common mode (CM)-differential mode (DM) cross-correlation for impulse noise signal needs to be estimated during showtime in the presence of a much stronger DM useful data signal. Existing works on this topic rely on the repetitive nature of impulse noise and use a large number of DMT symbols for estimation of the canceler and are therefore not suitable for handling transient noise events. We propose an iterative decision-directed method based on alternating minimization which can provide partial cancellation of the impulse noise using a single DMT symbol (useful for transient noise) and much faster convergence using multiple DMT symbols as compared to existing methods (useful for repetitive impulse noise) and demonstrate its efficacy via simulation.},\n  keywords = {impulse noise;interference suppression;iterative methods;signal processing;fast converging method;common mode sensor based impulse noise cancellation;downstream VDSL;impulse noise signal;iterative decision-directed method;alternating minimization;partial cancellation;Time-domain analysis;Noise cancellation;Estimation;Signal processing algorithms;Couplings;Transient analysis;Frequency-domain analysis},\n  doi = {10.1109/EUSIPCO.2016.7760260},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256290.pdf},\n}\n\n
\n
\n\n\n
\n Impulse noise cancellation using an additional common mode sensor at the customer premises equipment (CPE) receiver is akin to an interference cancellation problem in a SIMO receiver. However, the common mode (CM)-differential mode (DM) cross-correlation of the impulse noise signal needs to be estimated during showtime in the presence of a much stronger DM useful data signal. Existing works on this topic rely on the repetitive nature of impulse noise and use a large number of DMT symbols to estimate the canceler, and are therefore not suitable for handling transient noise events. We propose an iterative decision-directed method based on alternating minimization that can provide partial cancellation of the impulse noise using a single DMT symbol (useful for transient noise) and, compared to existing methods, much faster convergence using multiple DMT symbols (useful for repetitive impulse noise). We demonstrate its efficacy via simulation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Kronecker covariance sketching for spatial-temporal data.\n \n \n \n \n\n\n \n Chi, Y.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 316-320, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"KroneckerPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760261,\n  author = {Y. Chi},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Kronecker covariance sketching for spatial-temporal data},\n  year = {2016},\n  pages = {316-320},\n  abstract = {Covariance sketching has been recently introduced as an effective strategy to reduce the data dimensionality without sacrificing the ability to reconstruct second-order statistics of the data. In this paper, we propose a novel covariance sketching scheme with reduced complexity for spatial-temporal data, whose covariance matrices satisfy the Kronecker product expansion model recently introduced by Tsiligkaridis and Hero. Our scheme is based on quadratic sampling that only requires magnitude measurements, hence is appealing for applications when phase information is difficult to obtain, such as wideband spectrum sensing and optical imaging. We propose to estimate the covariance matrix based on convex relaxation when the separation rank is small, and when the temporal covariance is additionally Toeplitz structured. Numerical examples are provided to demonstrate the effectiveness of the proposed scheme.},\n  keywords = {convex programming;covariance matrices;mathematics computing;sampling methods;statistics;Kronecker covariance sketching;spatial-temporal data modeling;data dimensionality reduction;second-order statistics;covariance matrices;kronecker product;quadratic sampling;convex relaxation;Covariance matrices;Data models;Correlation;Signal processing;Europe;Sensors;Complexity theory;spatial-temporal data modeling;covariance sketching;kronecker product;convex optimization},\n  doi = {10.1109/EUSIPCO.2016.7760261},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570246606.pdf},\n}\n\n
\n
\n\n\n
\n Covariance sketching has recently been introduced as an effective strategy to reduce the data dimensionality without sacrificing the ability to reconstruct second-order statistics of the data. In this paper, we propose a novel covariance sketching scheme with reduced complexity for spatial-temporal data, whose covariance matrices satisfy the Kronecker product expansion model recently introduced by Tsiligkaridis and Hero. Our scheme is based on quadratic sampling that only requires magnitude measurements, and hence is appealing for applications where phase information is difficult to obtain, such as wideband spectrum sensing and optical imaging. We propose to estimate the covariance matrix based on convex relaxation when the separation rank is small, and when the temporal covariance is additionally Toeplitz structured. Numerical examples are provided to demonstrate the effectiveness of the proposed scheme.\n
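The Kronecker expansion model makes the estimation step concrete: for separation rank one, the best A ⊗ B approximation of a covariance follows from the Van Loan-Pitsianis rearrangement. A sketch assuming the full covariance C is available (the paper instead works from magnitude-only quadratic sketches, which this omits):

```python
# Hedged sketch: nearest Kronecker product C ~ A (x) B via the Van
# Loan-Pitsianis rearrangement; the rearranged matrix is rank one exactly
# when C has separation rank one. Assumes C itself is available, whereas
# the paper estimates it from magnitude-only quadratic samples.
import numpy as np

def nearest_kron(C, p, q):
    # Rearrange the (p q) x (p q) matrix so C = A (x) B becomes rank one.
    R = C.reshape(p, q, p, q).transpose(0, 2, 1, 3).reshape(p * p, q * q)
    U, s, Vh = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(p, p)
    B = np.sqrt(s[0]) * Vh[0].reshape(q, q)
    return A, B      # higher separation ranks: keep further singular pairs
```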
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimal choice of Hankel-block-Hankel matrix shape in 2-D parameter estimation: The rank-one case.\n \n \n \n \n\n\n \n Sahnoun, S.; Usevich, K.; and Comon, P.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 321-325, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"OptimalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760262,\n  author = {S. Sahnoun and K. Usevich and P. Comon},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Optimal choice of Hankel-block-Hankel matrix shape in 2-D parameter estimation: The rank-one case},\n  year = {2016},\n  pages = {321-325},\n  abstract = {In this paper we analyse the performance of 2-D ESPRIT method for estimating parameters of 2-D superimposed damped exponentials. 2-D ESPRIT algorithm is based on low-rank decomposition of a Hankel-block-Hankel matrix that is formed by the 2-D data. Through a first-order perturbation analysis, we derive closed-form expressions for the variances of the complex modes, frequencies and damping factors estimates in the 2-D single-tone case. This analysis allows to define the optimal parameters used in the construction of the Hankel-block-Hankel matrix. A fast algorithm for calculating the SVD of Hankel-block-Hankel matrices is also used to enhance the computational complexity of the 2-D ESPRIT algorithm.},\n  keywords = {Hankel matrices;parameter estimation;singular value decomposition;Hankel-block-Hankel matrix shape;2D parameter estimation;rank-one case;2D ESPRIT method;2D superimposed damped exponentials;low-rank decomposition;first-order perturbation analysis;closed-form expressions;complex modes;damping factors;2D single-tone case;optimal parameters;singular value decomposition;SVD;computational complexity;Signal processing algorithms;Europe;Signal processing;Algorithm design and analysis;Closed-form solutions;Damping;Reactive power;Frequency estimation;Hankel-block-Hankel matrix;2-D ESPRIT;perturbation analysis},\n  doi = {10.1109/EUSIPCO.2016.7760262},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255656.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we analyse the performance of the 2-D ESPRIT method for estimating the parameters of 2-D superimposed damped exponentials. The 2-D ESPRIT algorithm is based on a low-rank decomposition of a Hankel-block-Hankel matrix formed from the 2-D data. Through a first-order perturbation analysis, we derive closed-form expressions for the variances of the complex mode, frequency and damping factor estimates in the 2-D single-tone case. This analysis makes it possible to define the optimal parameters used in the construction of the Hankel-block-Hankel matrix. A fast algorithm for calculating the SVD of Hankel-block-Hankel matrices is also used to reduce the computational complexity of the 2-D ESPRIT algorithm.\n
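The object under study, the Hankel-block-Hankel matrix, is simple to construct; the paper's question is how to choose the window sizes K1, K2 that shape it. A small sketch (pure NumPy, illustrative window sizes):

```python
# Hedged sketch: building the Hankel-block-Hankel matrix of a 2-D data
# array Y (N1 x N2). The outer structure is a K1 x (N1-K1+1) Hankel grid of
# blocks; each block is the K2 x (N2-K2+1) Hankel matrix of one row of Y.
# The paper derives the optimal choice of the window sizes K1, K2.
import numpy as np

def hbh_matrix(Y, K1, K2):
    N1, N2 = Y.shape
    L1, L2 = N1 - K1 + 1, N2 - K2 + 1
    block = lambda r: np.array([[Y[r, i + j] for j in range(L2)]
                                for i in range(K2)])
    return np.block([[block(m + n) for n in range(L1)] for m in range(K1)])
```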
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On a fixed-point algorithm for structured low-rank approximation and estimation of half-life parameters.\n \n \n \n \n\n\n \n Andersson, F.; Carlsson, M.; and Wendt, H.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 326-330, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760263,\n  author = {F. Andersson and M. Carlsson and H. Wendt},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {On a fixed-point algorithm for structured low-rank approximation and estimation of half-life parameters},\n  year = {2016},\n  pages = {326-330},\n  abstract = {We study the problem of decomposing a measured signal as a sum of decaying exponentials. There is a direct connection to sums of these types and positive semi-definite (PSD) Hankel matrices, where the rank of these matrices equals the number of exponentials. We propose to solve the identification problem by forming an optimization problem with a misfit function combined with a rank penalty function that also ensures the PSD-constraint. This problem is non-convex, but we show that it is possible to compute the minimum of an explicit closely related convexified problem. Moreover, this minimum can be shown to often coincide with the minimum of the original non-convex problem, and we provide a simple criterion that enables to verify if this is the case.},\n  keywords = {optimisation;signal processing;structured low-rank approximation;half-life parameter estimation;positive semidefinite Hankel matrices;optimization problem;rank penalty function;PSD-constraint;Signal processing algorithms;Eigenvalues and eigenfunctions;Hafnium;Estimation;Europe;Signal processing;Matrix decomposition;Low rank approximation;structured matrices;fixed-point algorithms},\n  doi = {10.1109/EUSIPCO.2016.7760263},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255757.pdf},\n}\n\n
\n
\n\n\n
\n We study the problem of decomposing a measured signal as a sum of decaying exponentials. There is a direct connection between sums of this type and positive semi-definite (PSD) Hankel matrices, where the rank of these matrices equals the number of exponentials. We propose to solve the identification problem by forming an optimization problem with a misfit function combined with a rank penalty function that also ensures the PSD constraint. This problem is non-convex, but we show that it is possible to compute the minimum of an explicit, closely related convexified problem. Moreover, this minimum can be shown to often coincide with the minimum of the original non-convex problem, and we provide a simple criterion that makes it possible to verify whether this is the case.\n
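A classical point of comparison for this problem is Cadzow's alternating projections between the set of low-rank matrices and the set of Hankel matrices. The sketch below implements that baseline only; the paper's contribution is a fixed-point algorithm for a convexified version of the same structured low-rank problem, with a PSD constraint this sketch omits.

```python
# Hedged sketch: Cadzow-style alternating projections for structured
# low-rank Hankel approximation. Classical baseline only; it omits the PSD
# constraint and the convexification studied in the paper.
import numpy as np
from scipy.linalg import hankel

def cadzow(y, rank, n_iter=100):
    n = len(y)
    rows = n // 2 + 1
    z = np.asarray(y, dtype=complex)
    for _ in range(n_iter):
        H = hankel(z[:rows], z[rows - 1:])
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vh[:rank]       # project onto rank-r set
        acc = np.zeros(n, dtype=complex)
        cnt = np.zeros(n)
        for i in range(H.shape[0]):                    # project back onto the
            for j in range(H.shape[1]):                # Hankel set: average
                acc[i + j] += H[i, j]                  # each anti-diagonal
                cnt[i + j] += 1
        z = acc / cnt
    return z
```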
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Solving physics-driven inverse problems via structured least squares.\n \n \n \n \n\n\n \n Murray-Bruce, J.; and Dragotti, P. L.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 331-335, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"SolvingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760264,\n  author = {J. Murray-Bruce and P. L. Dragotti},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Solving physics-driven inverse problems via structured least squares},\n  year = {2016},\n  pages = {331-335},\n  abstract = {Numerous physical phenomena are well modeled by partial differential equations (PDEs); they describe a wide range of phenomena across many application domains, from modeling EEG signals in electroencephalography to, modeling the release and propagation of toxic substances in environmental monitoring. In these applications it is often of interest to find the sources of the resulting phenomena, given some sparse sensor measurements of it. This will be the main task of this work. Specifically, we will show that finding the sources of such PDE-driven fields can be turned into solving a class of well-known multi-dimensional structured least squares problems. This link is achieved by leveraging from recent results in modern sampling theory - in particular, the approximate Strang-Fix theory. Subsequently, numerical simulation results are provided in order to demonstrate the validity and robustness of the proposed framework.},\n  keywords = {electroencephalography;environmental monitoring (geophysics);inverse problems;least squares approximations;partial differential equations;signal sampling;toxicology;physics-driven inverse problem;partial differential equation;PDE;EEG signal modeling;electroencephalography;toxic substance propagation;environmental monitoring;sparse sensor measurement;multidimensional structured least square problem;sampling theory;approximate Strang-Fix theory;numerical simulation;Green's function methods;Sensors;Mathematical model;Inverse problems;Europe;Signal processing;Brain modeling;Spatiotemporal sampling;sensor networks;inverse source problems;structured least squares;Prony's method;finite rate of innovation (FRI)},\n  doi = {10.1109/EUSIPCO.2016.7760264},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255808.pdf},\n}\n\n
\n
\n\n\n
\n Numerous physical phenomena are well modeled by partial differential equations (PDEs); they describe a wide range of phenomena across many application domains, from modeling EEG signals in electroencephalography to modeling the release and propagation of toxic substances in environmental monitoring. In these applications it is often of interest to find the sources of the resulting phenomena, given some sparse sensor measurements of the field. This is the main task of this work. Specifically, we show that finding the sources of such PDE-driven fields can be turned into solving a class of well-known multi-dimensional structured least squares problems. This link is achieved by leveraging recent results in modern sampling theory - in particular, the approximate Strang-Fix theory. Subsequently, numerical simulation results are provided in order to demonstrate the validity and robustness of the proposed framework.\n
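Prony's method, cited in the keywords, is the workhorse that turns such structured least squares solutions into source parameters. A minimal sketch of the annihilating-filter step, assuming uniform samples of a sum of K exponentials:

```python
# Hedged sketch: Prony / annihilating-filter estimation of K exponential
# modes u_k from uniform samples y[n] = sum_k c_k u_k**n (noise-free form;
# least squares gives some robustness when noise is present).
import numpy as np

def prony_modes(y, K):
    N = len(y)
    # Linear prediction: y[n] = -(h1*y[n-1] + ... + hK*y[n-K]) for n >= K.
    T = np.array([[y[K + m - k - 1] for k in range(K)] for m in range(N - K)])
    b = np.array([y[K + m] for m in range(N - K)])
    h = np.linalg.lstsq(T, -b, rcond=None)[0]
    return np.roots(np.concatenate(([1.0], h)))        # mode locations u_k

# Example: two damped complex exponentials are recovered exactly.
u = np.array([0.9 * np.exp(1j * 0.5), 0.7 * np.exp(-1j * 1.1)])
y = np.vander(u, 12, increasing=True).T @ np.array([1.0, 2.0])
print(np.sort_complex(prony_modes(y, 2)))
```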
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Convex super-resolution detection of lines in images.\n \n \n \n \n\n\n \n Polisano, K.; Condat, L.; Clausel, M.; and Perrier, V.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 336-340, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"ConvexPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760265,\n  author = {K. Polisano and L. Condat and M. Clausel and V. Perrier},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Convex super-resolution detection of lines in images},\n  year = {2016},\n  pages = {336-340},\n  abstract = {In this paper, we present a new convex formulation for the problem of recovering lines in degraded images. Following the recent paradigm of super-resolution, we formulate a dedicated atomic norm penalty and we solve this optimization problem by means of a primal-dual algorithm. This parsimonious model enables the reconstruction of lines from lowpass measurements, even in presence of a large amount of noise or blur. Furthermore, a Prony method performed on rows and columns of the restored image, provides a spectral estimation of the line parameters, with subpixel accuracy.},\n  keywords = {image resolution;image restoration;optimisation;spectral estimation;restored image;lowpass measurements;line reconstruction;parsimonious model;primal-dual algorithm;optimization problem;dedicated atomic norm penalty;degraded images;recovering line problem;convex formulation;convex super-resolution detection;Image resolution;Convolution;Atomic measurements;Optimization;Signal resolution;Image reconstruction;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760265},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256239.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present a new convex formulation for the problem of recovering lines in degraded images. Following the recent paradigm of super-resolution, we formulate a dedicated atomic norm penalty and solve this optimization problem by means of a primal-dual algorithm. This parsimonious model enables the reconstruction of lines from lowpass measurements, even in the presence of a large amount of noise or blur. Furthermore, a Prony method performed on the rows and columns of the restored image provides a spectral estimation of the line parameters with subpixel accuracy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Iterative least squares algorithm for inverse problem in microwave medical imaging.\n \n \n \n \n\n\n \n Azghani, M.; and Marvasti, F.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 341-344, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"IterativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760266,\n  author = {M. Azghani and F. Marvasti},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Iterative least squares algorithm for inverse problem in microwave medical imaging},\n  year = {2016},\n  pages = {341-344},\n  abstract = {The inverse problem in MicroWave Imaging (MWI) is an ill-posed one which can be solved with the aid of the sparsity prior of the solution. In this paper, an Iterative Least Squares Algorithm (ILSA) has been proposed as an inverse solver in MWI which seeks for the sparse vector satisfying the problem constraints. Minimizing a least squares cost function, we derive a relatively simple iterative algorithm which enforces the sparsity gradually with the aid of a reweighting operator. The simulation results confirm the superiority of the suggested method compared to the state-of-the-art schemes in the quality of the recovered breast tumors in the microwave images.},\n  keywords = {biomedical imaging;inverse problems;iterative methods;least squares approximations;microwave imaging;tumours;breast tumor;least squares cost function;sparse vector;inverse solver;ILSA;MWI;microwave medical imaging;inverse problem;iterative least squares algorithm;Microwave imaging;Inverse problems;Breast;Microwave communication;Signal processing algorithms;Microwave tomography;sparse signal processing;sparsity;inverse scattering;Microwave imaging technique},\n  doi = {10.1109/EUSIPCO.2016.7760266},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256088.pdf},\n}\n\n
\n
\n\n\n
\n The inverse problem in MicroWave Imaging (MWI) is ill-posed and can be solved with the aid of a sparsity prior on the solution. In this paper, an Iterative Least Squares Algorithm (ILSA) is proposed as an inverse solver for MWI, which seeks the sparse vector satisfying the problem constraints. Minimizing a least squares cost function, we derive a relatively simple iterative algorithm that enforces the sparsity gradually with the aid of a reweighting operator. The simulation results confirm the superiority of the suggested method over state-of-the-art schemes in the quality of the recovered breast tumors in the microwave images.\n
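The "reweighting operator" idea can be illustrated with the generic iteratively reweighted least squares recipe: each pass solves a weighted ℓ2 problem whose weights penalize small coefficients, so sparsity is enforced gradually. A hedged stand-in, not the paper's ILSA:

```python
# Hedged sketch: generic iteratively reweighted least squares. The weights
# 1/(|x_i| + eps) shrink small coefficients a little more each pass, which
# gradually enforces sparsity. A stand-in for the reweighting idea, not ILSA.
import numpy as np

def irls(A, y, n_iter=30, eps=1e-3):
    x = np.linalg.lstsq(A, y, rcond=None)[0]           # unregularized start
    for _ in range(n_iter):
        W = np.diag(1.0 / (np.abs(x) + eps))           # penalize small entries
        x = np.linalg.solve(A.conj().T @ A + W, A.conj().T @ y)
    return x
```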
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Electromagnetic retrieval of missing fibers in periodic fibered laminates via sparsity concepts.\n \n \n \n \n\n\n \n Liu, Z.; Li, C.; Lesselier, D.; and Zhong, Y.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 345-349, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"ElectromagneticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760267,\n  author = {Z. Liu and C. Li and D. Lesselier and Y. Zhong},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Electromagnetic retrieval of missing fibers in periodic fibered laminates via sparsity concepts},\n  year = {2016},\n  pages = {345-349},\n  abstract = {Electromagnetic modeling and imaging of fibered laminates with some fibers missing is investigated, this extending to similarly organized photonic crystals. Parallel circular cylinders are periodically set in a homogeneous layer (matrix) sandwiched between two homogeneous half-spaces. Absent fibers destroy the periodicity. An auxiliary periodic structure (supercell) provides a subsidiary model considered using method tailored to standard periodic structures involving the Floquet theorem to decompose the fields. Imaging approaches from the Lippman-Schwinger integral field formulation as one-shot MUltiple SIgnal Classification (MUSIC) with pointwise scatterers assumptions and an iterative, sparsity-constrained solution are developed. Numerical simulations illustrate the direct model and imaging.},\n  keywords = {integral equations;laminates;mechanical engineering computing;signal classification;electromagnetic retrieval;missing fibers;periodic fibered laminates;sparsity concept;electromagnetic modeling;photonic crystals;parallel circular cylinders;homogeneous layer;auxiliary periodic structure;Floquet theorem;Lippman-Schwinger integral field formulation;one-shot multiple signal classification;MUSIC;pointwise scatterers;iterative sparsity-constrained solution;numerical simulation;Imaging;Laminates;Periodic structures;Europe;Signal processing;Transmission line matrix methods;Multiple signal classification},\n  doi = {10.1109/EUSIPCO.2016.7760267},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256150.pdf},\n}\n\n
\n
\n\n\n
\n Electromagnetic modeling and imaging of fibered laminates with some fibers missing is investigated; the approach extends to similarly organized photonic crystals. Parallel circular cylinders are periodically set in a homogeneous layer (matrix) sandwiched between two homogeneous half-spaces. Absent fibers destroy the periodicity. An auxiliary periodic structure (supercell) provides a subsidiary model that can be treated with methods tailored to standard periodic structures, using the Floquet theorem to decompose the fields. Starting from the Lippmann-Schwinger integral field formulation, two imaging approaches are developed: a one-shot MUltiple SIgnal Classification (MUSIC) under pointwise-scatterer assumptions and an iterative, sparsity-constrained solution. Numerical simulations illustrate the direct model and the imaging.\n
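The one-shot MUSIC step has a standard computational core: eigendecompose the data covariance, keep the noise subspace, and scan test vectors over the image grid. A sketch, assuming the steering (background Green's function) vectors are precomputed:

```python
# Hedged sketch: MUSIC pseudospectrum. Test vectors (here, precomputed
# background Green's-function columns on an image grid) nearly orthogonal
# to the noise subspace of R produce sharp peaks at scatterer positions,
# i.e., at the missing fibers.
import numpy as np

def music_spectrum(R, steering, n_scatterers):
    w, V = np.linalg.eigh(R)                       # eigenvalues ascending
    En = V[:, :R.shape[0] - n_scatterers]          # noise subspace
    G = En @ En.conj().T
    return np.array([1.0 / np.real(np.vdot(a, G @ a)) for a in steering.T])
```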
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparsity-enforced microwave inverse scattering using soft shrinkage thresholding.\n \n \n \n \n\n\n \n Zaimaga, H.; and Lambert, M.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 350-354, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Sparsity-enforcedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760268,\n  author = {H. Zaimaga and M. Lambert},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Sparsity-enforced microwave inverse scattering using soft shrinkage thresholding},\n  year = {2016},\n  pages = {350-354},\n  abstract = {A sparse nonlinear inverse scattering problem arising in microwave imaging is analyzed and numerically solved for retrieving dielectric contrast of region of interest from measured fields. The proposed approach is motivated by a Tikhonov functional incorporating a sparsity promoting l1-penalty term. The proposed iterative algorithm of soft shrinkage type enforces the sparsity constraint at each nonlinear iteration and provides an effective reconstructions of unknown (complex) dielectric profiles. The scheme produces sharp and good reconstruction of dielectric profiles in sparse domains and keeps its convergence during the reconstruction. Numerical results present the effectiveness and accuracy of the proposed method.},\n  keywords = {dielectric properties;electromagnetic wave scattering;inverse problems;iterative methods;microwave imaging;sparsity-enforced microwave inverse scattering;soft shrinkage thresholding;sparse nonlinear inverse scattering problem;microwave imaging;dielectric contrast;Tikhonov functional;iterative algorithm;dielectric profiles;Inverse problems;Dielectrics;Permittivity;Convergence;Signal processing algorithms;Iterative methods;Image reconstruction},\n  doi = {10.1109/EUSIPCO.2016.7760268},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256250.pdf},\n}\n\n

A sparse nonlinear inverse scattering problem arising in microwave imaging is analyzed and numerically solved for retrieving the dielectric contrast of a region of interest from measured fields. The proposed approach is motivated by a Tikhonov functional incorporating a sparsity-promoting ℓ1-penalty term. The proposed iterative algorithm of soft-shrinkage type enforces the sparsity constraint at each nonlinear iteration and provides effective reconstructions of unknown (complex) dielectric profiles. The scheme produces sharp, good-quality reconstructions of dielectric profiles in sparse domains and remains convergent throughout the reconstruction. Numerical results demonstrate the effectiveness and accuracy of the proposed method.
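The soft-shrinkage iteration at the heart of such sparsity-enforced schemes is compact enough to state directly. Below is a minimal ISTA sketch for a linearized, real-valued model y = Ax + n; the paper's actual problem is nonlinear and complex-valued, and the matrix A, the penalty lam, and the toy data are placeholders.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-shrinkage operator, the proximal map of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5 * ||y - A x||^2 + lam * ||x||_1 by iterative soft shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[10, 50, 90]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # recovered support
```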
Sparse microwave breast imaging with differently polarized arrays. Stevanovic, M. N.; Dinkic, J.; Music, J.; and Nehorai, A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 355-358, Aug 2016.
@InProceedings{7760269,
  author = {M. N. Stevanovic and J. Dinkic and J. Music and A. Nehorai},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Sparse microwave breast imaging with differently polarized arrays},
  year = {2016},
  pages = {355-358},
  abstract = {We investigate the role of polarization in sparse differential microwave imaging for the breast-cancer localization. We consider two types of antenna arrays, placed around realistic inhomogeneous breast models. In the first case, the antennas are vertical with respect to the chest wall, whereas in the second case, the antennas are located in the horizontal planes, parallel to the chest wall. In the approximate linear model, we use numerically computed three-dimensional (3-D) Green's functions, assuming that the breast tissue parameters are known from the previous measurements. By introducing some deviation in the permittivity of the breast tissues, we compare the estimation accuracy yielded by different array configurations and assess the robustness of the sparse approach.},
  keywords = {antenna arrays;biological tissues;cancer;Green's function methods;medical image processing;microwave imaging;permittivity;sparse microwave breast imaging;polarized array;sparse differential microwave imaging;breast-cancer localization;antenna array;realistic inhomogeneous breast models;chest wall;horizontal plane;approximate linear model;three-dimensional Green's function;3D Green's function;breast tissue parameter;breast tissue permittivity;Breast;Antenna arrays;Microwave imaging;Permittivity;Lesions;Microwave theory and techniques;microwave imaging;sparse processing;breast cancer localization},
  doi = {10.1109/EUSIPCO.2016.7760269},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256333.pdf},
}

We investigate the role of polarization in sparse differential microwave imaging for breast-cancer localization. We consider two types of antenna arrays, placed around realistic inhomogeneous breast models. In the first case, the antennas are vertical with respect to the chest wall, whereas in the second case, the antennas lie in horizontal planes, parallel to the chest wall. In the approximate linear model, we use numerically computed three-dimensional (3-D) Green's functions, assuming that the breast tissue parameters are known from previous measurements. By introducing some deviation in the permittivity of the breast tissues, we compare the estimation accuracy yielded by different array configurations and assess the robustness of the sparse approach.
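The paper's experiment, the same sparse scene observed through two different array configurations (hence two different Green's-function matrices), can be mimicked with a toy Monte Carlo loop. Here two random complex matrices stand in for the two sensing operators and a basic Orthogonal Matching Pursuit recovers the sparse contrast; all sizes, the sparsity level, and the noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.conj().T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[idx] = coef
    return x

def trial(A, k=3, snr_db=20.0):
    """One random k-sparse scene, noisy measurement, OMP recovery error."""
    x = np.zeros(A.shape[1], dtype=complex)
    supp = rng.choice(A.shape[1], size=k, replace=False)
    x[supp] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    y = A @ x
    noise = rng.standard_normal(len(y)) + 1j * rng.standard_normal(len(y))
    y = y + noise * np.linalg.norm(y) / np.linalg.norm(noise) * 10 ** (-snr_db / 20)
    return np.linalg.norm(omp(A, y, k) - x) / np.linalg.norm(x)

for name in ("array A (vertical)", "array B (horizontal)"):   # stand-in sensing operators
    A = (rng.standard_normal((40, 120)) + 1j * rng.standard_normal((40, 120))) / np.sqrt(40)
    print(name, "median rel. error:", np.median([trial(A) for _ in range(200)]))
```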
Minimum measurement deterministic compressed sensing based on complex Reed Solomon decoding. Schnier, T.; Bockelmann, C.; and Dekorsy, A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 359-363, Aug 2016.
@InProceedings{7760270,
  author = {T. Schnier and C. Bockelmann and A. Dekorsy},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Minimum measurement deterministic compressed sensing based on complex reed solomon decoding},
  year = {2016},
  pages = {359-363},
  abstract = {Compressed Sensing (CS) is an emerging field in communications and mathematics that is used to measure few measurements of long sparse vectors with the ability of lossless reconstruction. In this paper we use results from channel coding to design a recovery algorithm for CS with a deterministic measurement matrix by exploiting error correction schemes. In particular, we show that a generalized Reed Solomon encoding-decoding structure can be used to measure sparsely representable vectors, that are sparse in some fitting basis, down to the theoretical minimum number of measurements with the ability of guaranteed lossless reconstruction, even in the low dimensional case.},
  keywords = {channel coding;compressed sensing;decoding;error correction codes;matrix algebra;Reed-Solomon codes;minimum measurement deterministic compressed sensing;complex Reed Solomon decoding;CS recovery algorithm;long sparse vector measurement;channel coding;measurement matrix;error correction scheme;generalized Reed-Solomon encoding-decoding structure;sparsely representable vector measure;fitting basis;lossless reconstruction;Sparse matrices;Reed-Solomon codes;Decoding;Signal processing algorithms;Loss measurement;Compressed sensing;Sensors;Compressed Sensing;Reed Solomon;Deterministic;Sparsity},
  doi = {10.1109/EUSIPCO.2016.7760270},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570247574.pdf},
}

Compressed Sensing (CS) is an emerging field in communications and mathematics in which a small number of measurements of a long sparse vector suffices for lossless reconstruction. In this paper we use results from channel coding to design a recovery algorithm for CS with a deterministic measurement matrix by exploiting error-correction schemes. In particular, we show that a generalized Reed Solomon encoding-decoding structure can be used to measure sparsely representable vectors, i.e., vectors that are sparse in some fitting basis, down to the theoretical minimum number of measurements with guaranteed lossless reconstruction, even in the low-dimensional case.
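The connection to decoding over the complex field can be sketched concretely: 2k consecutive DFT coefficients (the "syndromes") of a k-sparse length-n vector determine it exactly via an annihilating (error-locator) filter, matching the theoretical minimum of 2k measurements. This is a generic noiseless Prony-style decoder under simple assumptions, not the authors' full algorithm.

```python
import numpy as np

def decode(s, k, n):
    """Recover a k-sparse x in C^n from its first 2k DFT coefficients s (noiseless)."""
    # Annihilating filter h of degree k: sum_l h[l] * s[m - l] = 0 for m = k..2k-1.
    T = np.array([[s[k + i - l] for l in range(k + 1)] for i in range(k)])
    h = np.linalg.svd(T)[2][-1].conj()           # null vector of the Toeplitz system
    roots = np.roots(h)                           # roots are w**t_j with w = exp(-2j*pi/n)
    t = np.round(-np.angle(roots) * n / (2 * np.pi)).astype(int) % n   # support
    V = np.exp(-2j * np.pi * np.outer(np.arange(2 * k), t) / n)        # Vandermonde
    a = np.linalg.lstsq(V, s, rcond=None)[0]      # amplitudes, given the support
    x = np.zeros(n, dtype=complex)
    x[t] = a
    return x

n, k = 64, 3
x = np.zeros(n, dtype=complex)
x[[5, 17, 40]] = [1.0, 2.0 - 1j, -0.5]
s = np.fft.fft(x)[:2 * k]                         # 2k deterministic measurements
print(np.allclose(decode(s, k, n), x))            # exact recovery from 6 samples
```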
Robust compressive shift retrieval in linear time. Clausen, M.; and Kurth, F. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 364-368, Aug 2016.
@InProceedings{7760271,
  author = {M. Clausen and F. Kurth},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Robust compressive shift retrieval in linear time},
  year = {2016},
  pages = {364-368},
  abstract = {Suppose two finite signals are related by an unknown cyclic shift. Fast algorithms for finding such a shift or variants thereof are of great importance in a number of applications, e.g., localization and target tracking using acoustic sensors. The standard solution, solving shift finding by maximizing the cross-correlation between the two signals, may be rather efficiently computed using fast Fourier transforms (FFTs). Inspired by compressive sensing, faster algorithms have been recently proposed based on sparse FFTs. In this paper, we transform the shift finding problem into the spectral domain as well. As a first contribution, by combining the Fourier Shift Theorem with the Bézout Identity from elementary number theory, we obtain explicit formulas for the unknown shift parameter. This leads to linear time algorithms for shift finding in the noise-free setting. As a second contribution, we extend this result to the fast recovery of weighted sums of two shifts. Furthermore, we introduce a novel iterative algorithm for estimation of the unknown shift parameter for the case of noisy signals and provide a sufficient criterion for exact shift recovery. A slightly relaxed criterion leads to a linear time median algorithm in the noisy setting with high recovery rates even for low SNRs.},
  keywords = {compressed sensing;fast Fourier transforms;iterative methods;number theory;parameter estimation;compressive shift retrieval;cyclic shift;signal cross-correlation maximization;fast Fourier transform;FFT;compressive sensing;shift finding problem;spectral domain;Fourier shift theorem;Bezout Identity;elementary number theory;noise-free setting;weighted sums recovery;iterative algorithm;shift recovery;linear time median algorithm;SNR;unknown shift parameter estimation;Signal processing algorithms;Fourier transforms;Noise measurement;Compressed sensing;Estimation;Europe;Signal processing;Shift Retrieval;Compressive Sensing;Fourier Transform;Bézout Identity;TDE;TOA;TDOA},
  doi = {10.1109/EUSIPCO.2016.7760271},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250744.pdf},
}

Suppose two finite signals are related by an unknown cyclic shift. Fast algorithms for finding such a shift, or variants thereof, are of great importance in a number of applications, e.g., localization and target tracking using acoustic sensors. The standard solution, shift finding by maximizing the cross-correlation between the two signals, may be computed rather efficiently using fast Fourier transforms (FFTs). Inspired by compressive sensing, faster algorithms based on sparse FFTs have recently been proposed. In this paper, we transform the shift-finding problem into the spectral domain as well. As a first contribution, by combining the Fourier shift theorem with the Bézout identity from elementary number theory, we obtain explicit formulas for the unknown shift parameter. This leads to linear-time algorithms for shift finding in the noise-free setting. As a second contribution, we extend this result to the fast recovery of weighted sums of two shifts. Furthermore, we introduce a novel iterative algorithm for estimating the unknown shift parameter in the case of noisy signals and provide a sufficient criterion for exact shift recovery. A slightly relaxed criterion leads to a linear-time median algorithm in the noisy setting with high recovery rates even at low SNRs.
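For reference, the baseline the paper accelerates, maximizing the circular cross-correlation via FFTs, takes only a few lines. The authors' closed-form, linear-time estimator replaces this O(N log N) scan; the sketch below shows only the baseline.

```python
import numpy as np

def shift_by_xcorr(x, y):
    """Estimate s such that y[n] = x[(n - s) mod N], via circular cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x)))
    return int(np.argmax(np.abs(corr)))

x = np.random.default_rng(0).standard_normal(256)
y = np.roll(x, 37)           # unknown cyclic shift to be retrieved
print(shift_by_xcorr(x, y))  # -> 37
```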
Regularized low-coherence overcomplete dictionary learning for sparse signal decomposition. Sadeghi, M.; Babaie-Zadeh, M.; and Jutten, C. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 369-373, Aug 2016.
@InProceedings{7760272,
  author = {M. Sadeghi and M. Babaie-Zadeh and C. Jutten},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Regularized low-coherence overcomplete dictionary learning for sparse signal decomposition},
  year = {2016},
  pages = {369-373},
  abstract = {This paper deals with learning an overcomplete set of atoms that have low mutual coherence. To this aim, we propose a new dictionary learning (DL) problem that enables a control on the amounts of the decomposition error and the mutual coherence of the atoms of the dictionary. Unlike existing methods, our new problem directly incorporates the mutual coherence term into the usual DL problem as a regularizer. We also propose an efficient algorithm to solve the new problem. Our new algorithm uses block coordinate descent, and updates the dictionary atom-by-atom, leading to closed-form solutions. We demonstrate the superiority of our new method over existing approaches in learning low-coherence overcomplete dictionaries for natural image patches.},
  keywords = {compressed sensing;matrix algebra;signal representation;source separation;regularized low-coherence overcomplete dictionary learning;sparse signal decomposition;mutual coherence;dictionary atom-by-atom;learning low-coherence overcomplete dictionaries;natural image patches;Dictionaries;Signal processing algorithms;Coherence;Cost function;Atomic measurements;Europe;Signal processing},
  doi = {10.1109/EUSIPCO.2016.7760272},
  issn = {2076-1465},
  month = {Aug},
}

This paper deals with learning an overcomplete set of atoms that have low mutual coherence. To this aim, we propose a new dictionary learning (DL) problem that enables control over the trade-off between the decomposition error and the mutual coherence of the atoms of the dictionary. Unlike existing methods, our new problem directly incorporates the mutual coherence term into the usual DL problem as a regularizer. We also propose an efficient algorithm to solve the new problem. Our new algorithm uses block coordinate descent and updates the dictionary atom by atom, leading to closed-form solutions. We demonstrate the superiority of our new method over existing approaches in learning low-coherence overcomplete dictionaries for natural image patches.
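Mutual coherence, the quantity being regularized here, is the largest absolute inner product between distinct unit-norm atoms, i.e., the largest off-diagonal entry of the Gram matrix that a coherence penalty acts on. A small helper (the dictionary size is an arbitrary example):

```python
import numpy as np

def mutual_coherence(D):
    """Max absolute inner product between distinct unit-norm atoms (columns) of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)   # normalize atoms
    G = np.abs(Dn.T @ Dn)                               # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                            # ignore self-products
    return G.max()

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))     # overcomplete: 128 atoms in R^64
print(mutual_coherence(D))             # random dictionaries are moderately coherent
```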
Speech emotion recognition using kernel sparse representation based classifier. Sharma, P.; Abrol, V.; Sachdev, A.; and Dileep, A. D. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 374-377, Aug 2016.
@InProceedings{7760273,
  author = {P. Sharma and V. Abrol and A. Sachdev and A. D. Dileep},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Speech emotion recognition using kernel sparse representation based classifier},
  year = {2016},
  pages = {374-377},
  abstract = {In this paper, we propose to use a kernel sparse representation based classifier (KSRC) for the task of speech emotion recognition. Further, the recognition performance using the KSRC is improved by imposing a group sparsity constraint. The speech utterances with same emotion may have different duration, but the frame sequence information does not play a crucial role in this task. Hence, in this work, we propose to use dynamic kernels which explicitly models the variability in duration of speech signals. Experimental results demonstrate that, given a suitable kernel, KSRC with group sparsity constraint performs better as compared to the state-of-the-art support vector machines (SVM) based classifiers.},
  keywords = {emotion recognition;operating system kernels;pattern classification;speech recognition;kernel sparse representation-based classifier;KSRC;speech emotion recognition performance;group sparsity constraint;speech utterance;Kernel;Speech;Speech recognition;Training;Emotion recognition;Dictionaries;Support vector machines;Kernel sparse representations;group sparsity;speech emotion recognition},
  doi = {10.1109/EUSIPCO.2016.7760273},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252017.pdf},
}

In this paper, we propose to use a kernel sparse representation based classifier (KSRC) for the task of speech emotion recognition. The recognition performance of the KSRC is further improved by imposing a group sparsity constraint. Speech utterances with the same emotion may have different durations, but frame sequence information does not play a crucial role in this task. Hence, we propose to use dynamic kernels, which explicitly model the variability in the duration of speech signals. Experimental results demonstrate that, given a suitable kernel, KSRC with a group sparsity constraint outperforms state-of-the-art support vector machine (SVM) based classifiers.
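A kernel sparse-representation classifier can be written entirely in terms of kernel evaluations: both the ℓ1-penalized coding step and the class-wise residuals reduce to the training Gram matrix K and the vector k(X, y). The sketch below uses a Gaussian kernel (an assumption; the paper advocates dynamic kernels) and plain ISTA for the coding step, omitting the group-sparsity constraint.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian kernel matrix between row-stacked samples A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ksrc_predict(X, labels, y, lam=0.05, n_iter=300):
    """Kernel SRC: code phi(y) over phi(X) with an l1 penalty, classify by residual."""
    K, ky, kyy = rbf(X, X), rbf(X, y[None, :])[:, 0], 1.0   # k(y,y) = 1 for RBF
    L = np.linalg.norm(K, 2)
    a = np.zeros(len(X))
    for _ in range(n_iter):                      # ISTA in the kernel domain
        a = a - (K @ a - ky) / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
    residuals = {}
    for c in np.unique(labels):
        ac = np.where(labels == c, a, 0.0)       # keep only class-c coefficients
        residuals[c] = kyy - 2 * ac @ ky + ac @ K @ ac   # ||phi(y) - Phi a_c||^2
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
labels = np.array([0] * 20 + [1] * 20)
print(ksrc_predict(X, labels, rng.normal(3, 1, 8)))   # -> 1
```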
Recovery guarantees for mixed norm ℓp1, p2 block sparse representations. Afdideh, F.; Phlypo, R.; and Jutten, C. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 378-382, Aug 2016.
@InProceedings{7760274,
  author = {F. Afdideh and R. Phlypo and C. Jutten},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Recovery guarantees for mixed norm ℓp1, p2 block sparse representations},
  year = {2016},
  pages = {378-382},
  abstract = {In this work, we propose theoretical and algorithmic-independent recovery conditions which guarantee the uniqueness of block sparse recovery in general dictionaries through a general mixed norm optimization problem. These conditions are derived using the proposed block uncertainty principles and block null space property, based on some newly defined characterizations of block spark, and (p, p)-block mutual incoherence. We show that there is improvement in the recovery condition when exploiting the block structure of the representation. In addition, the proposed recovery condition extends the similar results for block sparse setting by generalizing the criterion for determining the active blocks, generalizing the block sparse recovery condition, and relaxing some constraints on blocks such as linear independency of the columns.},
  keywords = {optimisation;signal representation;mixed norm ℓp1-p2 block sparse representation;algorithmic-independent recovery condition;mixed norm optimization problem;block uncertainty;block null space property;block spark characterization;block sparse recovery condition;block mutual incoherence constant;Dictionaries;Optimization;Sparks;Uncertainty;Europe;Signal processing;Signal processing algorithms;Block-sparsity;Block-sparse recovery conditions;Block Mutual Incoherence Constant (BMIC);Block Spark;Block Uncertainty Principle (BUP)},
  doi = {10.1109/EUSIPCO.2016.7760274},
  issn = {2076-1465},
  month = {Aug},
}

In this work, we propose theoretical, algorithm-independent recovery conditions which guarantee the uniqueness of block sparse recovery in general dictionaries through a general mixed norm optimization problem. These conditions are derived using the proposed block uncertainty principles and block null space property, based on newly defined characterizations of block spark and (p, p)-block mutual incoherence. We show that exploiting the block structure of the representation improves the recovery condition. In addition, the proposed recovery condition extends similar results for the block sparse setting by generalizing the criterion for determining the active blocks, generalizing the block sparse recovery condition, and relaxing some constraints on blocks, such as linear independence of the columns.
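The mixed norm itself is simple to compute: an inner ℓp2 norm within each block followed by an outer ℓp1 norm across blocks, with block sparsity corresponding to counting the nonzero block norms. A direct implementation (block size and test vector are arbitrary):

```python
import numpy as np

def mixed_norm(x, block_size, p1, p2):
    """l_{p1,p2} norm: inner l_{p2} within each block, outer l_{p1} across blocks."""
    blocks = x.reshape(-1, block_size)
    inner = np.linalg.norm(blocks, ord=p2, axis=1)   # one l_{p2} value per block
    return np.linalg.norm(inner, ord=p1)

x = np.zeros(12)
x[4:8] = [1.0, -2.0, 0.5, 0.0]                        # one active block of size 4
print(mixed_norm(x, 4, 1, 2))                          # l_{1,2}: sum of block l2 norms
print(np.count_nonzero(np.linalg.norm(x.reshape(-1, 4), axis=1)))  # block "l0" count: 1
```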
Video selection for visual sensor networks: A motion-based ranking algorithm. Moretti, S.; Mazzotti, M.; and Chiani, M. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 383-387, Aug 2016.
@InProceedings{7760275,
  author = {S. Moretti and M. Mazzotti and M. Chiani},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Video selection for visual sensor networks: A motion-based ranking algorithm},
  year = {2016},
  pages = {383-387},
  abstract = {A Visual Sensor Network (VSN) is composed by several cameras, in general with different characteristics and orientations, which are used to cover a certain Area of Interest (AoI). To provide an optimal and autonomous exploitation of the VSN video streams, suitable algorithms are needed for selecting the cameras capable to guarantee the best video quality for the specific AoI in the scene. In this work, a novel content and context-aware camera ranking algorithm is proposed, with the goal to maximize the Quality of Experience (QoE) to the final user. The proposed algorithm takes into account the pose, camera resolution and frame rate, and the quantity of motion in the scene. Subjective tests are performed to compare the ranking of the algorithm with human ranking. Finally, the proposed ranking algorithm is compared with common objective video quality metrics and a previous ranking algorithm, confirming the validity of the approach.},
  keywords = {cameras;image motion analysis;optimisation;quality of experience;video cameras;video signal processing;video streaming;wireless sensor networks;video selection;visual sensor network;motion-based ranking algorithm;area of interest;AoI;VSN video stream;context-aware camera ranking algorithm;quality of experience maximization;QoE maximization;camera resolution;frame rate;motion quantity;Cameras;Signal processing algorithms;Three-dimensional displays;Solid modeling;Visualization;Heuristic algorithms;Optical imaging;Visual Sensor Networks;QoE;camera selection techniques;ranking algorithms},
  doi = {10.1109/EUSIPCO.2016.7760275},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256221.pdf},
}

A Visual Sensor Network (VSN) is composed of several cameras, in general with different characteristics and orientations, which are used to cover a certain Area of Interest (AoI). To provide optimal and autonomous exploitation of the VSN video streams, suitable algorithms are needed for selecting the cameras capable of guaranteeing the best video quality for the specific AoI in the scene. In this work, a novel content- and context-aware camera ranking algorithm is proposed, with the goal of maximizing the Quality of Experience (QoE) of the final user. The proposed algorithm takes into account the pose, camera resolution and frame rate, and the quantity of motion in the scene. Subjective tests are performed to compare the algorithm's ranking with human ranking. Finally, the proposed ranking algorithm is compared with common objective video quality metrics and a previous ranking algorithm, confirming the validity of the approach.
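As a toy illustration only: the snippet below ranks synthetic feeds by a weighted combination of resolution, frame rate, and a crude motion-energy term obtained by frame differencing. The features and weights are arbitrary placeholders; the paper's metric also models camera pose and is validated against subjective tests.

```python
import numpy as np

def motion_energy(frames):
    """Mean absolute inter-frame difference: a crude quantity-of-motion proxy."""
    return float(np.abs(np.diff(frames.astype(float), axis=0)).mean())

def rank_cameras(feeds, w=(0.4, 0.2, 0.4)):
    """feeds: list of (name, frames, resolution_px, fps). Higher score ranks first."""
    scores = {}
    for name, frames, res, fps in feeds:
        feats = np.array([res / 1e6, fps / 60.0, motion_energy(frames)])
        scores[name] = float(np.array(w) @ feats)   # arbitrary linear scoring
    return sorted(scores, key=scores.get, reverse=True)

rng = np.random.default_rng(0)
still = np.repeat(rng.integers(0, 255, (1, 48, 64)), 10, axis=0)   # static scene
moving = rng.integers(0, 255, (10, 48, 64))                        # busy scene
print(rank_cameras([("cam_static", still, 2e6, 30), ("cam_moving", moving, 1e6, 30)]))
```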
A low-rank and joint-sparsity model for hyper-spectral radio-interferometric imaging. Abdulaziz, A.; Dabbech, A.; Onose, A.; and Wiaux, Y. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 388-392, Aug 2016.
@InProceedings{7760276,
  author = {A. Abdulaziz and A. Dabbech and A. Onose and Y. Wiaux},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A low-rank and joint-sparsity model for hyper-spectral radio-interferometric imaging},
  year = {2016},
  pages = {388-392},
  abstract = {With the advent of the next-generation radio-interferometric telescopes, like the Square Kilometre Array, novel signal processing methods are needed to provide the expected imaging resolution and sensitivity from extreme amounts of hyper-spectral data. In this context, we propose a generic non-parametric low-rank and joint-sparsity image model for the regularisation of the associated wide-band inverse problem. We pose a convex optimisation problem and propose the use of an efficient algorithmic solver. The proposed optimisation task requires only one tuning parameter, namely the relative weight between the low-rank and joint-sparsity constraints. Our preliminary simulations suggest superior performance of the model with respect to separate single band imaging, as well as to other recently promoted non-parametric wide-band models leveraging convex optimisation.},
  keywords = {astronomical image processing;image resolution;inverse problems;optimisation;radiowave interferometry;low-rank and joint-sparsity model;hyper-spectral radio-interferometric imaging;next-generation radio-interferometric telescopes;square kilometre array;hyper-spectral data;convex optimisation problem;associated wide-band inverse problem;single band imaging;convex optimisation;Imaging;Frequency measurement;Minimization;Optimization;Dictionaries;Context;Brightness;hyper-spectral image processing;radio-interferometry},
  doi = {10.1109/EUSIPCO.2016.7760276},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256536.pdf},
}

With the advent of next-generation radio-interferometric telescopes, like the Square Kilometre Array, novel signal processing methods are needed to provide the expected imaging resolution and sensitivity from extreme amounts of hyper-spectral data. In this context, we propose a generic non-parametric low-rank and joint-sparsity image model for the regularisation of the associated wide-band inverse problem. We pose a convex optimisation problem and propose the use of an efficient algorithmic solver. The proposed optimisation task requires only one tuning parameter, namely the relative weight between the low-rank and joint-sparsity constraints. Our preliminary simulations suggest superior performance of the model with respect to separate single-band imaging, as well as to other recently promoted non-parametric wide-band models leveraging convex optimisation.
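Solvers for such composite models are typically assembled from two proximal operators: singular-value soft-thresholding for the low-rank (nuclear-norm) term and row-wise shrinkage for the joint-sparsity ℓ2,1 term. Minimal versions of both are sketched below; this is generic machinery, not the paper's solver, and the matrix sizes and thresholds are arbitrary.

```python
import numpy as np

def prox_nuclear(X, t):
    """Singular-value soft-thresholding: proximal operator of t * ||X||_* (low rank)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vh

def prox_l21(X, t):
    """Row-wise shrinkage: proximal operator of t * ||X||_{2,1} (joint sparsity)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))      # e.g. image pixels x spectral channels
print(np.linalg.matrix_rank(prox_nuclear(X, t=8.0)))              # rank is reduced
print(np.count_nonzero(np.linalg.norm(prox_l21(X, t=2.0), axis=1)))  # rows zeroed out
```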
Robust scoring of voice exercises in computer-based speech therapy systems. Diogo, M.; Eskenazi, M.; Magalhães, J.; and Cavaco, S. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 393-397, Aug 2016.
@InProceedings{7760277,
  author = {M. Diogo and M. Eskenazi and J. Magalhães and S. Cavaco},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Robust scoring of voice exercises in computer-based speech therapy systems},
  year = {2016},
  pages = {393-397},
  abstract = {Speech therapy is essential to help children with speech sound disorders. While some computer tools for speech therapy have been proposed, most focus on articulation disorders. Another important aspect of speech therapy is voice quality but not much research has been developed on this issue. As a contribution to fill this gap, we propose a robust scoring model for voice exercises often used in speech therapy sessions, namely the sustained vowel and the increasing/decreasing pitch variation exercises. The models are learned with a support vector machine and double cross-validation, and obtained accuracies from approximately 73.98% to 85.93% while showing a low rate of false negatives. The learned models allow classifying the children's answers on the exercises, thus providing them with real-time feedback on their performance.},
  keywords = {computerised instrumentation;medical computing;medical disorders;paediatrics;patient treatment;pattern classification;speech;support vector machines;robust scoring;voice exercises;computer-based speech therapy systems;children;speech sound disorders;computer tools;articulation disorders;voice quality;speech therapy sessions;pitch variation exercises;support vector machine;double cross-validation;false negatives;real-time feedback;Speech;Medical treatment;Robustness;Europe;Speech recognition;Speech processing;speech therapy;robust scoring;support vector machines;cross-validation},
  doi = {10.1109/EUSIPCO.2016.7760277},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252308.pdf},
}

Speech therapy is essential to help children with speech sound disorders. While some computer tools for speech therapy have been proposed, most focus on articulation disorders. Another important aspect of speech therapy is voice quality, but little research has addressed this issue. As a contribution to filling this gap, we propose a robust scoring model for voice exercises often used in speech therapy sessions, namely the sustained vowel and the increasing/decreasing pitch variation exercises. The models are learned with a support vector machine and double cross-validation, and obtain accuracies from approximately 73.98% to 85.93% while showing a low rate of false negatives. The learned models make it possible to classify the children's answers to the exercises, thus providing them with real-time feedback on their performance.
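A minimal stand-in for the learning setup, an RBF-kernel SVM whose hyperparameters are tuned in an inner cross-validation loop while an outer loop estimates accuracy (one reading of "double cross-validation"), can be put together with scikit-learn. The synthetic features and grid values are assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 12)), rng.normal(1, 1, (60, 12))])  # fake acoustic features
y = np.array([0] * 60 + [1] * 60)          # e.g. incorrect vs correct exercise execution

# Inner loop tunes (C, gamma); outer loop estimates accuracy on held-out folds.
inner = GridSearchCV(SVC(kernel="rbf"),
                     {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=3)
print(cross_val_score(inner, X, y, cv=5).mean())
```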
Penalized linear regression for discrete ill-posed problems: A hybrid least-squares and mean-squared error approach. Suliman, M.; Ballal, T.; Kammoun, A.; and Al-Naffouri, T. Y. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 403-407, Aug 2016.
@InProceedings{7760279,
  author = {M. Suliman and T. Ballal and A. Kammoun and T. Y. Al-Naffouri},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Penalized linear regression for discrete ill-posed problems: A hybrid least-squares and mean-squared error approach},
  year = {2016},
  pages = {403-407},
  abstract = {This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model matrix. This perturbation is introduced to enhance the singular-value (SV) structure of the matrix and hence to provide a better solution. The proposed approach is derived to select the regularization parameter in a way that minimizes the mean-squared error (MSE) of the estimator. Numerical results demonstrate that the proposed approach outperforms a set of benchmark methods in most cases when applied to different scenarios of discrete ill-posed problems. Jointly, the proposed approach enjoys the lowest run-time and offers the highest level of robustness amongst all the tested methods.},
  keywords = {least mean squares methods;minimisation;regression analysis;signal processing;singular value decomposition;linear regression penalization;hybrid least-square-mean-squared error approach;linear least-square discrete ill-posed problem;artificial perturbation matrix;singular value structure enhancenent;SV structure enhancement;MSE minimization;Europe;Signal processing;Mathematical model;STEM;Periodic structures;Benchmark testing;Indexes;linear estimation;linear least-squares;ill-posed problem;regularization},
  doi = {10.1109/EUSIPCO.2016.7760279},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252304.pdf},
}

This paper proposes a new approach to finding the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model matrix. This perturbation is introduced to enhance the singular-value (SV) structure of the matrix and hence to provide a better solution. The proposed approach is derived to select the regularization parameter in a way that minimizes the mean-squared error (MSE) of the estimator. Numerical results demonstrate that the proposed approach outperforms a set of benchmark methods in most cases when applied to different scenarios of discrete ill-posed problems. In addition, the proposed approach has the lowest run-time and offers the highest level of robustness among all the tested methods.
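The estimator family at issue is ridge-type regularized least squares; the open question is how to pick the regularization parameter. The sketch below shows plain Tikhonov regularization on an artificially ill-posed problem, where the error explodes without regularization. The paper's contribution, an MSE-motivated perturbation-based rule for choosing the parameter, is not reproduced here; the lambda values are arbitrary.

```python
import numpy as np

def ridge(A, y, lam):
    """Regularized LS: argmin_x ||y - A x||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Ill-posed toy problem: rapidly decaying singular values.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
A = U[:, :20] * np.logspace(0, -8, 20) @ V.T
x = rng.standard_normal(20)
y = A @ x + 1e-4 * rng.standard_normal(50)

for lam in (0.0, 1e-8, 1e-4):
    print(lam, np.linalg.norm(ridge(A, y, lam) - x))   # huge error at lam = 0
```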
Robust and rapid estimation of the parameters of harmonic signals in three phase power systems. Sun, J.; Ye, S.; and Aboutanios, E. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 408-412, Aug 2016.
@InProceedings{7760280,
  author = {J. Sun and S. Ye and E. Aboutanios},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Robust and rapid estimation of the parameters of harmonic signals in three phase power systems},
  year = {2016},
  pages = {408-412},
  abstract = {We present a novel algorithm for rapid and efficient estimation of the fundamental frequency, phase and amplitude of harmonically distorted signals in balanced three-phase (3PH) power systems. The proposed algorithm exploits the harmonic structure of the signal to enhance the quality of the parameter estimates. It operates in the frequency domain, employing an efficient iterative interpolation procedure on the Fourier coefficients. The estimator has a low computational complexity, being of the same order as the fast Fourier transform (FFT) algorithm. Yet, it outperforms state-of-art high resolution parameter estimators for 3PH power system signals, especially when the available data points are limited and/or the signal to noise ratio is poor.},
  keywords = {amplitude estimation;computational complexity;fast Fourier transforms;Fourier transforms;frequency estimation;frequency-domain analysis;interpolation;iterative methods;phase estimation;power systems;signal processing;three phase power systems;parameter estimation;fundamental frequency estimation;amplitude estimation;phase estimation;harmonically distorted signals;frequency domain;iterative interpolation procedure;Fourier coefficients;computational complexity;fast Fourier transform algorithm;FFT;3PH power system signals;data points;signal to noise ratio;Harmonic analysis;Estimation;Frequency estimation;Power system harmonics;Signal to noise ratio;Signal processing algorithms;Fundamental frequency estimation;smart grid;Fourier interpolation;three-phase power system},
  doi = {10.1109/EUSIPCO.2016.7760280},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251845.pdf},
}

We present a novel algorithm for rapid and efficient estimation of the fundamental frequency, phase and amplitude of harmonically distorted signals in balanced three-phase (3PH) power systems. The proposed algorithm exploits the harmonic structure of the signal to enhance the quality of the parameter estimates. It operates in the frequency domain, employing an efficient iterative interpolation procedure on the Fourier coefficients. The estimator has a low computational complexity, of the same order as the fast Fourier transform (FFT) algorithm. Yet it outperforms state-of-the-art high-resolution parameter estimators for 3PH power system signals, especially when the available data points are limited and/or the signal-to-noise ratio is poor.
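The core of such Fourier-interpolation estimators is easy to reproduce for a single complex tone: after a coarse FFT peak search, DFT coefficients interpolated half a bin on either side of the current estimate drive an iterative refinement of the fractional bin offset. The sketch below implements this classic single-tone scheme with two iterations, assuming one positive-frequency tone; the paper's estimator additionally exploits the harmonic structure of three-phase signals.

```python
import numpy as np

def estimate_frequency(x, n_iter=2):
    """Frequency (cycles/sample) of a single complex tone via iterative
    half-bin Fourier-coefficient interpolation after a coarse FFT search."""
    N = len(x)
    n = np.arange(N)
    m = np.argmax(np.abs(np.fft.fft(x)))       # coarse search: strongest bin
    delta = 0.0
    for _ in range(n_iter):
        Xp = np.sum(x * np.exp(-2j * np.pi * (m + delta + 0.5) * n / N))
        Xm = np.sum(x * np.exp(-2j * np.pi * (m + delta - 0.5) * n / N))
        delta += 0.5 * np.real((Xp + Xm) / (Xp - Xm))   # fractional bin update
    return (m + delta) / N

rng = np.random.default_rng(0)
N, f = 128, 0.17731
x = np.exp(2j * np.pi * f * np.arange(N)) + 0.05 * rng.standard_normal(N)
print(estimate_frequency(x))                    # close to 0.17731
```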
Optimal information ordering for sequential detection with cognitive biases. Akl, N.; and Tewfik, A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 413-417, Aug 2016.
@InProceedings{7760281,
  author = {N. Akl and A. Tewfik},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Optimal information ordering for sequential detection with cognitive biases},
  year = {2016},
  pages = {413-417},
  abstract = {The manner and order in which data is presented to a human observer can lead the human to make dramatically different decisions. This raises a question on how to best present data to an observer to achieve the best decision-making performance and minimum adverse effects. In this paper, we present a general framework to model cognitive biases that interfere in the human decision making process. We examine the problem of ordering observations in binary sequential detection. Our treatment considers the limited cognitive effort exerted by a decision-maker and the effect of the observations along with their distributions on the stopping time and accuracy of the sequential test. The complexity of the ordering algorithm is linear in the size of the observation set. Both the average time to make a decision and the probability of decision error are minimized.},
  keywords = {cognition;decision making;probability;signal detection;decision error probability;stopping time;binary sequential detection;human decision making process;minimum adverse effect;human observer;cognitive bias;optimal information ordering;Observers;Decision making;Europe;Signal processing;Real-time systems;Bayes methods;Complexity theory;Information Display;Decision-making;Sequential Test;Cognitive biases;Cognitive Effort},
  doi = {10.1109/EUSIPCO.2016.7760281},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252338.pdf},
}

The manner and order in which data are presented to a human observer can lead the human to make dramatically different decisions. This raises the question of how best to present data to an observer to achieve the best decision-making performance with minimum adverse effects. In this paper, we present a general framework to model cognitive biases that interfere with the human decision-making process. We examine the problem of ordering observations in binary sequential detection. Our treatment considers the limited cognitive effort exerted by a decision-maker and the effect of the observations, along with their distributions, on the stopping time and accuracy of the sequential test. The complexity of the ordering algorithm is linear in the size of the observation set. Both the average time to make a decision and the probability of decision error are minimized.
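The sequential test underlying the ordering problem is Wald's SPRT: log-likelihood ratios of the observations are accumulated in the order presented, and the test stops at the first threshold crossing, so the presentation order directly changes the stopping time. A generic Gaussian-mean SPRT is sketched below; the paper's ordering policy itself is not reproduced, and all parameters are illustrative.

```python
import numpy as np

def sprt(obs, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's SPRT for H1: N(mu1, sigma^2) vs H0: N(mu0, sigma^2).
    Returns (decision, number of samples consumed)."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for t, z in enumerate(obs, 1):
        # Log-likelihood ratio increment for one Gaussian observation.
        llr += ((z - mu0) ** 2 - (z - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", t
        if llr <= lower:
            return "H0", t
    return "undecided", len(obs)

rng = np.random.default_rng(0)
obs = rng.normal(1.0, 1.0, 100)            # data generated under H1
print(sprt(obs))                           # decision after a few samples
# Reordering the SAME observations changes the stopping time, which is
# exactly the degree of freedom the paper optimizes over.
print(sprt(np.sort(obs)[::-1]))            # most H1-favorable samples first
```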
Optimal quantization of TV white space regions for a broadcast based geolocation database. Maheshwari, G.; and Kumar, A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 418-422, Aug 2016.
@InProceedings{7760282,
  author = {G. Maheshwari and A. Kumar},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Optimal quantization of TV white space regions for a broadcast based geolocation database},
  year = {2016},
  pages = {418-422},
  abstract = {Currently, TV white space databases communicate the available channels over a reliable Internet connection to the secondary devices. For places where an Internet connection is not available, such as in developing countries, we propose a broadcast based geolocation database. This proposed geolocation database will broadcast the TV white space (or the primary services protection regions) on a rate-constrained channel. In this work, the feasibility of a broadcast based geolocation database transmission will be examined over rate constrained satellite channels. To address this problem, the quantization or digital representation of primary services protection regions is considered. Due to the quantization process, any point in the protection region must not be declared as white space region. Thus, this quantization problem is different than traditional quantizers which minimize the mean-squared error. A quantizer design algorithm is the main result of this work, which minimizes the TV white space area declared as protected region due to quantization. Performance results of our quantization algorithm on US and India UHF-band protection regions will be shown. The update-rate versus bandwidth tradeoff, while using satellite TV channels, for the proposed broadcast based geolocation database will also be explored in this work.},
  keywords = {direct broadcasting by satellite;Internet;quantisation (signal);radio spectrum management;satellite communication;wireless channels;TV white space region optimal quantization;broadcast based geolocation database;Internet connection;rate constrained satellite channel;primary service protection region digital representation;US;India;UHF-band protection region;update-rate versus bandwidth tradeoff;Quantization (signal);TV;White spaces;Databases;Geology;Transmitters;Internet;quantization (signal);cognitive radio;approximation error},
  doi = {10.1109/EUSIPCO.2016.7760282},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255599.pdf},
}

Currently, TV white space databases communicate the available channels to secondary devices over a reliable Internet connection. For places where an Internet connection is not available, such as in developing countries, we propose a broadcast-based geolocation database. This database broadcasts the TV white space (or the primary-services protection regions) on a rate-constrained channel. In this work, the feasibility of broadcast-based geolocation database transmission is examined over rate-constrained satellite channels. To address this problem, the quantization, or digital representation, of primary-services protection regions is considered. Due to the quantization process, no point in the protection region may be declared a white space region. This quantization problem therefore differs from traditional quantization, which minimizes the mean-squared error. The main result of this work is a quantizer design algorithm that minimizes the TV white space area declared as protected region due to quantization. Performance results of the quantization algorithm on US and India UHF-band protection regions are shown. The update-rate versus bandwidth tradeoff of the proposed broadcast-based geolocation database, when using satellite TV channels, is also explored.
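The one-sided requirement, that no protected point may ever be broadcast as white space, makes the quantizer conservative: a coarse cell can be declared white space only if every fine cell it covers is. On a binary availability map this is a block-wise logical AND, as in the sketch below (the grid sizes and the synthetic map are arbitrary, and this is a naive uniform-grid quantizer, not the paper's optimized design):

```python
import numpy as np

def conservative_coarsen(ws_map, factor):
    """ws_map: boolean fine grid, True = white space available.
    A coarse cell is True only if ALL covered fine cells are True, so a
    protected point is never mislabeled (at the cost of lost white space)."""
    h, w = ws_map.shape
    blocks = ws_map[: h - h % factor, : w - w % factor]
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    return blocks.all(axis=(1, 3))

rng = np.random.default_rng(0)
fine = rng.random((64, 64)) > 0.3               # synthetic availability map
coarse = conservative_coarsen(fine, 8)
print(fine.mean(), coarse.mean())               # coarse map declares less white space
```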
High rate quantization analysis for a class of finite rate of innovation signals. Jayawant, A.; and Kumar, A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 423-427, Aug 2016.
@InProceedings{7760283,
  author = {A. Jayawant and A. Kumar},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {High rate quantization analysis for a class of finite rate of innovation signals},
  year = {2016},
  pages = {423-427},
  abstract = {Acquisition and perfect reconstruction of finite rate of innovation (FRI) signals was proposed first by Vetterli, Marziliano, and Blu [1]. To the best of our knowledge, the stability of their reconstruction procedure in the presence of scalar quantizers has not been addressed in the literature. For periodic stream of Dirac FRI signal, which is an important subclass of FRI signals, the stability of reconstruction when quantization is introduced on acquired samples is analyzed in this work. It is shown that the parameters of stream of Diracs can be obtained with error O(ε), where ε is the per sample quantization error. This result holds in the high-rate quantization regime when ε is sufficiently small.},
  keywords = {quantisation (signal);signal reconstruction;finite rate of innovation signal reconstruction;FRI signal reconstruction;high-rate quantization analysis;scalar quantizer;quantization error;Quantization (signal);Fourier series;Reconstruction algorithms;Europe;Technological innovation;Stability analysis;quantization (signal);signal sampling;signal reconstruction;signal analysis},
  doi = {10.1109/EUSIPCO.2016.7760283},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255638.pdf},
}

Acquisition and perfect reconstruction of finite rate of innovation (FRI) signals was first proposed by Vetterli, Marziliano, and Blu [1]. To the best of our knowledge, the stability of their reconstruction procedure in the presence of scalar quantizers has not been addressed in the literature. For the periodic stream of Diracs, an important subclass of FRI signals, the stability of reconstruction when quantization is introduced on the acquired samples is analyzed in this work. It is shown that the parameters of the stream of Diracs can be obtained with error O(ε), where ε is the per-sample quantization error. This result holds in the high-rate quantization regime, when ε is sufficiently small.
On the existence of the band-limited interpolation of non-band-limited signals. Boche, H.; and Tampubolon, E. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 428-432, Aug 2016.
@InProceedings{7760284,
  author = {H. Boche and E. Tampubolon},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On the existence of the band-limited interpolation of non-band-limited signals},
  year = {2016},
  pages = {428-432},
  abstract = {The distribution theory serves as an important theoretical foundation for some approaches arose from the engineering intuition. Particular examples are approaches based on the delta-“function”. In this work, we show that the usual construction of a band-limited interpolation (BLI) of signals “vanishing” at infinity (e.g., in [1], [2]), using the delta-“function”, is erroneous, both in the distributional sense and in the tempered distributional sense. The latter sense is in particular important for analyzing the frequency behaviour of that method - the aliasing error and the truncation error. Furthermore, we show that it is possible to construct a BLI without using the delta-“function”. This can in particular be done easily for the space of signals having integrable frequencies. If one consider another notion of band-limited functions, a BLI can even be given for the space of continuous signals “vanishing” at infinity. For the space of continuous signals, we answer the question whether there exists a BLI negatively.},
  keywords = {interpolation;signal processing;statistical distributions;nonband-limited signal band-limited interpolation;distribution theory;BLI;delta-function;aliasing error;truncation error;continuous signal space;Interpolation;Signal processing;Convergence;Europe;Standards;Functional analysis;Fourier transforms;Band-limited interpolation;Band-limited signals;(Tempered) Distributions;Sampling;Divergence},
  doi = {10.1109/EUSIPCO.2016.7760284},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255671.pdf},
}

Distribution theory serves as an important theoretical foundation for several approaches that arose from engineering intuition; particular examples are approaches based on the delta “function”. In this work, we show that the usual construction of a band-limited interpolation (BLI) of signals “vanishing” at infinity (e.g., in [1], [2]) using the delta “function” is erroneous, both in the distributional sense and in the tempered distributional sense. The latter sense is particularly important for analyzing the frequency behaviour of the method, namely the aliasing error and the truncation error. Furthermore, we show that it is possible to construct a BLI without using the delta “function”. This can be done easily for the space of signals having integrable frequencies. If one considers another notion of band-limited functions, a BLI can even be given for the space of continuous signals “vanishing” at infinity. For the space of continuous signals, we answer the question of whether a BLI exists in the negative.
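The construction under discussion is the Shannon-type band-limited interpolation x(t) = Σ_n x[n] sinc(t − n). For well-behaved (e.g., absolutely summable) samples it is unproblematic and straightforward to evaluate, as below; the paper's point is precisely that for signals that merely vanish at infinity the delta-function derivation, and in some spaces the BLI itself, breaks down. The sample sequence here is an arbitrary decaying example.

```python
import numpy as np

def bli(samples, t):
    """Band-limited interpolation of samples x[n], n = 0..N-1, at times t.
    np.sinc is the normalized sinc: sinc(u) = sin(pi u) / (pi u)."""
    n = np.arange(len(samples))
    return np.sum(samples[None, :] * np.sinc(t[:, None] - n[None, :]), axis=1)

n = np.arange(64)
x = np.exp(-0.1 * n) * np.cos(0.4 * np.pi * n)    # decaying sample sequence
print(bli(x, np.linspace(10.0, 12.0, 5)))          # values between the samples
print(np.allclose(bli(x, n.astype(float)), x))     # reproduces samples at integers
```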
Positive trigonometric polynomials and one-dimensional discrete phase retrieval problem. Rusu, C.; and Astola, J. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 433-437, Aug 2016.
@InProceedings{7760285,
  author = {C. Rusu and J. Astola},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Positive trigonometric polynomials and one-dimensional discrete phase retrieval problem},
  year = {2016},
  pages = {433-437},
  abstract = {In this paper some results on Schur transform are reviewed to address the problem of one-dimensional discrete phase retrieval. The goal is to provide a test whether a sequence of input magnitude data gives a solution to one-dimensional discrete phase retrieval problem. It has been previously shown that this issue is related to the nonnegativity of trigonometric polynomials. The proposed method is similar to the table procedure for counting the multiplicities of zeros on unit circle. Examples and numerical results are also provided to indicate that the problem of one-dimensional discrete phase retrieval often does not have a solution.},
  keywords = {polynomials;signal processing;transforms;positive trigonometric polynomials;one-dimensional discrete phase retrieval problem;Schur transform;table procedure;Signal processing;Europe;Discrete Fourier transforms;Correlation;Signal processing algorithms},
  doi = {10.1109/EUSIPCO.2016.7760285},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255687.pdf},
}

In this paper, some results on the Schur transform are reviewed to address the one-dimensional discrete phase retrieval problem. The goal is to provide a test of whether a sequence of input magnitude data admits a solution to the one-dimensional discrete phase retrieval problem. It has previously been shown that this issue is related to the nonnegativity of trigonometric polynomials. The proposed method is similar to the table procedure for counting the multiplicities of zeros on the unit circle. Examples and numerical results are also provided, indicating that the one-dimensional discrete phase retrieval problem often does not have a solution.
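A crude numerical companion to the exact test: a real trigonometric polynomial given by one-sided coefficients can be evaluated on a dense grid with a single zero-padded FFT and its minimum checked for nonnegativity. This grid check is only a sanity probe and does not replace the paper's Schur-transform table procedure; c[0] is assumed real and the test coefficients are arbitrary.

```python
import numpy as np

def trig_poly_min(c, oversample=64):
    """Minimum over a dense grid of p(w) = sum_{|m|<n} c_m e^{-j m w},
    with c_{-m} = conj(c_m), given one-sided coefficients c[0..n-1] (c[0] real)."""
    n = len(c)
    L = oversample * n
    full = np.concatenate([c, np.zeros(L - (2 * n - 1)), np.conj(c[:0:-1])])
    vals = np.fft.fft(full).real      # p evaluated at w_k = 2*pi*k/L
    return vals.min()

c = np.array([1.0, 0.4, 0.1])         # p(w) = 1 + 0.8 cos(w) + 0.2 cos(2w)
print(trig_poly_min(c))               # >= 0: numerically nonnegative on the grid
```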
Multiplicative update for a class of constrained optimization problems related to NMF and its global convergence. Takahashi, N.; and Seki, M. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 438-442, Aug 2016.
@InProceedings{7760286,
  author = {N. Takahashi and M. Seki},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Multiplicative update for a class of constrained optimization problems related to NMF and its global convergence},
  year = {2016},
  pages = {438-442},
  abstract = {Multiplicative updates are widely used for nonnegative matrix factorization (NMF) as an efficient computational method. In this paper, we consider a class of constrained optimization problems in which a polynomial function of the product of two matrices is minimized subject to the nonnegativity constraints. These problems are closely related to NMF because the polynomial function covers many error function used for NMF. We first derive a multiplicative update rule for those problems by using the unified method developed by Yang and Oja. We next prove that a modified version of the update rule has the global convergence property in the sense of Zangwill under certain conditions. This result can be applied to many existing multiplicative update rules for NMF to guarantee their global convergence.},
  keywords = {convergence;matrix decomposition;matrix multiplication;optimisation;polynomials;multiplicative update;constrained optimization problems;global convergence;nonnegative matrix factorization;NMF;polynomial function;Convergence;Optimization;Euclidean distance;Europe;Signal processing;Linear programming;Matrix decomposition},
  doi = {10.1109/EUSIPCO.2016.7760286},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255730.pdf},
}

Multiplicative updates are widely used for nonnegative matrix factorization (NMF) as an efficient computational method. In this paper, we consider a class of constrained optimization problems in which a polynomial function of the product of two matrices is minimized subject to nonnegativity constraints. These problems are closely related to NMF because the polynomial function covers many error functions used for NMF. We first derive a multiplicative update rule for these problems using the unified method developed by Yang and Oja. We next prove that a modified version of the update rule has the global convergence property in the sense of Zangwill under certain conditions. This result can be applied to many existing multiplicative update rules for NMF to guarantee their global convergence.
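The prototypical update covered by such analyses is the Lee-Seung multiplicative rule for the Euclidean cost, where nonnegativity is preserved because each factor is multiplied by a ratio of nonnegative terms. In the sketch below, the small eps floor mimics the kind of modification (keeping entries strictly positive) under which global convergence proofs are typically obtained; the exact modified rule of the paper is not reproduced.

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-12):
    """Lee-Seung multiplicative updates for min ||V - W H||_F^2 with W, H >= 0."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W, H = rng.random((m, r)) + eps, rng.random((r, n)) + eps
    for _ in range(n_iter):
        H = np.maximum(H * (W.T @ V) / (W.T @ W @ H + eps), eps)   # H-update
        W = np.maximum(W * (V @ H.T) / (W @ H @ H.T + eps), eps)   # W-update
    return W, H

V = np.abs(np.random.default_rng(1).standard_normal((30, 20)))
W, H = nmf(V, r=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative fit error
```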
Bayesian unscented Kalman filter for state estimation of nonlinear and non-Gaussian systems. Liu, Z.; Chan, S.; Wu, H.; and Wu, J. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 443-447, Aug 2016.
@InProceedings{7760287,
  author = {Z. Liu and S. Chan and H. Wu and J. Wu},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Bayesian unscented Kalman filter for state estimation of nonlinear and non-Gaussian systems},
  year = {2016},
  pages = {443-447},
  abstract = {This paper proposes a Bayesian unscented Kalman filter with simplified Gaussian mixtures (BUKF-SGM) for dynamic state space estimation of nonlinear and non-Gaussian systems. In the BUKF-SGM, the state and noise densities are approximated as finite Gaussian mixtures, in which the mean and covariance for each component are recursively estimated using the UKF. To avoid the exponential growth of mixture components, a Gaussian mixture simplification algorithm is employed to reduce the number of mixture components, which leads to lower complexity in comparing with conventional resampling and clustering techniques. Experimental results show that the proposed BUKF-SGM can achieve better performance compared with the particle filter (PF)-based algorithms. This provides an attractive alternative for nonlinear state estimation problem.},
  keywords = {Bayes methods;Gaussian processes;Kalman filters;mixture models;nonlinear filters;nonlinear systems;recursive estimation;state estimation;state-space methods;Bayesian unscented Kalman filter;nonlinear system;nonGaussian system;simplified Gaussian mixture;dynamic state space estimation;BUKF-SGM;state density;noise density;recursive estimation;Gaussian mixture simplification algorithm;mixture component number reduction;State estimation;Signal processing algorithms;Kalman filters;Complexity theory;Approximation algorithms;Bayes methods;Bayesian unscented Kalman filter;dynamic state estimation;nonlinear and non-Gaussian system;Gaussian mixture;particle filter},
  doi = {10.1109/EUSIPCO.2016.7760287},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256064.pdf},
}
@InProceedings{7760288,
  author = {F. K. Teklehaymanot and M. Muma and J. Liu and A. M. Zoubir},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {In-network adaptive cluster enumeration for distributed classification and labeling},
  year = {2016},
  pages = {448-452},
  abstract = {A crucial first step for signal processing in decentralized sensor networks with node-specific interests is to agree upon a common unique labeling of all observed sources in the network. The knowledge of “who observes what” is required, e.g., in node-specific audio or video signal enhancement to form node clusters of common interest. Recently proposed in-network distributed adaptive classification and labeling algorithms assume knowledge of the number of objects (clusters), which is not necessarily available in real-world applications. Thus, we consider the problem of estimating the number of data clusters in the distributed adaptive network set-up. We propose two distributed adaptive cluster enumeration methods. They combine the diffusion principle, where the nodes share information within their local neighborhood only (without a fusion center), with the X-means and the PG-means cluster enumeration. Performance is evaluated via simulations, and the applicability of the methods is illustrated using a distributed camera network where moving objects appear and disappear from the Line-of-Sight (LOS) and the number of clusters becomes time-varying.},
  keywords = {audio signal processing;distributed sensors;video signal processing;distributed camera network;distributed adaptive cluster enumeration methods;video signal enhancement;audio signal enhancement;signal processing decentralized sensor networks;distributed classification;in-network adaptive cluster enumeration;Cameras;Clustering algorithms;Signal processing;Labeling;Signal processing algorithms;Convergence;Image color analysis;Distributed Cluster Enumeration;Distributed Classification;Object Labeling;Camera Network;X-means;PG-means;MDMT;Diffusion},
  doi = {10.1109/EUSIPCO.2016.7760288},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256129.pdf},
}
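As a centralized stand-in for the enumeration step, the sketch below picks the number of clusters by minimizing an information criterion over candidate model orders; X-means and PG-means use different splitting tests, and the diffusion-based in-network combination is omitted.

# Minimal centralized sketch of cluster enumeration by information criterion,
# a simple stand-in for the X-means / PG-means rules the paper distributes
# via diffusion over the network.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ((0, 0), (3, 0), (0, 3))])

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
print(min(bics, key=bics.get))  # estimated number of clusters (expect 3)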
@InProceedings{7760289,
  author = {S. Varatharaajan and F. Römer and G. Kostka and F. Keil and F. Uhrmann and G. {Del Galdo}},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On Poisson compressed sensing and parameter estimation in sheet-of-light surface scanning},
  year = {2016},
  pages = {453-457},
  abstract = {Compressed Sensing (CS) has been successfully applied in a number of imaging systems since it can fundamentally increase frame rates and/or the resolution. In this paper, we apply CS to 3-D surface acquisition using Sheet-of-Light (SOL) scanning. The application of CS could potentially increase the speed of the measurement and/or enhance scan resolution with fewer measurements. To analyze the potential performance of a CS-SOL system, we formulate the estimation of the height profile of a target object as a compressive parameter estimation problem and investigate the achievable estimation accuracy in the presence of noise. In the context of compressed sensing, measurement models with AWGN are typically analyzed. However, in imaging applications there are multiple noise sources giving rise to different statistical noise models, in which Poisson noise can be the dominating noise source. This is particularly true for photon-counting detectors that are used in low-light settings. Therefore, in this paper we focus on the compressive parameter estimation problem in the presence of Poisson-distributed photon noise. The achievable estimation accuracy in obtaining height profiles from compressed observations is systematically analyzed with the help of the Cramer-Rao Lower Bound (CRLB). This analysis allows us to compare different CS measurement strategies and quantify the parameter estimation accuracy as a function of system parameters such as the compression ratio, exposure time, image size, etc.},
  keywords = {AWGN;compressed sensing;parameter estimation;Poisson distribution;Poisson compressed sensing;sheet-of-light surface scanning;imaging system;3-D surface acquisition;SOL scanning;scan resolution;CS-SOL system;compressive parameter estimation problem;AWGN;multiple noise source;statistical noise model;photon-counting detector;Poisson distributed photon noise;Cramer-Rao lower bound;CRLB;Photonics;Laser noise;Measurement by laser beam;Image coding;Parameter estimation;Estimation;AWGN},
  doi = {10.1109/EUSIPCO.2016.7760289},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255783.pdf},
}
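In the scalar case, the CRLB analysis above reduces to inverting the Poisson Fisher information I(θ) = Σ_i (∂λ_i/∂θ)² / λ_i. The sketch below evaluates this for a toy forward model (compressive sums of a shifted pulse), which is an illustrative assumption rather than the paper's SOL model.

# Minimal sketch of a scalar Cramer-Rao lower bound under Poisson noise:
# for counts y_i ~ Poisson(lambda_i(theta)), the Fisher information is
# I(theta) = sum_i (d lambda_i / d theta)^2 / lambda_i, and CRLB = 1 / I.
import numpy as np

def crlb_poisson(theta, Phi, pulse, dpulse):
    lam = Phi @ pulse(theta)    # expected photon counts per CS measurement
    dlam = Phi @ dpulse(theta)  # sensitivity of the counts to theta
    return 1.0 / np.sum(dlam ** 2 / lam)

t = np.linspace(0, 1, 200)
pulse = lambda th: 50 * np.exp(-0.5 * ((t - th) / 0.05) ** 2) + 1.0
dpulse = lambda th: 50 * np.exp(-0.5 * ((t - th) / 0.05) ** 2) * (t - th) / 0.05 ** 2
Phi = np.random.default_rng(0).random((40, 200))  # toy CS measurement matrix
print(crlb_poisson(0.4, Phi, pulse, dpulse))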
@InProceedings{7760290,
  author = {F. Roemer and M. Ibrahim and N. Franke and N. Hadaschik and A. Eidloth and B. Sackenreuter and G. {Del Galdo}},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Measurement matrix design for compressed sensing based time delay estimation},
  year = {2016},
  pages = {458-462},
  abstract = {In this paper we study the problem of estimating the unknown delay(s) in a system where we receive a linear combination of several delayed copies of a known transmitted waveform. This problem arises in many applications such as timing-based localization or wireless synchronization. Since accurate delay estimation requires wideband signals, traditional systems need high-speed AD converters, which poses a significant burden on the hardware implementation. Compressive sensing (CS) based system architectures that take measurements at rates significantly below the Nyquist rate and yet achieve accurate delay estimation have been proposed with the goal of alleviating the hardware complexity. In this paper, we particularly discuss the design of the measurement kernels based on a frequency-domain representation and show numerically that an optimized choice can outperform randomly chosen functionals in terms of the delay estimation accuracy.},
  keywords = {compressed sensing;delay estimation;frequency-domain analysis;measurement matrix design;compressed sensing;time delay estimation;measurement kernels;frequency-domain representation;delay estimation accuracy;Receivers;Kernel;Estimation;Synchronization;Delay estimation;Correlation;Compressive sensing;synchronization;delay estimation;measurement matrix design},
  doi = {10.1109/EUSIPCO.2016.7760290},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255838.pdf},
}
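The frequency-domain measurement model above can be sketched as follows: a delay τ multiplies the known spectrum by exp(-j2πfτ), a compressive kernel Φ projects it onto a few measurements, and the delay is recovered by correlating against a grid of hypotheses. The random Gaussian Φ below is a baseline, whereas the paper optimizes the kernels.

# Minimal sketch of compressive delay estimation in the frequency domain.
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 16                      # Nyquist-rate bins, compressive measurements
f = np.fft.fftfreq(N)               # normalized frequency grid
S = rng.normal(size=N) + 1j * rng.normal(size=N)  # known waveform spectrum
Phi = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(N)

tau_true = 37.25                    # delay in samples (can be fractional)
z = Phi @ (S * np.exp(-2j * np.pi * f * tau_true))

# Grid search: correlate the measurements against delay hypotheses.
taus = np.arange(0, 128, 0.25)
A = Phi @ (S[:, None] * np.exp(-2j * np.pi * np.outer(f, taus)))
print(taus[np.argmax(np.abs(z.conj() @ A))])  # expect 37.25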
@InProceedings{7760291,
  author = {D. Ampeliotis and C. Mavrokefalidis and K. Berberidis and S. Theodoridis},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Adapt-Align-Combine for diffusion-based distributed dictionary learning},
  year = {2016},
  pages = {463-467},
  abstract = {Diffusion-based distributed dictionary learning methods are studied in this work. We consider the classical mixed l2-l1 cost function, which employs an l2 representation error term and an l1 sparsity-promoting regularizer. First, we observe that this cost function suffers from an inherent permutation ambiguity. This ambiguity may significantly degrade the performance of diffusion-based schemes, since the involved combination step may combine different atoms even when the same atoms exist in all dictionaries. Thus, we propose to align the dictionaries prior to the combination step. Furthermore, we define a new problem, which we call the node-specific distributed dictionary learning problem. The proposed Adapt-Align-Combine algorithm enjoys an increased convergence rate compared with a scheme that does not align the dictionaries prior to the combination. Simulation results support our findings.},
  keywords = {convergence;signal processing;signal representation;diffusion-based distributed dictionary learning method;classical mixed l2-l1 cost function;l2 representation error;l1 sparsity;cost function;inherent permutation ambiguity;adapt-align-combine algorithm;convergence rate;Dictionaries;Cost function;Europe;Signal processing algorithms;Sparse matrices;Distributed databases},
  doi = {10.1109/EUSIPCO.2016.7760291},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255926.pdf},
}
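The alignment step can be sketched as an assignment problem on the atom coherence matrix: match each atom of a local dictionary to the reference dictionary, resolving permutation and sign ambiguities before combining. The Hungarian matching below is a natural choice for this, not necessarily the paper's exact procedure.

# Minimal sketch: align dictionary D2 to reference D1 before averaging,
# resolving the permutation (and sign) ambiguity with the Hungarian
# algorithm applied to the atom coherence matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_dictionary(D1, D2):
    C = D1.T @ D2                                 # coherence of unit-norm atom pairs
    row, col = linear_sum_assignment(-np.abs(C))  # maximize total |coherence|
    signs = np.sign(C[row, col])                  # fix per-atom sign flips
    return D2[:, col] * signs

rng = np.random.default_rng(0)
D1 = rng.normal(size=(20, 8)); D1 /= np.linalg.norm(D1, axis=0)
D2 = D1[:, rng.permutation(8)] * rng.choice([-1, 1], size=8)  # permuted, flipped copy
print(np.allclose(align_dictionary(D1, D2), D1))  # True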
@InProceedings{7760292,
  author = {I. Elleuch and F. Abdelkefi and M. Siala and R. Hamila and N. Al-Dhahir},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On quantized compressed sensing with saturated measurements via convex optimization},
  year = {2016},
  pages = {468-472},
  abstract = {In this paper, we address the problem of sparse signal recovery from multi-bit scalar quantized compressed sensing measurements, where saturation is taken into account. We propose a convex optimization approach in which saturation errors are jointly estimated with the sparse signal to be recovered. In the proposed approach, saturated measurements, even though over-identified, are treated as outliers, and the associated errors are handled as non-negative sparse corruptions with partial support information. We highlight the theoretical recovery guarantee of the proposed approach and demonstrate, via simulation results, its reliability in cancelling out the effect of the outlying saturated measurements.},
  keywords = {compressed sensing;convex programming;quantisation (signal);partial support information;nonnegative sparse corruptions;saturation errors;convex optimization approach;multibit scalar quantized compressed sensing measurements;sparse signal recovery problem;saturated measurements;Convex functions;Noise measurement;Quantization (signal);Robustness;Europe;Compressed sensing;Multi-Bit Quantized Compressed Sensing;Saturation;Sparse Corruptions;Sign Constraint;Convex Optimization},
  doi = {10.1109/EUSIPCO.2016.7760292},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255933.pdf},
}
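A minimal sketch of the joint estimation idea, using cvxpy as a generic convex solver: saturated measurements carry nonnegative errors with known support and sign, while the sparse signal is recovered by l1 minimization. The saturation level and exact constraint set below are illustrative assumptions, not the paper's formulation (quantization noise on unsaturated entries is ignored here).

# Minimal sketch: joint recovery of a sparse signal and nonnegative
# saturation errors from clipped compressive measurements.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 64, 32, 4
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = 3 * rng.normal(size=k)
T = 1.0                                       # quantizer saturation level
y = np.clip(A @ x0, -T, T)                    # saturated (clipped) measurements
sat = (np.abs(y) >= T).astype(float)          # known saturated support

x = cp.Variable(n)
e = cp.Variable(m)                            # nonnegative saturation errors
cons = [A @ x == y + cp.multiply(np.sign(y) * sat, e),
        e >= 0,
        cp.multiply(1 - sat, e) == 0]         # errors live on saturated entries only
cp.Problem(cp.Minimize(cp.norm1(x)), cons).solve()
print(np.linalg.norm(x.value - x0))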
@InProceedings{7760293,
  author = {I. Elleuch and F. Abdelkefi and M. Siala and R. Hamila and N. Al-Dhahir},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Quasi-sparsest solutions for quantized compressed sensing by graduated-non-convexity based reweighted ℓ1 minimization},
  year = {2016},
  pages = {473-477},
  abstract = {In this paper, we address the problem of sparse signal recovery from scalar quantized compressed sensing measurements via optimization. To compensate for compression losses due to dimensionality reduction and quantization, we consider a cost function that is more sparsity-inducing than the commonly used ℓ1-norm. In addition, we enforce a quantization consistency constraint that naturally handles the saturation issue. We investigate the potential of the recent Graduated-Non-Convexity based reweighted ℓ1-norm minimization for sparse recovery over polyhedral sets. We demonstrate, via simulations, the robustness of the proposed approach to saturation and its significant performance gain in terms of reconstruction accuracy and support recovery capability.},
  keywords = {compressed sensing;concave programming;data compression;minimisation;quantisation (signal);set theory;signal reconstruction;quasisparsest solution;graduated-nonconvexity based reweighted l1 minimization;sparse signal recovery problem;scalar quantized compressed sensing measurement;optimization;compression loss compensation;dimensionality reduction;cost function;quantization consistency constraint;polyhedral set;reconstruction accuracy;Minimization;Quantization (signal);Cost function;Europe;Compressed sensing;Quantized Compressed Sensing;Concave Approximation;Graduated-Non-Convexity;Reweighted ℓ1;Support Recovery},
  doi = {10.1109/EUSIPCO.2016.7760293},
  issn = {2076-1465},
  month = {Aug},
}
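The reweighting principle can be sketched with a plain Candès-Wakin-style loop: solve a weighted ℓ1 problem, then set w_i = 1/(|x_i| + ε) and repeat, so that large coefficients are penalized less on the next pass. The sketch below uses a weighted ISTA inner solver and omits the paper's graduated-non-convexity schedule and quantization-consistency constraints.

# Minimal sketch of reweighted l1 minimization with a weighted-ISTA inner
# solver; the lambda, epsilon and iteration counts are illustrative choices.
import numpy as np

def ista_weighted(A, y, w, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)  # weighted soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 120)) / np.sqrt(40)
x0 = np.zeros(120); x0[rng.choice(120, 6, replace=False)] = rng.normal(size=6)
y = A @ x0
w = np.ones(120)
for _ in range(4):                      # outer reweighting loop
    x = ista_weighted(A, y, w, lam=1e-3)
    w = 1.0 / (np.abs(x) + 1e-2)
print(np.linalg.norm(x - x0))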
@InProceedings{7760294,
  author = {L. Gyongyosi},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Diversity extraction for multicarrier Continuous-Variable Quantum Key Distribution},
  year = {2016},
  pages = {478-482},
  abstract = {We introduce a diversity extraction scheme for multicarrier continuous-variable (CV) quantum key distribution (QKD). The diversity extraction utilizes the resources that are injected into the transmission by the additional degrees of freedom of the multicarrier modulation. The multicarrier scheme granulates the information into Gaussian subcarrier CVs and divides the physical link into several Gaussian sub-channels for the transmission. We prove that the exploitable extra degree of freedom in a multicarrier CVQKD scenario significantly extends the possibilities of single-carrier CVQKD. The diversity extraction allows the parties to achieve lower error probabilities by utilizing those extra resources of a multicarrier transmission that are not available in a single-carrier CVQKD setting. The additional resources of multicarrier CVQKD enable significant performance improvements that are particularly crucial in an experimental scenario.},
  keywords = {diversity reception;error statistics;modulation;quantum cryptography;diversity extraction;multicarrier continuous-variable quantum key distribution;multicarrier modulation;Gaussian subcarrier CV;error probabilities;Modulation;Europe;Error probability;Quantum mechanics;Protocols;Noise measurement;quantum cryptography;quantum key distribution;continuous-variables;quantum Shannon theory},
  doi = {10.1109/EUSIPCO.2016.7760294},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255869.pdf},
}
@InProceedings{7760295,
  author = {S. Bahrani and M. Razavi and J. A. Salehi},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Optimal wavelength allocation in hybrid quantum-classical networks},
  year = {2016},
  pages = {483-487},
  abstract = {An efficient method for optimal allocation of wavelengths in a hybrid dense-wavelength-division-multiplexing system, carrying both quantum and classical data, is proposed. The transmission of quantum bits alongside intense classical signals on the same fiber faces major challenges arising from the background noise generated by classical channels. Raman scattering, in particular, is shown to have detrimental effects on the performance of quantum key distribution systems. Here, by using a nearly optimal wavelength allocation technique, we minimize the Raman-induced background noise on quantum channels, and hence maximize the achievable secret key generation rate for quantum channels. It turns out that the conventional solution of splitting the spectrum into only two bands, one for quantum and one for classical channels, is suboptimal. We show that, in our optimal arrangement, we might need several quantum and classical bands interspersed with each other.},
  keywords = {quantum cryptography;Raman spectra;wavelength assignment;wavelength division multiplexing;optimal wavelength allocation technique;hybrid quantum-classical network;hybrid dense-wavelength-division-multiplexing system;quantum bit transmission;background noise;Raman scattering;quantum key distribution system;quantum channel;secret key generation rate;Photonics;Noise measurement;Wavelength division multiplexing;Wavelength assignment;Resource management;Receivers;Optimization},
  doi = {10.1109/EUSIPCO.2016.7760295},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256324.pdf},
}
@InProceedings{7760296,
  author = {L. Bacsardi and Z. Kis and S. Imre},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Second generation QKD system over commercial fibers},
  year = {2016},
  pages = {488-492},
  abstract = {The security of communications can be ensured using cryptographic protocols. While asymmetric protocols can be cracked using quantum computers, symmetric protocols withstand quantum attacks. However, the keys need to be exchanged in a secure way. A method for this is offered by quantum key distribution (QKD) protocols. A QKD system should operate close to the fundamental quantum noise level. A possible attack is to steal some photons during the communication, but every change will result in some excess noise. This is why it is important to know the base noise level of the system, and every possible solution needs to be considered to reduce the noise of the system. To foster research in this field, we started to develop a second generation QKD system over a 16 km long single-mode, ordinary telecommunications fiber, and focused on noise reduction.},
  keywords = {cryptographic protocols;quantum cryptography;quantum noise;noise reduction;ordinary telecommunications fiber;base noise level;quantum noise level;QKD protocols;quantum key distribution protocols;quantum computers;asymmetrical protocols;cryptography protocols;communication security;commercial fibers;second generation QKD system;distance 16 km;Protocols;Photonics;Optical fiber communication;Optical interferometry;Quantum mechanics;Optical fiber polarization;CV-QKD;noise reduction;secure communication},
  doi = {10.1109/EUSIPCO.2016.7760296},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256337.pdf},
}
@InProceedings{7760297,
  author = {Y. Xie and J. Li and R. Malaney and J. Yuan},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Improved quantum LDPC decoding strategies for the misidentified quantum depolarization channel},
  year = {2016},
  pages = {493-497},
  abstract = {This work points out the importance of the channel mismatch effect in degrading the performance of deployed quantum LDPC codes. We help remedy this situation by proposing new quantum LDPC decoding strategies that can reduce the performance degradation by as much as 50%. Our new strategies for the quantum LDPC decoder are based on previous insights from classical LDPC decoders in mismatched channels, where an asymmetry in performance is known as a function of the estimated channel noise. We show how similar asymmetries carry over to the quantum depolarizing channel, and how an estimate of the depolarization flip parameter weighted towards larger values leads to significant performance improvement.},
  keywords = {channel coding;parity check codes;quantum LDPC decoding;misidentified quantum depolarization channel;channel mismatch effect;channel noise;depolarization flip parameter;Decoding;Channel estimation;Noise level;Silicon;Error correction codes;Iterative decoding},
  doi = {10.1109/EUSIPCO.2016.7760297},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256482.pdf},
}
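The asymmetry the paper exploits can be illustrated on a classical binary symmetric channel: the decoder's input LLRs scale with the assumed flip probability, so underestimating the channel noise produces overconfident messages. The toy sketch below shows only this LLR scaling (classical BSC, not the depolarizing-channel decoder itself).

# Toy illustration: decoder input LLRs under an assumed flip probability
# p_hat. Underestimating p inflates |LLR| (overconfident messages), which is
# typically more damaging to belief propagation than overestimating it.
import numpy as np

def bsc_llr(bit, p):
    # log P(y | x=0) / P(y | x=1) for a received hard bit on a BSC(p)
    return np.log((1 - p) / p) if bit == 0 else np.log(p / (1 - p))

p_true = 0.08
for p_hat in (0.02, 0.08, 0.20):
    print(p_hat, bsc_llr(0, p_hat))   # |LLR| shrinks as p_hat grows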
@InProceedings{7760298,
  author = {A. Mraz and S. Imre and L. Gyongyosi},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Performance evaluation of scalar reconciliation for Continuous-Variable Quantum Key Distribution},
  year = {2016},
  pages = {498-502},
  abstract = {The existing robust technique of scalar reconciliation combined with Continuous-Variable Quantum Key Distribution is investigated in this paper, with the help of simulations, in terms of the symbol error rate. The solution provides efficient logical-layer-based reconciliation for Continuous-Variable Quantum Key Distribution, extracting the binary information from correlated Gaussian variables. The algorithm has been extended with different assumptions on the raw data generation method and the segmentation of the key symbols to be transmitted. The performance of the extended algorithm has been investigated in terms of the symbol error ratio.},
  keywords = {error statistics;performance evaluation;quantum cryptography;performance evaluation;scalar reconciliation;symbol error rate;layer-based reconciliation;continuous-variable quantum key distribution techniques;correlated Gaussian variables;data generation method;symbol error ratio;Amplitude shift keying;Quantum mechanics;Distributed databases;Gold;Europe;Continuous-Variable Quantum Key Distribution;reconciliation;Gaussian variables;quantum cryptography;simulation;symbol error rate},
  doi = {10.1109/EUSIPCO.2016.7760298},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252345.pdf},
}
@InProceedings{7760299,
  author = {Y. Deville and A. Deville},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Fast disentanglement-based blind quantum source separation and process tomography using a feedforward quantum-classical adapting structure},
  year = {2016},
  pages = {503-507},
  abstract = {Our recent investigations of blind quantum source separation and process tomography methods for Heisenberg-coupled quantum bits (qubits) were focused on introducing a new separation principle, based on output disentanglement. We here extend them by proposing a more advanced implementation of their cost function and optimization algorithm. This leads us to move from a feedback to a feedforward adapting block, which avoids potential issues related to feedback in quantum circuits. The number of quantum source state preparations required to blindly adapt the separating system is thus strongly decreased (roughly from $10^7$ to $10^4$), yielding much faster adaptation.},
  keywords = {blind source separation;feedback;feedforward;optimisation;fast disentanglement-based separation;feedforward quantum-classical adapting structure;blind quantum source separation;process tomography methods;Heisenberg-coupled quantum bits;cost function;optimization algorithm;feedforward adapting block;quantum source state preparations;Cost function;Couplings;Source separation;Europe;Tomography;Signal processing algorithms},
  doi = {10.1109/EUSIPCO.2016.7760299},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255668.pdf},
}
@InProceedings{7760300,
  author = {K. Tout and R. Cogranne and F. Retraint},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Fully automatic detection of anomalies on wheels surface using an adaptive accurate model and hypothesis testing theory},
  year = {2016},
  pages = {508-512},
  abstract = {This paper studies the detection of anomalies, or defects, on the surface of wheels. The wheel surface is inspected using an imaging system placed over the conveyor belt. Due to the nature of the wheels, the different elements are analyzed separately. Because many different types of wheels can be manufactured, it is proposed to detect any anomaly using a general and original adaptive linear parametric model. The adaptivity of the proposed model allows us to describe the inspected wheel surface accurately. In addition, the use of a linear parametric model allows the application of hypothesis testing theory to design a test whose statistical performance is analytically known. Numerical results show the accuracy and the relevance of the proposed methodology.},
  keywords = {conveyors;imaging;statistical analysis;testing;wheels;automatic anomaly detection;wheels surface;adaptive accurate model;hypothesis testing theory;imaging system;conveyor belt;wheels manufacturing;adaptive linear parametric model;test designing;statistical performances;Wheels;Adaptation models;Computational modeling;Probability;Europe;Signal processing;Testing;Anomaly detection;Nondestructive testing;Adaptive image model;Hypothesis testing theory},
  doi = {10.1109/EUSIPCO.2016.7760300},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570254960.pdf},
}
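The detection principle can be sketched as follows: fit a linear background model to each inspected region and compare the residual energy with a chi-square threshold fixed by the target false-alarm probability. The polynomial basis, patch size and noise level below are illustrative assumptions, not the paper's adaptive model.

# Minimal sketch of residual-based anomaly detection with a linear
# parametric background model and an analytic chi-square threshold.
import numpy as np
from scipy.stats import chi2

def detect_anomaly(patch, H, sigma, pfa=1e-3):
    # Project out the background model span(H); under H0 the normalized
    # residual energy follows a chi-square law with (N - rank) dof.
    y = patch.ravel()
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    stat = np.sum((y - H @ coef) ** 2) / sigma ** 2
    thresh = chi2.ppf(1 - pfa, df=y.size - H.shape[1])
    return stat > thresh

# Background basis: 2-D polynomials up to degree 2 on an 8x8 patch.
u, v = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))
H = np.stack([np.ones(64), u.ravel(), v.ravel(), (u * u).ravel(),
              (u * v).ravel(), (v * v).ravel()], axis=1)

rng = np.random.default_rng(0)
clean = 0.5 * u + 0.2 * v * v + rng.normal(0, 0.05, (8, 8))
defect = clean.copy(); defect[3:5, 3:5] += 0.5
print(detect_anomaly(clean, H, 0.05), detect_anomaly(defect, H, 0.05))  # False True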
@InProceedings{7760301,
  author = {Y. Altmann and A. Maccarone and A. McCarthy and G. Buller and S. McLaughlin},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Joint spectral clustering and range estimation for 3D scene reconstruction using multispectral lidar waveforms},
  year = {2016},
  pages = {513-517},
  abstract = {This paper presents a new Bayesian clustering method to analyse remote scenes sensed via multispectral Lidar measurements. To a first approximation, each Lidar waveform mainly consists of the temporal signature of the observed target, which depends on the wavelength of the laser source considered and which is corrupted by Poisson noise. By sensing the scene at several wavelengths, we expect a more accurate target range estimation and a more efficient spectral analysis of the scene. Thanks to its spectral classification capability, the proposed hierarchical Bayesian model, coupled with an efficient Markov chain Monte Carlo algorithm, allows the estimation of depth images together with reflectivity-based scene segmentation images. The proposed methodology is illustrated via experiments conducted with real multispectral Lidar data.},
  keywords = {Bayes methods;image classification;image reconstruction;image segmentation;Markov processes;Monte Carlo methods;optical radar;pattern clustering;radar imaging;spectral analysis;joint spectral clustering and range estimation;3D scene reconstruction;multispectral LIDAR waveform measurement;Bayesian clustering method;temporal signature;laser source wavelength;Poisson noise;target range estimation;spectral analysis;spectral classification capability;hierarchical Bayesian model;Markov chain Monte Carlo algorithm;image depth estimation;reflectivity-based scene image segmentation;Bayes methods;Photonics;Laser radar;Estimation;Surface emitting lasers;Three-dimensional displays;Markov processes;Multispectral Lidar;Depth imaging;Bayesian estimation;Markov Chain Monte Carlo;Spectral clustering},
  doi = {10.1109/EUSIPCO.2016.7760301},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255162.pdf},
}
@InProceedings{7760302,
  author = {J. Nicolas and F. Tupin},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Statistical models for SAR amplitude data: A unified vision through Mellin transform and Meijer functions},
  year = {2016},
  pages = {518-522},
  abstract = {In past years, many distributions have been proposed to model SAR images. In previous works, it has been shown that the Mellin transform is a powerful tool to analyse products of random variables: when speckle is modelled by a Gamma distribution and texture can be modelled by a “classical” distribution, the Mellin convolution provides analytical expressions of the SAR image distribution so that parameter estimation can be performed [13], [11]. In this paper we focus on the product of probability density functions, and more specifically on the Inverse Generalized Gaussian distribution [10]. This approach has been validated in SAR image processing by Frery et al. [7]. We show that the Mellin statistics framework can provide some insight into this family of probability density functions, and can clearly link the Mellin convolution pdf family and the product pdf family. Finally, it is shown that the Meijer functions give a unified framework for many SAR distributions, so that quantitative comparisons between pdfs can be achieved.},
  keywords = {convolution;Gaussian distribution;inverse problems;radar imaging;synthetic aperture radar;transforms;SAR amplitude data;statistical model;unified vision;Mellin transform;Meijer function;random variable product analysis;Gamma distribution;Mellin convolution;SAR image texture;parameter estimation;probability density function;inverse generalized Gaussian distribution;Mellin statistics framework;product pdf family;Convolution;Transforms;Sensors;Probability density function;Erbium;Europe},
  doi = {10.1109/EUSIPCO.2016.7760302},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256048.pdf},
}
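For reference, the standard definitions underlying this entry (textbook material, not specific to the paper): the Mellin transform of a density supported on (0, +∞), and the Mellin convolution, which gives the density of a product of independent positive random variables and becomes an ordinary product in the Mellin domain.

% Standard definitions: Mellin transform and Mellin convolution of densities
% on (0, +infty); for Z = X * Y with X, Y independent and positive, the
% Mellin transform factorizes.
\[
  \mathcal{M}[f](s) \;=\; \int_0^{+\infty} u^{s-1}\, f(u)\, \mathrm{d}u ,
\]
\[
  p_Z(z) \;=\; (p_X \,\hat{\star}\, p_Y)(z)
         \;=\; \int_0^{+\infty} p_X\!\left(\tfrac{z}{u}\right) p_Y(u)\, \frac{\mathrm{d}u}{u},
  \qquad
  \mathcal{M}[p_Z] \;=\; \mathcal{M}[p_X]\,\mathcal{M}[p_Y].
\]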
@InProceedings{7760303,
  author = {J. M. Bioucas-Dias and M. A. T. Figueiredo},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Bayesian image segmentation using hidden fields: Supervised, unsupervised, and semi-supervised formulations},
  year = {2016},
  pages = {523-527},
  abstract = {Segmentation is one of the central problems in image analysis, where the goal is to partition the image domain into regions exhibiting some sort of homogeneity. Most often, the partition is obtained by solving a combinatorial optimization problem, which is, in general, NP-hard. In this paper, we follow an alternative approach, using a Bayesian formulation based on a set of hidden real-valued random fields, which condition the partition. This formulation yields a continuous optimization problem, rather than a combinatorial one. In the supervised case, this problem is convex, and we tackle it with an instance of the alternating direction method of multipliers (ADMM). In the unsupervised and semi-supervised cases, the optimization problem is nonconvex, and we address it using an expectation-maximization (EM) algorithm, where the M-step is implemented via ADMM. The effectiveness and flexibility of the proposed approach is illustrated with experiments on simulated and real data.},
  keywords = {Bayes methods;combinatorial mathematics;concave programming;convex programming;expectation-maximisation algorithm;image segmentation;unsupervised learning;Bayesian image segmentation;hidden field;supervised formulation;semisupervised formulation;unsupervised formulation;image analysis;image domain partition;combinatorial optimization problem;NP-hard;Bayesian formulation;hidden real-valued random field set;continuous optimization problem;convex problem;alternating direction method of multiplier;nonconvex optimization problem;expectation-maximization algorithm;EM algorithm;ADMM;Image segmentation;Optimization;Bayes methods;Europe;Signal processing;Computational modeling;Probability distribution;Image segmentation;hidden fields;expectation maximization;alternating direction method of multipliers (ADMM)},
  doi = {10.1109/EUSIPCO.2016.7760303},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256362.pdf},
}
@InProceedings{7760304,
  author = {M. Pereyra and S. McLaughlin},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Comparing Bayesian models in the absence of ground truth},
  year = {2016},
  pages = {528-532},
  abstract = {Modern signal processing methods rely strongly on Bayesian statistical models to solve challenging problems. This paper considers the objective comparison of two alternative Bayesian models, for scenarios with no ground truth available, and with a focus on model selection. Existing model selection approaches are generally difficult to apply to signal processing because they are unsuitable for models with priors that are improper or vaguely informative, and because of challenges related to high dimensionality. This paper presents a general methodology to perform model selection for models that are high-dimensional and that involve proper, improper, or vague priors. The approach is based on an additive mixture meta-model representation that encompasses both models and which concentrates on the model that fits the data best, and relies on proximal Markov chain Monte Carlo algorithms to perform high-dimensional computations efficiently. The methodology is demonstrated on a series of experiments related to image resolution enhancement with a total-variation prior.},
  keywords = {Bayes methods;Markov processes;Monte Carlo methods;signal processing;statistical analysis;signal processing method;Bayesian statistical models;model selection approach;additive mixture meta-model representation;proximal Markov chain Monte Carlo algorithm;Bayes methods;Computational modeling;Signal processing;Estimation;Mathematical model;Monte Carlo methods;Europe;Statistical signal processing;Bayesian inference;model selection;Markov chain Monte Carlo;computational imaging},
  doi = {10.1109/EUSIPCO.2016.7760304},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256282.pdf},
}
@InProceedings{7760305,
  author = {Q. Hoarau and G. Ginolhac and A. M. Atto and J. M. Nicolas and J. P. Ovarlez},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Robust adaptive detection of buried pipes using GPR},
  year = {2016},
  pages = {533-537},
  abstract = {The Ground Penetrating Radar (GPR) consists in an electromagnetic signal which is transmitted at different positions through the ground in order to obtain an image of the subsoil. In particular, the GPR is used to detect buried objects like pipes. Their detection and localisation are intricate for three main reasons. First, the noise is important in the resulting image due to the presence of several rocks and/or layers. Second, the wave speed and the response of the pipe depend on the characteristics of the different layers. Finally, the signal attenuation could be important because of the depth of pipes. In this paper, we propose to derive an adaptive detector where the steering vector is parametrised by the wave speed in the ground and the noise follows a Spherically Invariant Random Vector (SIRV) distribution in order to obtain a robust detector. To estimate the covariance matrix, we propose to use robust maximum likelihood-type estimators called M-estimators. To handle the large size of data, we consider regularised versions of such M-estimators. Simulations will allow to estimate the relation Probability of False Alarm (PFA)-Threshold. Application on real datasets will show the relevancy of the proposed analysis for detecting buried objects like pipes.},
  keywords = {buried object detection;covariance matrices;electromagnetic wave attenuation;ground penetrating radar;maximum likelihood estimation;statistical distributions;vectors;robust adaptive buried pipe detection;GPR;ground penetrating radar;electromagnetic signal;signal attenuation;steering vector;wave speed;spherically invariant random vector distribution;SIRV distribution;covariance matrix estimation;robust maximum likelihood-type estimators;M-estimators;relation probability-of-false alarm-threshold;relation PFA-threshold;Covariance matrices;Ground penetrating radar;Detectors;Buried object detection;Robustness;Synthetic aperture radar},
  doi = {10.1109/EUSIPCO.2016.7760305},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255828.pdf},
}
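Reader's note: a minimal sketch of the regularized Tyler-type M-estimator fixed point that this family of robust detectors builds on, assuming samples arranged row-wise. This is the generic textbook iteration, not the paper's exact detector.

import numpy as np

def regularized_tyler(X, rho, n_iter=50):
    """Regularized Tyler-type M-estimator of scatter (generic sketch).

    X   : (N, p) real or complex samples, one observation per row.
    rho : shrinkage factor in (0, 1]; rho -> 0 recovers the plain estimator.
    Fixed point:  S <- (1-rho)*(p/N) * sum_i x_i x_i^H / (x_i^H S^{-1} x_i) + rho*I,
    followed by trace normalization.
    """
    N, p = X.shape
    S = np.eye(p, dtype=X.dtype)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        # Quadratic forms x_i^H S^{-1} x_i (real and positive for PD S).
        q = np.real(np.einsum('ij,jk,ik->i', X.conj(), Sinv, X))
        S_new = (1 - rho) * (p / N) * (X.T * (1.0 / q)) @ X.conj() + rho * np.eye(p)
        S = p * S_new / np.trace(S_new).real   # normalize so trace(S) = p
    return S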
@InProceedings{7760306,
  author = {A. K. Tanc and E. M. Eksioglu},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {MRI reconstruction with analysis sparse regularization under impulsive noise},
  year = {2016},
  pages = {538-541},
  abstract = {We will be considering analysis sparsity based regularization for Magnetic Resonance Imaging reconstruction. The analysis sparsity regularization is based on the recently introduced Transform Learning framework, which has reduced complexity regarding other sparse regularization methods. We will formulate a variational reconstruction problem which utilizes the analysis sparsity regularization together with an ℓ1 norm based data fidelity term. The use of the non-smooth data fidelity term results in robustness against outliers and impulsive noise in the observed data. The resulting algorithm with the ℓ1 observation fidelity showcases enhanced performance under impulsive observation noise when compared to a similar algorithm utilizing the conventional quadratic error term.},
  keywords = {image reconstruction;impulse noise;magnetic resonance imaging;transforms;magnetic resonance imaging reconstruction;MRI reconstruction;analysis sparse regularization;transform learning framework;variational reconstruction problem;ℓ1 norm based data fidelity term;nonsmooth data fidelity term;impulsive observation noise;Image reconstruction;Signal processing algorithms;Magnetic resonance imaging;Transforms;Algorithm design and analysis;Analytical models;Reconstruction algorithms;Magnetic resonance;image reconstruction;compressed sensing;analysis sparsity;impulsive noise},
  doi = {10.1109/EUSIPCO.2016.7760306},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255530.pdf},
}
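Reader's note: the robust variational problem this abstract describes has the generic form below. The symbols A (undersampled measurement operator), W (learned sparsifying transform) and y (k-space data) are naming assumptions, not the paper's notation.

% Generic robust analysis-sparse reconstruction (a sketch):
\[
  \hat{x} \;=\; \arg\min_{x} \; \| A x - y \|_{1} \;+\; \lambda \, \| W x \|_{1},
\]
% where the \(\ell_1\) data-fidelity term replaces the usual quadratic term
% \(\|Ax - y\|_2^2\) to gain robustness against impulsive observation noise.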
@InProceedings{7760307,
  author = {S. Guérit and L. Jacques and J. A. Lee},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Image deconvolution by local order preservation of pixels values},
  year = {2016},
  pages = {542-546},
  abstract = {Positron emission tomography is more and more used in radiation oncology, since it conveys useful functional information about cancerous lesions. Its rather low spatial resolution, however, prevents accurate tumor delineation and heterogeneity assessment. Post-reconstruction deconvolution with the measured point-spread function can address this issue, provided it does not introduce undesired artifacts. These usually result from inappropriate regularization, which is either absent or making too strong assumptions about the structure of the signal. This paper proposes a deconvolution method that is based on inverse problem theory and involves a new regularization term that preserves local pixel value order relationships. Such regularization entails relatively mild constraints that are directly inferred from the observed data. This paper investigates the theoretical properties of the proposed regularization and describes its numerical implementation with a primal-dual algorithm. Preliminary experiments with synthetic images are presented to compare quantitatively and qualitatively the proposed method to other regularization schemes, like TV and TGV.},
  keywords = {deconvolution;image reconstruction;inverse problems;optical transfer function;positron emission tomography;synthetic images;primal-dual algorithm;mild constraints;inverse problem theory;signal structure;inappropriate regularization;point spread function;post-reconstruction deconvolution;heterogeneity assessment;tumor delineation;spatial resolution;cancerous lesions;functional information;radiation oncology;positron emission tomography;local pixel value;local order preservation;image deconvolution;Positron emission tomography;Deconvolution;Kernel;Spatial resolution;Image restoration;Europe},
  doi = {10.1109/EUSIPCO.2016.7760307},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255872.pdf},
}
@InProceedings{7760308,
  author = {H. N. Bharath and N. Sauwen and D. M. Sima and U. Himmelreich and L. {De Lathauwer} and S. {Van Huffel}},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Canonical polyadic decomposition for tissue type differentiation using multi-parametric MRI in high-grade gliomas},
  year = {2016},
  pages = {547-551},
  abstract = {In diagnosis and treatment planning of brain tumors, characterisation and localization of tissue plays an important role. Blind source separation techniques are generally employed to extract the tissue-specific profiles and its corresponding distribution from the multi-parametric MRI. A 3-dimensional tensor is constructed from in-vivo multi-parametric MRI of high grade glioma patients. Constrained canonical polyadic decomposition (CPD) with common factor in mode-1 and mode-2 and l1 regularization on mode-3 is applied on the 3-dimensional multi-parametric tensor to characterize various tissue types. An initial in-vivo study shows that CPD has slightly better performance in identifying active tumor and the tumor core region in high-grade glioma patients compared to hierarchical non-negative matrix factorization.},
  keywords = {biomedical MRI;blind source separation;medical image processing;patient treatment;canonical polyadic decomposition;tissue type differentiation;multiparametric MRI;brain tumors;blind source separation technique;CPD;3-dimensional multiparametric tensor;active tumor;tumor core region;high-grade glioma patients;Tensile stress;Magnetic resonance imaging;Tumors;Signal processing algorithms;Correlation;Signal processing},
  doi = {10.1109/EUSIPCO.2016.7760308},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256114.pdf},
}
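Reader's note: a minimal sketch of plain, unconstrained CP/PARAFAC via alternating least squares, the baseline decomposition underlying this abstract. The paper's constrained variant additionally couples factors across modes and adds l1 regularization on mode 3; those extensions are omitted here.

import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor (rows index the chosen mode)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(P, Q):
    """Column-wise Khatri-Rao product: column r is kron(P[:, r], Q[:, r])."""
    return (P[:, None, :] * Q[None, :, :]).reshape(P.shape[0] * Q.shape[0], -1)

def cp_als(T, rank, n_iter=100, seed=0):
    """Unconstrained CP decomposition by alternating least squares (sketch)."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((dim, rank)) for dim in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])
            # Gram of the Khatri-Rao product via the Hadamard of factor Grams.
            gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
            factors[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(gram)
    return factors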
@InProceedings{7760309,
  author = {A. Besson and R. E. Carrillo and M. Zhang and D. Friboulet and O. Bernard and Y. Wiaux and J. Thiran},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Sparse regularization methods in ultrafast ultrasound imaging},
  year = {2016},
  pages = {552-556},
  abstract = {Ultrafast ultrasound (US) imaging based on plane wave (PW) insonification is a widely used modality nowadays. Two main types of approaches have been proposed for image reconstruction either based on classical delay-and-sum (DAS) or on Fourier reconstruction. Using a single PW, these methods lead to a lower image quality than DAS with multi-focused beams. In this paper we review recent beamforming approaches based on sparse regularization methods. The imaging problem, either spatial-based (DAS) or Fourier-based, is formulated as a linear inverse problem and convex optimization algorithms coupled with sparsity priors are used to solve the ill-posed problem. We describe two applications of the framework namely the sparse inversion of the beamforming problem and the compressed beamforming in which the framework is combined with compressed sensing. Based on numerical simulations and experimental studies, we show the advantage of the proposed methods in terms of image quality compared to classical methods.},
  keywords = {array signal processing;compressed sensing;Fourier transforms;image reconstruction;image resolution;inverse problems;linear programming;numerical analysis;ultrasonic imaging;sparse regularization methods;ultrafast ultrasound imaging;plane wave insonification;image reconstruction;classical delay-and-sum;DAS;Fourier reconstruction;image quality;multifocused beams;linear inverse problem;convex optimization algorithms;compressed beamforming;compressed sensing;Array signal processing;Mathematical model;Imaging;Image quality;Image reconstruction;Interpolation;Inverse problems;Ultrasound;plane wave imaging;sparsity;compressed sensing},
  doi = {10.1109/EUSIPCO.2016.7760309},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256152.pdf},
}
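Reader's note: the sparse linear inverse problem described in this abstract is commonly solved with proximal-gradient iterations; below is a minimal ISTA sketch for min_x 0.5||y - Hx||^2 + lam||x||_1. H stands in for the spatial or Fourier measurement model and x for the image in a domain assumed sparse; the paper's exact operators and solvers may differ.

import numpy as np

def ista(H, y, lam, n_iter=200):
    """ISTA for min_x 0.5*||y - Hx||_2^2 + lam*||x||_1 (generic sketch)."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)           # gradient of the quadratic term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x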
@InProceedings{7760310,
  author = {J. Kim and A. Basarab and P. R. Hill and D. R. Bull and D. Kouamé and A. Achim},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Ultrasound image reconstruction from compressed measurements using approximate message passing},
  year = {2016},
  pages = {557-561},
  abstract = {In this paper we propose a novel framework for compressive sampling reconstruction of biomedical ultrasonic images based on the Approximate Message Passing (AMP) algorithm. AMP is an iterative algorithm that performs image reconstruction through image denoising within a compressive sampling framework. In this work, our aim is to evaluate the merits of several combinations of a denoiser and a transform domain, which are the two main factors that determine the recovery performance. In particular, we investigate reconstruction performance in the spatial, DCT, and wavelet domains. We compare the results with existing reconstruction algorithms already used in ultrasound imaging and quantify the performance improvement.},
  keywords = {biomedical ultrasonics;compressed sensing;discrete cosine transforms;image denoising;image reconstruction;iterative methods;medical image processing;message passing;wavelet transforms;compressive sampling reconstruction;biomedical ultrasonic images;approximate message passing algorithm;AMP algorithm;iterative algorithm;image reconstruction;image denoising;transform domain;spatial domains;DCT domains;wavelet domains;Image reconstruction;Discrete cosine transforms;Signal processing algorithms;Ultrasonic imaging;Image coding;Wavelet domain;ultrasonic images;Compressive Sampling;nonconvex optimization;IRLS;AMP;image denoising},
  doi = {10.1109/EUSIPCO.2016.7760310},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256168.pdf},
}
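Reader's note: a minimal sketch of the textbook AMP iteration with a soft-thresholding denoiser, to make the "denoising inside compressive sampling" loop in this abstract concrete. The paper swaps in different denoisers and transform domains; the threshold rule below is one common heuristic, an assumption rather than the paper's choice.

import numpy as np

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(A, y, theta=1.5, n_iter=30):
    """AMP with a soft-threshold denoiser (generic sketch).

    A : (m, n) sensing matrix, assumed roughly i.i.d. with normalized
        columns -- the regime where the Onsager correction is valid.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        tau = theta * np.linalg.norm(z) / np.sqrt(m)  # threshold from residual energy
        x_new = soft(x + A.T @ z, tau)                # denoise the pseudo-data
        # Onsager correction: residual times the average denoiser derivative,
        # i.e. (n/m) * (#nonzeros / n) = #nonzeros / m.
        z = y - A @ x_new + (z / m) * np.count_nonzero(x_new)
        x = x_new
    return x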
@InProceedings{7760311,
  author = {M. Panić and J. Aelterman and V. Crnojević and A. Pižurica},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Compressed sensing in MRI with a Markov random field prior for spatial clustering of subband coefficients},
  year = {2016},
  pages = {562-566},
  abstract = {Recent work in compressed sensing of magnetic resonance images (CS-MRI) concentrates on encoding structured sparsity in acquisition or in the reconstruction stages. Subband coefficients of typical images obey a certain structure, which can be viewed in terms of fixed groups (like wavelet trees) or statistically (certain configurations are more likely than others). Approaches using wavelet tree-sparsity have already demonstrated excellent performance in MRI. However, the use of statistical models for spatial clustering of the subband coefficients has not been studied well in CS-MRI yet, although the potentials of such an approach have been indicated. In this paper, we design a practical reconstruction algorithm as a variant of the proximal splitting methods, making use of a Markov Random Field prior model for spatial clustering of subband coefficients. The results for different undersampling patterns demonstrate an improved reconstruction performance compared to both standard CS-MRI methods and methods based on wavelet tree sparsity.},
  keywords = {biomedical MRI;compressed sensing;image reconstruction;Markov processes;trees (mathematics);wavelet transforms;compressed sensing;magnetic resonance images;CS-MRI methods;Markov random field;spatial clustering;subband coefficients;wavelet tree-sparsity;proximal splitting methods;undersampling patterns;Magnetic resonance imaging;Signal processing algorithms;Image reconstruction;Markov random fields;Inference algorithms;Wavelet transforms},
  doi = {10.1109/EUSIPCO.2016.7760311},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256318.pdf},
}
@InProceedings{7760312,
  author = {A. Tarighati and J. Gross and J. Jaldén},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Decentralized detection in energy harvesting wireless sensor networks},
  year = {2016},
  pages = {567-571},
  abstract = {We consider a decentralized hypothesis testing problem in which several peripheral energy harvesting sensors are arranged in parallel. Each sensor makes a noisy observation of a time varying phenomenon, and sends a message about the present hypothesis towards a fusion center at each time instance t. The fusion center, using the aggregate of the received messages during the time instance t, makes a decision about the state of the present hypothesis. We assume that each sensor is an energy harvesting device and is capable of harvesting all the energy it needs to communicate from its environment. Our contribution is to formulate and analyze the decentralized detection problem when the energy harvesting sensors are allowed to form a long term energy usage policy. Our analysis is based on a queuing-theoretic model for the battery. Then, by using numerical simulations, we show how the resulting performance differs from the energy-unconstrained case.},
  keywords = {energy harvesting;queueing theory;wireless sensor networks;energy harvesting wireless sensor networks;decentralized hypothesis testing problem;fusion center;time instance;energy harvesting device;decentralized detection problem;energy usage policy;queuing-theoretic model;Batteries;Energy harvesting;Wireless sensor networks;Sensor phenomena and characterization;Signal processing;Steady-state},
  doi = {10.1109/EUSIPCO.2016.7760312},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255086.pdf},
}
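Reader's note: a toy simulation of the battery-as-queue idea mentioned in this abstract, under simple Bernoulli assumptions that are ours, not the paper's: energy arrivals are the queue arrivals, transmissions are the departures, and lost transmission opportunities occur when the battery is empty.

import numpy as np

def simulate_battery(T=10_000, b_max=10, e_tx=1, p_harvest=0.3, p_send=0.5, seed=0):
    """Toy queuing model of an energy-harvesting sensor's battery (a sketch).

    Returns the fraction of transmission opportunities lost to an empty battery.
    """
    rng = np.random.default_rng(seed)
    b, dropped, attempts = 0, 0, 0
    for _ in range(T):
        b = min(b_max, b + rng.binomial(1, p_harvest))  # harvest (arrival)
        if rng.random() < p_send:                       # long-term usage policy
            attempts += 1
            if b >= e_tx:
                b -= e_tx                               # transmit (departure)
            else:
                dropped += 1                            # energy outage
    return dropped / attempts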
@InProceedings{7760313,
  author = {A. Özçelikkale and T. McKelvey and M. Viberg},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Transmission strategies for remote estimation under energy harvesting constraints},
  year = {2016},
  pages = {572-576},
  abstract = {We consider the remote estimation of a time-correlated field using an energy harvesting (EH) sensor. The sensor observes the unknown field and communicates its observations to a remote fusion center using an amplify-forward strategy. We consider the design of optimal transmission strategies in order to minimize the mean-square error (MSE) at the fusion center. Contrary to traditional approaches, the degree of correlation between the field values constitutes an important aspect of our formulation. We provide the optimal power allocation strategies for a number of illustrative scenarios, including the circularly wide-sense stationary (c.w.s.s.) signals with static correlation coefficient and the sampled low-pass c.w.s.s. signals. Based on these results, we propose low-complexity policies for the general case. Numerical evaluations illustrate the performance of the optimal and the low-complexity policies.},
  keywords = {amplify and forward communication;energy harvesting;mean square error methods;telecommunication power management;energy harvesting sensor;EH sensor;time-correlated field remote estimation;remote fusion center;amplify-forward strategy;optimal transmission strategy;mean-square error minimization;MSE minimization;optimal power allocation strategy;circularly wide-sense stationary signal;sampled low-pass c.w.s.s. signal;static correlation coefficient;Estimation;Resource management;Energy harvesting;Correlation;Batteries;Eigenvalues and eigenfunctions;Europe},
  doi = {10.1109/EUSIPCO.2016.7760313},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256174.pdf},
}
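Reader's note: optimal power allocation problems of this kind typically reduce to water-filling; below is the classical building block as a sketch (the paper derives EH-constrained variants, which this does not capture). The gains g and budget P are illustrative parameters.

import numpy as np

def waterfill(g, P, tol=1e-9):
    """Classical water-filling: maximize sum(log(1 + g_i * p_i)) subject to
    sum(p_i) <= P, p_i >= 0, which gives p_i = max(0, mu - 1/g_i).
    The water level mu is found by bisection.
    """
    g = np.asarray(g, dtype=float)
    lo, hi = 0.0, P + np.max(1.0 / g)      # valid bracket for mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = np.sum(np.maximum(0.0, mu - 1.0 / g))
        if used > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)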
@InProceedings{7760314,
  author = {S. Biswas and A. Shirazinia and S. Dey},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Sensing throughput optimization in cognitive fading multiple access channels with energy harvesting secondary transmitters},
  year = {2016},
  pages = {577-581},
  abstract = {The paper investigates the problem of maximizing the expected achievable sum rate in a fading multiple access cognitive radio network when secondary user (SU) transmitters have energy harvesting capability, and perform cooperative spectrum sensing. We formulate the problem as maximization of throughput of the cognitive multiple access network over a finite time horizon subject to a time averaged interference constraint at the primary user (PU) and almost sure energy causality constraints at the SUs. The problem is a mixed integer nonlinear program with respect to two decision variables, namely, spectrum access decision and spectrum sensing decision, and the continuous variables sensing time and transmission power. In general, this problem is known to be NP hard. For optimization over these two decision variables, we use an exhaustive search policy when the length of the time horizon is small, and a heuristic policy for longer horizons. For given values of the decision variables, the problem simplifies into a joint optimization on SU transmission power and sensing time, which is non-convex in nature. We present an analytic solution for the resulting optimization problem using an alternating convex optimization problem for non-causal channel state information and harvested energy information patterns at the SU base station (SBS) or fusion center (FC) and infinite battery capacity at the SU transmitters. We formulate the problem with causal information and finite battery capacity as a stochastic control problem and solve it using the technique of dynamic programming. Numerical results are presented to illustrate the performance of the various algorithms.},
  keywords = {cognitive radio;concave programming;convex programming;cooperative communication;fading channels;integer programming;multi-access systems;multiuser channels;radio spectrum management;radio transmitters;radiofrequency interference;search problems;signal detection;stochastic programming;cognitive fading multiple access channel;sensing throughput optimization;energy harvesting secondary transmitter;achievable sum rate;fading multiple access cognitive radio network;secondary user transmitter;SU transmitter;cooperative spectrum sensing;cognitive multiple access network throughput maximization;finite time horizon subject;time averaged interference constraint;primary user;PU;mixed integer nonlinear program;spectrum access decision;spectrum sensing decision;optimization;exhaustive search policy;nonconvex optimization;alternating convex optimization problem;noncausal channel state information;SU base station;SBS;fusion center;FC;finite battery capacity;causal information;stochastic control problem;dynamic programming;Sensors;Batteries;Optimization;Energy harvesting;Throughput;Fading channels;Transmitters},
  doi = {10.1109/EUSIPCO.2016.7760314},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256191.pdf},
}
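Reader's note: the causal, finite-battery case described at the end of this abstract is a finite-horizon stochastic control problem; the generic backward recursion it relies on is sketched below. The state and reward names are placeholders, not the paper's formulation (there, the state would include battery level, channel state and sensing decisions).

% Generic finite-horizon dynamic program (a sketch):
\[
  V_{T+1}(s) = 0, \qquad
  V_{t}(s) \;=\; \max_{a \in \mathcal{A}(s)}\;
  \mathbb{E}\!\left[\, r_t(s, a) + V_{t+1}(s') \,\middle|\, s, a \,\right],
  \quad t = T, \dots, 1,
\]
% where s' is the next state under action a and the expectation is taken
% over harvested energy and channel randomness.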
@InProceedings{7760315,
  author = {M. Calvo-Fullana and J. Matamoros and C. Antón-Haro},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Decentralized sparsity-promoting sensor selection in energy harvesting wireless sensor networks},
  year = {2016},
  pages = {582-586},
  abstract = {This paper considers the problem of sensor selection for the estimation of a stochastic source, being the sensor nodes powered by energy harvesting. Therefore, the interest lies in selecting the subset of most informative sensors that transmit their observations to a fusion center (FC). To that end, we propose to minimize the attained distortion at the FC plus a penalization term that promotes sparsity on the power allocation at the sensors. Then, we propose a decentralized algorithm in which the power allocation (and, thus, the selection policy) and distortion minimization problems can be regarded as separated problems. More specifically, the algorithm consists of: (i) a local computation of the power allocation policy, and (ii) a distortion minimization step. Moreover, for the case where sparsity is promoted via the classical ℓ1 norm, we show that the resulting local power allocation policy can be readily computed by means of a waterfilling-like algorithm.},
  keywords = {energy harvesting;resource allocation;sensor fusion;telecommunication power management;wireless sensor networks;wireless sensor networks;energy harvesting;decentralized sparsity-promoting sensor selection;sensor selection problem;stochastic source estimation;informative sensors;fusion center;distortion minimization problems;separated problems;classical ℓ1 norm;local power allocation policy;waterfilling-like algorithm;Resource management;Energy harvesting;Signal processing algorithms;Distortion;Wireless sensor networks;Optimization;Encoding;Sensor selection;energy harvesting;sparsity;wireless sensor networks},
  doi = {10.1109/EUSIPCO.2016.7760315},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256258.pdf},
}
@InProceedings{7760316,
  author = {A. Arafa and O. Kaya and S. Ulukus},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Energy harvesting cooperative multiple access channel with decoding costs},
  year = {2016},
  pages = {587-591},
  abstract = {We consider an energy harvesting cooperative multiple access channel (MAC) with decoding costs. In this setting, users cooperate at the physical layer (data cooperation) in order to increase the achievable rates. Data cooperation comes at the expense of decoding costs: each user spends some amount of its harvested energy to decode the message of the other user, before forwarding both messages to the receiver. The decoding power spent is an increasing convex function of the incoming message rate. We characterize the optimal power scheduling policies that achieve the boundary of the maximum departure region subject to energy causality constraints and decoding costs by using a generalized water-filling algorithm.},
  keywords = {convex programming;cooperative communication;decoding;energy harvesting;multi-access systems;multiuser channels;telecommunication scheduling;wireless channels;energy harvesting cooperative multiple access channel;decoding cost;MAC;physical layer;data cooperation;receiver;convex function;optimal power scheduling policy;maximum departure region boundary;energy causality constraint;generalized water-filling algorithm;Decoding;Silicon;Energy harvesting;Receivers;Europe;Signal processing;Convex functions},
  doi = {10.1109/EUSIPCO.2016.7760316},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255886.pdf},
}
@InProceedings{7760317,
  author = {A. Biason and M. Zorzi},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Transmission policies in wireless powered communication networks with energy cooperation},
  year = {2016},
  pages = {592-596},
  abstract = {Energy Harvesting (EH) has been recognized as one of the most appealing solutions for extending the devices lifetime in wireless sensor networks. Despite the vast literature available about ambient EH, in the last few years Energy Transfer (ET) has been introduced as a new and promising paradigm. With ET, it becomes possible to actively control the energy source and thus improve the network performance. We focus on two particular applications of ET which have been studied separately in the literature so far: Energy Cooperation (EC) and Wireless Powered Communication Networks (WPCNs). In the first case, energy is wirelessly shared among terminal devices according to their requirements and energy availability, whereas, in a WPCN, energy can be purposely transferred from an energy-rich network node (e.g., an access point) to terminal devices. We solve a weighted throughput optimization problem for the two-node case using optimal as well as sub-optimal schemes. Numerically, we explain the role of EC in improving the system performance.},
  keywords = {energy harvesting;telecommunication power management;wireless sensor networks;transmission policies;wireless powered communication network;energy cooperation;energy harvesting;wireless sensor network;ET network performance;energy transfer applications;WPCN;terminal device;energy-rich network node;weighted throughput optimization problem;Uplink;Batteries;Downlink;Performance evaluation;Energy exchange;Optimization;Wireless sensor networks},
  doi = {10.1109/EUSIPCO.2016.7760317},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252315.pdf},
}
@InProceedings{7760318,
  author = {C. Rohlfing and J. M. Becker},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Generalized constraints for NMF with application to informed source separation},
  year = {2016},
  pages = {597-601},
  abstract = {Nonnegative matrix factorization (NMF) is a widely used method for audio source separation. Additional constraints supporting e.g. temporal continuity or sparseness adapt NMF to the structure of audio signals even further. In this paper, we propose generalized NMF constraints which make use of prior information gathered for each component individually. In general, this information could be obtained blindly or by a training step. Here we make use of these novel constraints in an algorithm for informed audio source separation (ISS). ISS uses source separation to code audio objects by assisting a source separation step in the decoder with parameters extracted with knowledge of the sources in the encoder. In [1], a novel algorithm for ISS was proposed which makes use of an NMF step in the decoder. We show in experiments that the generalized constraints enhance the separation quality while keeping the additionally needed bit rate very low.},
  keywords = {audio coding;decoding;matrix decomposition;source separation;generalized constraint;NMF;nonnegative matrix factorization;audio source separation;ISS;audio object coding;decoder;separation quality generalized constraint enhancement;parameter extraction;Decoding;Source separation;Spectrogram;Signal processing algorithms;Encoding;Cost function;Europe},
  doi = {10.1109/EUSIPCO.2016.7760318},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251606.pdf},
}
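Reader's note: the standard baseline behind this abstract is NMF with multiplicative updates; a minimal sketch with an optional sparseness penalty on H follows. The paper's generalized, per-component constraints would contribute additional terms to these update rules, which this sketch does not include.

import numpy as np

def nmf_mu(V, rank, lam=0.0, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates for Euclidean NMF, V ~ W @ H, with an
    optional l1 (sparseness) penalty of weight lam on H (generic sketch)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # constraint enters here
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H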
@InProceedings{7760319,
  author = {S. Hafezi and A. H. Moore and P. A. Naylor},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Multiple source localization in the spherical harmonic domain using augmented intensity vectors based on grid search},
  year = {2016},
  pages = {602-606},
  abstract = {Multiple source localization is an important task in acoustic signal processing with applications including dereverberation, source separation, source tracking and environment mapping. When using spherical microphone arrays, it has been previously shown that Pseudo-intensity Vectors (PIV), and Augmented Intensity Vectors (AIV), are an effective approach for direction of arrival estimation of a sound source. In this paper, we evaluate AIV-based localization in acoustic scenarios involving multiple sound sources. Simulations are conducted where the number of sources, their angular separation and the reverberation time of the room are varied. The results indicate that AIV outperforms PIV and Steered Response Power (SRP) with an average accuracy between 5 and 10 degrees for sources with angular separation of 30 degrees or more. AIV also shows better robustness to reverberation time than PIV and SRP.},
  keywords = {acoustic signal processing;direction-of-arrival estimation;harmonics;reverberation;search problems;source separation;steered response power;angular separation;direction of arrival estimation;PIV;AIV-based localization;Pseudo-intensity vector;spherical microphone array;environment mapping applications;source tracking applications;source separation applications;dereverberation applications;acoustic signal processing;grid search;augmented intensity vector;spherical harmonic domain;multiple source localization;Direction-of-arrival estimation;Harmonic analysis;Smoothing methods;Microphone arrays;Array signal processing;Reverberation;spherical microphone arrays;localization;direction-of-arrival estimation;spherical harmonic;intensity vector},
  doi = {10.1109/EUSIPCO.2016.7760319},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251800.pdf},
}
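Reader's note: a minimal sketch of the standard pseudo-intensity vector (PIV) construction for a single source, from the zeroth- and first-order eigenbeams of a spherical array. The AIV of the paper augments this with additional spherical-harmonic terms, which are not shown; the input naming and the sign convention below are assumptions.

import numpy as np

def piv_doa(p0, px, py, pz):
    """Single-source DOA from pseudo-intensity vectors (generic sketch).

    p0         : (F, T) STFT of the zeroth-order (omnidirectional) eigenbeam.
    px, py, pz : (F, T) STFTs of the three first-order (dipole) eigenbeams.
    NB: whether the DOA is +v or -v depends on the eigenbeam sign convention.
    """
    # Pseudo-intensity per time-frequency bin: I = Re{ conj(p0) * [px, py, pz] }.
    I = np.stack([np.real(np.conj(p0) * px),
                  np.real(np.conj(p0) * py),
                  np.real(np.conj(p0) * pz)])     # shape (3, F, T)
    v = I.sum(axis=(1, 2))                        # aggregate over the T-F plane
    v /= np.linalg.norm(v) + 1e-12
    azimuth = np.arctan2(v[1], v[0])
    elevation = np.arcsin(np.clip(v[2], -1.0, 1.0))
    return azimuth, elevation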
@InProceedings{7760320,
  author = {T. Nakashika and Y. Minami},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {3WRBM-based speech factor modeling for arbitrary-source and non-parallel voice conversion},
  year = {2016},
  pages = {607-611},
  abstract = {In recent years, voice conversion (VC) becomes a popular technique since it can be applied to various speech tasks. Most existing approaches on VC must use aligned speech pairs (parallel data) of the source speaker and the target speaker in training, which makes hard to handle it. Furthermore, VC methods proposed so far require to specify the source speaker in conversion stage, even though we just want to obtain the speech of the target speaker from the other speakers in many cases of VC. In this paper, we propose a VC method where it is not necessary to use any parallel data in the training, nor to specify the source speaker in the conversion. Our approach models a joint probability of acoustic, phonetic, and speaker features using a three-way restricted Boltzmann machine (3WRBM). Speaker-independent (SI) and speaker-dependent (SD) parameters in our model are simultaneously estimated under the maximum likelihood (ML) criteria using a speech set of multiple speakers. In conversion stage, phonetic features are at first estimated in a probabilistic manner given a speech of an arbitrary speaker, then a voice-converted speech is produced using the SD parameters of the target speaker. Our experimental results showed not only that our approach outperformed other non-parallel VC methods, but that the performance of the arbitrary-source VC was close to those of the traditional source-specified VC in our approach.},
  keywords = {Boltzmann machines;maximum likelihood estimation;speech processing;3WRBM-based speech factor modeling;arbitrary-source and nonparallel voice conversion;source speaker;three-way restricted Boltzmann machine;speaker-independent parameter;speaker-dependent parameter;maximum likelihood criteria;Speech;Acoustics;Training;Data models;Probabilistic logic;Europe;Signal processing;Voice conversion;three-way restricted Boltzmann machine;unsupervised learning;speaker adaptation;nonparallel training},
  doi = {10.1109/EUSIPCO.2016.7760320},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251840.pdf},
}
In recent years, voice conversion (VC) has become a popular technique, since it can be applied to various speech tasks. Most existing VC approaches must use aligned speech pairs (parallel data) of the source and target speakers in training, which makes them difficult to apply in practice. Furthermore, the VC methods proposed so far require the source speaker to be specified at the conversion stage, even though in many VC use cases one simply wants to obtain the target speaker's voice from any other speaker. In this paper, we propose a VC method that requires neither parallel data in training nor specification of the source speaker in conversion. Our approach models the joint probability of acoustic, phonetic, and speaker features using a three-way restricted Boltzmann machine (3WRBM). The speaker-independent (SI) and speaker-dependent (SD) parameters of the model are estimated simultaneously under the maximum likelihood (ML) criterion using a speech set from multiple speakers. At the conversion stage, phonetic features are first estimated in a probabilistic manner given the speech of an arbitrary speaker; a voice-converted speech is then produced using the SD parameters of the target speaker. Our experimental results showed not only that our approach outperformed other non-parallel VC methods, but also that the performance of arbitrary-source VC was close to that of traditional source-specified VC within our approach.
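To make the three-way factorization concrete, here is a minimal sketch of the interaction structure the abstract describes. The tensor shapes, the omission of all bias terms, and the Gaussian-visible conditional mean are our simplifications for illustration, not the authors' exact formulation or training procedure (which estimates the parameters by ML over a multi-speaker corpus).

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
nv, nh, ns = 24, 64, 8                         # acoustic dims, phonetic units, speakers
W = 0.01 * rng.standard_normal((nv, nh, ns))   # three-way interaction tensor (hypothetical sizes)

v = rng.standard_normal(nv)                    # an acoustic feature frame
s_src = np.eye(ns)[3]                          # one-hot code of the input speaker

# Phonetic posterior given acoustics and speaker: p(h_j = 1 | v, s)
p_h = sigmoid(np.einsum('i,ijk,k->j', v, W, s_src))
h = (p_h > 0.5).astype(float)

# "Conversion": keep the phonetic code, swap in the target speaker, and take
# the conditional mean of the visibles (Gaussian visible units assumed here).
s_tgt = np.eye(ns)[5]
v_converted = np.einsum('ijk,j,k->i', W, h, s_tgt)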
Development and evaluation of a digital MEMS microphone array for spatial audio. Alexandridis, A.; Papadakis, S.; Pavlidi, D.; and Mouchtaris, A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 612-616, Aug 2016.
@InProceedings{7760321,\n  author = {A. Alexandridis and S. Papadakis and D. Pavlidi and A. Mouchtaris},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Development and evaluation of a digital MEMS microphone array for spatial audio},\n  year = {2016},\n  pages = {612-616},\n  abstract = {We present the design of a digital microphone array comprised of MEMS microphones and evaluate its potential for spatial audio capturing and direction-of-arrival (DOA) estimation which is an essential part of encoding the soundscape. The device is a cheaper and more compact alternative to analog microphone arrays which require external - and usually expensive - analog-to-digital converters and sound cards. However, the performance of such digital arrays for DOA estimation and spatial audio acquisition has not been investigated. In this work, the efficiency of the digital array for spatial audio is evaluated and compared to a typical analog microphone array of the same geometry. Our results indicate that our digital array achieves the same performance as its analog counterpart, thus offering a cheaper and easily deployable device, suitable for spatial audio applications.},\n  keywords = {audio signal processing;direction-of-arrival estimation;micromechanical devices;microphone arrays;digital array;spatial audio acquisition;DOA estimation;soundscape encoding;direction-of-arrival estimation;digital MEMS microphone array evaluation;Microphone arrays;Array signal processing;Direction-of-arrival estimation;Micromechanical devices;Estimation;Loudspeakers},\n  doi = {10.1109/EUSIPCO.2016.7760321},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252075.pdf},\n}\n\n
We present the design of a digital microphone array composed of MEMS microphones and evaluate its potential for spatial audio capture and direction-of-arrival (DOA) estimation, which is an essential part of encoding the soundscape. The device is a cheaper and more compact alternative to analog microphone arrays, which require external, and usually expensive, analog-to-digital converters and sound cards. However, the performance of such digital arrays for DOA estimation and spatial audio acquisition has not been investigated. In this work, the efficiency of the digital array for spatial audio is evaluated and compared to a typical analog microphone array of the same geometry. Our results indicate that the digital array achieves the same performance as its analog counterpart, thus offering a cheaper and easily deployable device suitable for spatial audio applications.
TDOA-based self-calibration of dual-microphone arrays. Farmani, M.; Heusdens, R.; Pedersen, M. S.; and Jensen, J. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 617-621, Aug 2016.
@InProceedings{7760322,\n  author = {M. Farmani and R. Heusdens and M. S. Pedersen and J. Jensen},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {TDOA-based self-calibration of dual-microphone arrays},\n  year = {2016},\n  pages = {617-621},\n  abstract = {We consider the problem of determining the relative position of dual-microphone sub-arrays. The proposed solution is mainly developed for binaural hearing aid systems (HASs), where each hearing aid (HA) in the HAS has two microphones at a known distance from each other. However, the proposed algorithm can effortlessly be applied to acoustic sensor network applications. In contrast to most state-of-the-art calibration algorithms, which model the calibration problem as a non-linear problem resulting in high computational complexity, we model the calibration problem as a simple linear system of equations by utilizing a far-field assumption. The proposed model is based on target signals time-difference-of-arrivals (TDOAs) between the HAS microphones. Working with TDOAs avoids clock synchronization between sound sources and microphones, and target signals need not be known beforehand. To solve the calibration problem, we propose a least squares estimator which is simple and does not need any probabilistic assumptions about the observed signals.},\n  keywords = {calibration;computational complexity;estimation theory;hearing aids;least squares approximations;microphone arrays;position measurement;probability;TDOA-based self-calibration algorithm;relative position determination;dual-microphone subarray;binaural hearing aid system;acoustic sensor network application;nonlinear problem;computational complexity;far-field assumption;time-difference-of-arrival algorithm;HAS microphone;clock synchronization;sound source;least square estimator;probabilistic assumption;Calibration;Microphone arrays;Hearing aids;Signal processing algorithms;Direction-of-arrival estimation;Estimation;Microphone array calibration;hearing aid;DOA;TDOA;far-field},\n  doi = {10.1109/EUSIPCO.2016.7760322},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252169.pdf},\n}\n\n
We consider the problem of determining the relative position of dual-microphone sub-arrays. The proposed solution is mainly developed for binaural hearing aid systems (HASs), where each hearing aid (HA) in the HAS has two microphones at a known distance from each other. However, the proposed algorithm can effortlessly be applied to acoustic sensor network applications. In contrast to most state-of-the-art calibration algorithms, which model the calibration problem as a non-linear problem with high computational complexity, we model it as a simple linear system of equations by utilizing a far-field assumption. The proposed model is based on the time-differences-of-arrival (TDOAs) of target signals between the HAS microphones. Working with TDOAs avoids clock synchronization between sound sources and microphones, and the target signals need not be known beforehand. To solve the calibration problem, we propose a least squares estimator which is simple and does not need any probabilistic assumptions about the observed signals.
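The far-field linearization is easy to illustrate: for a known source direction u_k, the TDOA between a microphone at local offset a on one sub-array and a microphone at offset b on the other is linear in the unknown displacement d between the arrays, so a handful of source directions gives an overdetermined linear system. A minimal two-dimensional sketch, with geometry, directions, and noise level invented for illustration (the paper's exact parametrization, which also handles orientation, may differ):

import numpy as np

# Model: tau_k = u_k^T (d + b - a) / c for far-field direction u_k.
c = 343.0
rng = np.random.default_rng(1)
d_true = np.array([0.16, 0.02])                # unknown inter-array displacement (m)
a = np.array([0.0, 0.0])                       # mic offset on sub-array A
b = np.array([0.012, 0.0])                     # mic offset on sub-array B
thetas = np.linspace(0.0, np.pi, 8)
U = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # known source directions

tau = U @ (d_true + b - a) / c + 1e-7 * rng.standard_normal(len(U))

# Linear least squares in d: (U / c) d = tau - U (b - a) / c
d_hat, *_ = np.linalg.lstsq(U / c, tau - U @ (b - a) / c, rcond=None)
print(d_hat)                                   # close to d_true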
Direction of arrival estimation in front of a reflective plane using a circular microphone array. Stefanakis, N.; and Mouchtaris, A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 622-626, Aug 2016.
@InProceedings{7760323,\n  author = {N. Stefanakis and A. Mouchtaris},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Direction of arrival estimation in front of a reflective plane using a circular microphone array},\n  year = {2016},\n  pages = {622-626},\n  abstract = {The presence of reflecting surfaces inside an enclosure is generally known to have an adverse effect in acoustic source localization and Direction of Arrival (DOA) estimation performance. In this paper, we focus on the problem of indoor multi-source DOA estimation along the horizontal plane, considering a circular sensor array which is placed just in front of one of the vertical walls of the room. We present a modification in the propagation model, which traditionally accounts for the direct path only, by incorporating also the contribution of the earliest reflection introduced by the adjacent vertical wall. Based on the traditional and the modified model, a Matched Filter and a Minimum Variance Distortionless Response beamformer are designed and tested for DOA estimation. Results with simulated and real data demonstrate the validity of the proposed model and its superiority in comparison to the traditional one.},\n  keywords = {acoustic signal processing;array signal processing;direction-of-arrival estimation;matched filters;microphone arrays;sensor arrays;direction of arrival estimation;reflective plane;circular microphone array;acoustic source localization;multisource DOA estimation;horizontal plane;circular sensor array;propagation model;matched filter;minimum variance distortionless response beamformer;Direction-of-arrival estimation;Estimation;Sensor arrays;Time-frequency analysis;Histograms;Acoustics;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760323},\n  issn = {2076-1465},\n  month = {Aug},\n}\n\n
The presence of reflecting surfaces inside an enclosure is generally known to have an adverse effect on acoustic source localization and Direction of Arrival (DOA) estimation performance. In this paper, we focus on the problem of indoor multi-source DOA estimation along the horizontal plane, considering a circular sensor array placed just in front of one of the vertical walls of the room. We present a modification to the propagation model, which traditionally accounts for the direct path only, by also incorporating the contribution of the earliest reflection introduced by the adjacent vertical wall. Based on the traditional and the modified model, a Matched Filter and a Minimum Variance Distortionless Response beamformer are designed and tested for DOA estimation. Results with simulated and real data demonstrate the validity of the proposed model and its superiority over the traditional one.
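In essence, the modified model replaces the free-field steering vector with a direct-plus-reflection one built from the image source behind the wall. Below is a minimal narrowband sketch; the array geometry, wall distance, source range, and reflection coefficient are our illustrative choices, and the spatial covariance R would in practice be estimated from STFT snapshots at the frequency of interest.

import numpy as np

c, f = 343.0, 2000.0
k = 2 * np.pi * f / c                          # wavenumber
M, radius = 8, 0.05                            # 8-mic circular array, 5 cm radius
ang = 2 * np.pi * np.arange(M) / M
mics = np.stack([0.10 + radius * np.cos(ang),  # array centre 10 cm from wall x = 0
                 radius * np.sin(ang)], axis=1)

def steering(theta, src_dist=2.0, refl=0.7):
    src = np.array([0.10, 0.0]) + src_dist * np.array([np.cos(theta), np.sin(theta)])
    img = src * np.array([-1.0, 1.0])          # image source mirrored across the wall
    d_dir = np.linalg.norm(mics - src, axis=1)
    d_ref = np.linalg.norm(mics - img, axis=1)
    return np.exp(-1j * k * d_dir) / d_dir + refl * np.exp(-1j * k * d_ref) / d_ref

def mvdr_spectrum(R, thetas):
    # R would come from data, e.g. R = X_f @ X_f.conj().T / n_snap for snapshots X_f
    Ri = np.linalg.inv(R + 1e-6 * np.eye(M))   # diagonal loading for stability
    a = np.stack([steering(t) for t in thetas])
    return 1.0 / np.real(np.einsum('ti,ij,tj->t', a.conj(), Ri, a))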
DSP based OFDM receiver for time-varying underwater acoustic channels. Peng, B.; Rossi, P. S.; Dong, H.; and Kansanen, K. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 627-631, Aug 2016.
@InProceedings{7760324,\n  author = {B. Peng and P. S. Rossi and H. Dong and K. Kansanen},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {DSP based OFDM receiver for time-varying underwater acoustic channels},\n  year = {2016},\n  pages = {627-631},\n  abstract = {This paper presents a digital signal processor (DSP) based iterative orthogonal frequency division multiplexing (OFDM) receiver for time-varying underwater acoustic channels. The time-varying channel is modelled by basis expansion model (BEM). To explore the inherent sparsity of underwater acoustic channel, orthogonal matching pursuit (OMP) algorithm is adopted. A fast Fourier transform (FFT) based low-complexity implementation of OMP algorithm is employed to reduce computational complexity and save memory space. The receiving system is implemented on a multi-core DSP evaluation module TMDX-EVM6678L. The performance of proposed receiving system is validated through experimental data collected from real world. With two cores running at 1 GHz, the real-time processing is achieved and the system performance is satisfying.},\n  keywords = {iterative methods;OFDM modulation;radio receivers;time-varying channels;underwater acoustic communication;wireless channels;DSP based OFDM receiver;time-varying underwater acoustic channels;digital signal processor based iterative orthogonal frequency division multiplexing;time-varying underwater acoustic channels;basis expansion model;time-varying channel;orthogonal matching pursuit;fast Fourier transform;FFT based low-complexity;OMP algorithm;computational complexity;multicore DSP evaluation module;TMDX-EVM6678L;system performance;frequency 1 GHz;Digital signal processing;OFDM;Receivers;Channel estimation;Random access memory;Underwater acoustics;Matching pursuit algorithms;Underwater acoustic communication;DSP;Iterative receiver;orthogonal frequency division multiplexing (OFDM)},\n  doi = {10.1109/EUSIPCO.2016.7760324},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251023.pdf},\n}\n\n
This paper presents a digital signal processor (DSP) based iterative orthogonal frequency division multiplexing (OFDM) receiver for time-varying underwater acoustic channels. The time-varying channel is modelled by a basis expansion model (BEM). To exploit the inherent sparsity of the underwater acoustic channel, the orthogonal matching pursuit (OMP) algorithm is adopted. A fast Fourier transform (FFT) based low-complexity implementation of the OMP algorithm is employed to reduce computational complexity and save memory. The receiving system is implemented on a multi-core DSP evaluation module, the TMDX-EVM6678L, and its performance is validated on real-world experimental data. With two cores running at 1 GHz, real-time processing is achieved and the system performance is satisfactory.
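For reference, the greedy structure of OMP fits in a few lines. This is the plain dense-matrix version with a fixed number of iterations; the paper's FFT-based low-complexity variant and the BEM dictionary construction are not reproduced (our simplification, assuming y = A h with a sparse tap vector h):

import numpy as np

def omp(A, y, n_taps):
    """Orthogonal matching pursuit: greedily select columns of A, then
    re-fit all selected coefficients by least squares (assumes n_taps >= 1)."""
    residual, support = y.astype(complex), []
    for _ in range(n_taps):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    h = np.zeros(A.shape[1], dtype=complex)
    h[support] = coef
    return h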
New evaluation scheme for software function approximation with non-uniform segmentation. Bonnot, J.; Nogues, E.; and Menard, D. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 632-636, Aug 2016.
@InProceedings{7760325,\n  author = {J. Bonnot and E. Nogues and D. Menard},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {New evaluation scheme for software function approximation with non-uniform segmentation},\n  year = {2016},\n  pages = {632-636},\n  abstract = {Modern applications embed complex mathematical processing based on composition of elementary functions. A good balance between approximation accuracy, and implementation cost, i.e. memory space requirement and computation time, is needed to design an efficient implementation. From this point of view, approaches working with polynomial approximation obtain results of a monitored accuracy with a moderate implementation cost. For software implementation in fixed-point processors, accurate results can be obtained if the segment on which the function is computed I is segmented accurately enough, to have an approximating polynomial on each segment. Non-uniform segmentation is required to limit the number of segments and then the implementation cost. The proposed recursive scheme exploits the trade-off between memory requirement and evaluation time. The method is illustrated with the function exp(-√(x)) on the segment [2-6; 25] and showed a mean speed-up ratio of 98.7 compared to the mathematical C standard library on the Digital Signal Processor C55x.},\n  keywords = {digital signal processing chips;polynomial approximation;software function approximation;nonuniform segmentation;complex mathematical processing;elementary functions;polynomial approximation;fixed-point processors;Digital Signal Processor C55x;Approximation algorithms;Hardware;Software;Memory management;Approximation error;Indexing},\n  doi = {10.1109/EUSIPCO.2016.7760325},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252091.pdf},\n}\n\n
Modern applications embed complex mathematical processing based on compositions of elementary functions. A good balance between approximation accuracy and implementation cost, i.e. memory requirement and computation time, is needed to design an efficient implementation. From this point of view, approaches based on polynomial approximation obtain results of controlled accuracy with a moderate implementation cost. For software implementations on fixed-point processors, accurate results can be obtained if the interval I on which the function is evaluated is segmented finely enough to support an approximating polynomial on each segment. Non-uniform segmentation is required to limit the number of segments and thus the implementation cost. The proposed recursive scheme exploits the trade-off between memory requirement and evaluation time. The method is illustrated with the function exp(-√x) on the interval [2^-6, 2^5] and showed a mean speed-up ratio of 98.7 compared to the C standard mathematical library on the C55x Digital Signal Processor.
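The non-uniform segmentation idea can be sketched with plain recursive bisection: split an interval until a low-degree least-squares polynomial meets the error budget on every segment. The degree, tolerance, and test grid below are arbitrary choices of ours, and the paper's scheme additionally arranges segment boundaries so the segment index is cheap to compute on fixed-point hardware, which this floating-point sketch ignores.

import numpy as np

def segment(f, lo, hi, tol, deg=2, n_test=64):
    """Recursively bisect [lo, hi] until a degree-`deg` fit is within tol."""
    x = np.linspace(lo, hi, n_test)
    coef = np.polyfit(x, f(x), deg)
    if np.max(np.abs(np.polyval(coef, x) - f(x))) <= tol:
        return [(lo, hi, coef)]
    mid = 0.5 * (lo + hi)
    return segment(f, lo, mid, tol, deg, n_test) + segment(f, mid, hi, tol, deg, n_test)

f = lambda x: np.exp(-np.sqrt(x))
segs = segment(f, 2.0 ** -6, 2.0 ** 5, tol=1e-6)
print(len(segs), "segments")    # segments cluster near 2^-6, where curvature is largest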
Optimization of parallel processing intensive digital front-end for IEEE 802.11ac receiver. Yli-Kaakinen, J.; Levanen, T.; Aghababaeetafreshi, M.; Renfors, M.; and Valkama, M. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 637-641, Aug 2016.
@InProceedings{7760326,\n  author = {J. Yli-Kaakinen and T. Levanen and M. Aghababaeetafreshi and M. Renfors and M. Valkama},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Optimization of parallel processing intensive digital front-end for IEEE 802.11ac receiver},\n  year = {2016},\n  pages = {637-641},\n  abstract = {Modern computing platforms offer increasing levels of parallelism for the fast execution of different signal processing tasks. In this paper a digital front-end concept is developed, where the parallel processing is utilized for dividing the inherent structure of IEEE 802.11ac waveform to two or more parallel signals and by processing the resulting signals further e.g, using legacy IEEE 802.11n digital receiver chains. Two multirate channelization architectures are developed with the corresponding filter coefficient optimization. The full radio link performance simulations with commonly adopted indoor WiFi channel profiles are provided, verifying the overall link performance with the proposed channelization architectures.},\n  keywords = {parallel processing;radio receivers;wireless LAN;IEEE 802.11ac receiver;parallel processing intensive digital front-end;computing platforms;signal processing tasks;inherent structure;IEEE 802.11ac waveform;parallel signals;IEEE 802.11n digital receiver chains;multirate channelization architectures;filter coefficient optimization;radio link performance simulations;indoor WiFi channel profiles;Optimization;Equalizers;Passband;Computer architecture;Convolution;Complexity theory},\n  doi = {10.1109/EUSIPCO.2016.7760326},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252123.pdf},\n}\n\n
Modern computing platforms offer increasing levels of parallelism for the fast execution of signal processing tasks. In this paper a digital front-end concept is developed in which parallel processing is used to divide the inherently structured IEEE 802.11ac waveform into two or more parallel signals, which are then processed further using, e.g., legacy IEEE 802.11n digital receiver chains. Two multirate channelization architectures are developed, together with the corresponding filter coefficient optimization. Full radio link performance simulations with commonly adopted indoor WiFi channel profiles are provided, verifying the overall link performance of the proposed channelization architectures.
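A toy version of the two-channel case: shift each half of a wide channel to baseband, low-pass filter, and decimate, so each resulting stream can be handled by a legacy narrower-band chain. The sampling rate, sub-band centres, and filter specification below are illustrative assumptions of ours, not the optimized filters of the paper.

import numpy as np
from scipy.signal import firwin, lfilter

fs = 160e6                                     # ADC rate covering an 80 MHz channel
n = 4096
rng = np.random.default_rng(0)
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

h = firwin(129, cutoff=38e6, fs=fs)            # low-pass ahead of decimation by 2
t = np.arange(n) / fs
streams = []
for f_c in (-20e6, +20e6):                     # centres of the two sub-bands
    xb = x * np.exp(-2j * np.pi * f_c * t)     # shift the sub-band to DC
    streams.append(lfilter(h, 1.0, xb)[::2])   # filter, then keep every 2nd sample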
FPGA implementation of a cyclostationary detector for OFDM signals. Allan, D.; Crockett, L.; Weiss, S.; Stuart, K.; and Stewart, R. W. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 647-651, Aug 2016.
@InProceedings{7760328,\n  author = {D. Allan and L. Crockett and S. Weiss and K. Stuart and R. W. Stewart},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {FPGA implementation of a cyclostationary detector for OFDM signals},\n  year = {2016},\n  pages = {647-651},\n  abstract = {Due to the ubiquity of Orthogonal Frequency Division Multiplexing (OFDM) based communications standards such as IEEE 802.11 a/g/n and 3GPP Long Term Evolution (LTE), a growing interest has developed in techniques for reliably detecting the presence of these signals in dynamic radio systems. A popular approach for detection is to exploit the cyclostationary nature of OFDM communications signals. In this paper, we focus on a frequency domain cyclostationary detection algorithm first introduced by Giannakis and Dandawate and study its performance in detecting IEEE 802.11a OFDM signals in the presence of practical radio impairments such as Carrier Frequency offset (CFO), Phase Noise, I/Q Imbalance, Multipath Fading and DC offset. We then present a hardware implementation of this algorithm developed using MathWorks HDL Coder and provide implementation results after targeting to a Xilinx 7 Series FPGA device.},\n  keywords = {field programmable gate arrays;frequency-domain analysis;Long Term Evolution;OFDM modulation;phase noise;wireless LAN;FPGA implementation;cyclostationary detector;OFDM communication signal;orthogonal frequency division multiplexing;IEEE 802.11 a/g/n;3GPP;Long Term Evolution;LTE;dynamic radio system;frequency domain cyclostationary detection algorithm;carrier frequency offset;CFO;phase noise;I/Q imbalance;multipath fading;DC offset;MathWorks HDL coder;Xilinx 7 Series FPGA device;OFDM;Detectors;Signal to noise ratio;Phase noise;Frequency-domain analysis;Correlation;Hardware design languages},\n  doi = {10.1109/EUSIPCO.2016.7760328},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256211.pdf},\n}\n\n
Due to the ubiquity of Orthogonal Frequency Division Multiplexing (OFDM) based communications standards such as IEEE 802.11 a/g/n and 3GPP Long Term Evolution (LTE), a growing interest has developed in techniques for reliably detecting the presence of these signals in dynamic radio systems. A popular approach for detection is to exploit the cyclostationary nature of OFDM communications signals. In this paper, we focus on a frequency domain cyclostationary detection algorithm first introduced by Giannakis and Dandawate and study its performance in detecting IEEE 802.11a OFDM signals in the presence of practical radio impairments such as carrier frequency offset (CFO), phase noise, I/Q imbalance, multipath fading and DC offset. We then present a hardware implementation of this algorithm developed using MathWorks HDL Coder and provide implementation results after targeting a Xilinx 7 Series FPGA device.
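The property being detected is easy to reproduce: the cyclic prefix makes an OFDM signal correlate with itself at a lag equal to the useful symbol length. The statistic below is a simplified, energy-normalized stand-in for the Dandawate-Giannakis test (no asymptotic threshold, no cycle-frequency sweep), using an 802.11a-like numerology of our choosing:

import numpy as np

def cp_statistic(x, n_fft, n_cp):
    """Normalized peak of the symbol-averaged lag-N correlation; large when a
    cyclic prefix of length n_cp repeats every n_fft + n_cp samples."""
    L = n_fft + n_cp
    r = x[:-n_fft] * np.conj(x[n_fft:])        # lag-N correlation terms
    n_sym = len(r) // L
    r_bar = r[:n_sym * L].reshape(n_sym, L).mean(axis=0)
    return np.max(np.abs(r_bar)) / np.mean(np.abs(x) ** 2)

# 802.11a-like numerology: N = 64 subcarriers, 16-sample cyclic prefix.
rng = np.random.default_rng(0)
sym = rng.standard_normal((200, 64)) + 1j * rng.standard_normal((200, 64))
t = np.fft.ifft(sym, axis=1)
ofdm = np.hstack([np.hstack([row[-16:], row]) for row in t])
ofdm /= np.sqrt(np.mean(np.abs(ofdm) ** 2))    # unit signal power
noise = 0.3 * (rng.standard_normal(ofdm.shape) + 1j * rng.standard_normal(ofdm.shape)) / np.sqrt(2)
print(cp_statistic(ofdm + noise, 64, 16))      # clearly above...
print(cp_statistic(noise, 64, 16))             # ...the noise-only value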
Investigations into Bluetooth low energy localization precision limits. Schmalenstroeer, J.; and Haeb-Umbach, R. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 652-656, Aug 2016.
@InProceedings{7760329,\n  author = {J. Schmalenstroeer and R. Haeb-Umbach},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Investigations into Bluetooth low energy localization precision limits},\n  year = {2016},\n  pages = {652-656},\n  abstract = {In this paper we study the influence of directional radio patterns of Bluetooth low energy (BLE) beacons on smartphone localization accuracy and beacon network planning. A two-dimensional model of the power emission characteristic is derived from measurements of the radiation pattern of BLE beacons carried out in an RF chamber. The Cramer-Rao lower bound (CRLB) for position estimation is then derived for this directional power emission model. With this lower bound on the RMS positioning error the coverage of different beacon network configurations can be evaluated. For near-optimal network planing an evolutionary optimization algorithm for finding the best beacon placement is presented.},\n  keywords = {Bluetooth;evolutionary computation;telecommunication network planning;Bluetooth low energy localization precision limits;BLE beacons;directional radio patterns;beacon network planning;smartphone localization accuracy;two-dimensional model;power emission characteristic;Cramer-Rao lower bound;CRLB;directional power emission model;beacon network configurations;near-optimal network planing;evolutionary optimization;beacon placement;Signal processing;Power measurement;Antenna radiation patterns;Europe;Bluetooth;Antenna measurements;Planning;BLE;localization precision;CRLB;evolutionary algorithm},\n  doi = {10.1109/EUSIPCO.2016.7760329},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250748.pdf},\n}\n\n
In this paper we study the influence of the directional radio patterns of Bluetooth low energy (BLE) beacons on smartphone localization accuracy and beacon network planning. A two-dimensional model of the power emission characteristic is derived from measurements of the radiation pattern of BLE beacons carried out in an RF chamber. The Cramer-Rao lower bound (CRLB) for position estimation is then derived for this directional power emission model. With this lower bound on the RMS positioning error, the coverage of different beacon network configurations can be evaluated. For near-optimal network planning, an evolutionary optimization algorithm for finding the best beacon placement is presented.
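For the isotropic special case, the position CRLB for received-signal-strength measurements takes only a few lines: under a log-distance path-loss model, the Fisher information sums outer products of the mean-power gradients. The path-loss exponent and shadowing deviation below are typical indoor values we picked; the paper's directional emission model would enter through an additional angle-dependent term.

import numpy as np

def crlb_rms(p, beacons, n_pl=2.0, sigma_db=4.0):
    """RMS position error bound at p for RSS measurements from `beacons`,
    log-distance model: mean RSS = P0 - 10 n_pl log10(d)."""
    fim = np.zeros((2, 2))
    for b in beacons:
        v = p - b
        g = -(10.0 * n_pl / np.log(10.0)) * v / (v @ v)   # gradient of mean RSS
        fim += np.outer(g, g) / sigma_db ** 2
    return np.sqrt(np.trace(np.linalg.inv(fim)))

beacons = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
print(crlb_rms(np.array([2.0, 3.0]), beacons))            # bound in metres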
Low complexity FRI based sampling scheme for UWB channel estimation. Yaacoub, T.; Youssef, R.; Radoi, E.; and Burel, G. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 657-661, Aug 2016.
@InProceedings{7760330,\n  author = {T. Yaacoub and R. Youssef and E. Radoi and G. Burel},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Low complexity FRI based sampling scheme for UWB channel estimation},\n  year = {2016},\n  pages = {657-661},\n  abstract = {In this paper, we propose a low complexity multichannel scheme for Ultra Wideband (UWB) channel impulse response estimation. It is mainly based on the finite rate of innovation (FRI) characteristic of UWB channel impulse response, which allows for a sampling frequency much lower than the Nyquist limit. Since the UWB channel is rich in multipaths, the number of samples required results in an unrealistic number of processing channels. Our approach removes this drawback at the price of a moderate increase of the number of pilot pulses. Compared to other schemes presented in the literature, the one proposed in this paper allows reducing the number of processing channels to values appropriate for practical implementation. Moreover, the same approach is used to further reduce the sampling frequency at each channel. The effectiveness of the proposed approach is demonstrated for IEEE 802.15.3a UWB channel estimation in a coherent reception framework.},\n  keywords = {channel estimation;multipath channels;sampling methods;signal sampling;transient response;ultra wideband communication;low-complexity FRI-based sampling scheme;UWB channel impulse response estimation;ultra wideband channel impulse response estimation;finite rate of innovation characteristic;multipath channel;pilot pulse;IEEE 802.15.3a UWB channel estimation;coherent reception framework;Channel estimation;Technological innovation;Europe;Signal processing;Complexity theory;Multipath channels;Ultra wideband technology},\n  doi = {10.1109/EUSIPCO.2016.7760330},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252237.pdf},\n}\n\n
In this paper, we propose a low-complexity multichannel scheme for Ultra Wideband (UWB) channel impulse response estimation. It is mainly based on the finite rate of innovation (FRI) character of the UWB channel impulse response, which allows a sampling frequency much lower than the Nyquist limit. Since the UWB channel is rich in multipath, the number of samples required results in an unrealistic number of processing channels. Our approach removes this drawback at the price of a moderate increase in the number of pilot pulses. Compared to other schemes in the literature, the one proposed here reduces the number of processing channels to values appropriate for practical implementation. Moreover, the same approach is used to further reduce the sampling frequency of each channel. The effectiveness of the proposed approach is demonstrated for IEEE 802.15.3a UWB channel estimation in a coherent reception framework.
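The core FRI machinery such schemes build on is the annihilating filter: a handful of Fourier coefficients of a stream of Diracs (here, path delays) determines a filter whose roots encode the delay locations. A noiseless toy sketch with exact coefficients; the paper's multichannel acquisition front-end and any denoising are not modelled:

import numpy as np

K = 3                                          # number of paths
t_true = np.array([0.12, 0.37, 0.80])          # delays, normalized to [0, 1)
a_true = np.array([1.0, -0.6, 0.3])            # path amplitudes
m = np.arange(-K, K + 1)                       # 2K + 1 Fourier indices
u = np.exp(-2j * np.pi * t_true)
X = (a_true * u[None, :] ** m[:, None]).sum(axis=1)   # Fourier coefficients

# Annihilating filter h of length K + 1: null vector of a Toeplitz system T h = 0
T = np.array([[X[K + i - j] for j in range(K + 1)] for i in range(K)])
h = np.linalg.svd(T)[2][-1].conj()
t_hat = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1.0))
print(t_hat)                                   # ~ [0.12, 0.37, 0.80]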
Automatic objective thresholding to detect neuronal action potentials. Tanskanen, J. M. A.; Kapucu, F. E.; Vornanen, I.; and Hyttinen, J. A. K. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 662-666, Aug 2016.
@InProceedings{7760331,\n  author = {J. M. A. Tanskanen and F. E. Kapucu and I. Vornanen and J. A. K. Hyttinen},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic objective thresholding to detect neuronal action potentials},\n  year = {2016},\n  pages = {662-666},\n  abstract = {In this paper, we introduce a fully objective method to set thresholds (THs) for neuronal action potential spike detection from extracellular field potential signals. Although several more sophisticated methods exist, thresholding is still the most used spike detection method. In general, it is employed by setting a TH as per convention or operator decision, and without considering either the undetected or spurious spikes. Here, we demonstrate with both simulations and real microelectrode measurement data that our method can fully automatically and objectively yield THs comparable to those set by an expert operator. A Matlab function implementation of the method is described, and provided freely in Matlab Central File Exchange.},\n  keywords = {signal processing;automatic objective thresholding;neuronal action potentials detection;set thresholds;extracellular field potential signals;spike detection method;operator decision;real microelectrode measurement data;Matlab central file exchange;MATLAB;Noise measurement;Signal processing algorithms;Histograms;Electric potential;In vivo;Europe;neuronal action potential;thresholding;spike detection;microelectrode array;field potential},\n  doi = {10.1109/EUSIPCO.2016.7760331},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256317.pdf},\n}\n\n
In this paper, we introduce a fully objective method to set thresholds (THs) for neuronal action potential spike detection from extracellular field potential signals. Although several more sophisticated methods exist, thresholding is still the most widely used spike detection method. In general, a TH is set by convention or by operator decision, without considering either the undetected or the spurious spikes. Here, we demonstrate with both simulations and real microelectrode measurement data that our method can fully automatically and objectively yield THs comparable to those set by an expert operator. A Matlab implementation of the method is described and provided freely on Matlab Central File Exchange.
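For orientation, here is the conventional baseline such a method competes with (not the paper's objective procedure): a threshold at a multiple of the noise standard deviation, estimated robustly from the median absolute deviation so that the spikes themselves do not inflate it. The multiplier and refractory length are common illustrative values.

import numpy as np

def detect_spikes(x, k=4.5, refractory=30):
    """Threshold crossings at k robust-sigmas, with a refractory gap
    (in samples) so a single spike is not counted repeatedly."""
    sigma = np.median(np.abs(x)) / 0.6745      # robust noise std (MAD-based)
    th = k * sigma
    idx = np.flatnonzero(np.abs(x) > th)
    spikes, last = [], -refractory
    for i in idx:
        if i - last >= refractory:
            spikes.append(i)
            last = i
    return np.array(spikes, dtype=int), th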
Mining the bilinear structure of data with approximate joint diagonalization. Korczowski, L.; Bouchard, F.; Jutten, C.; and Congedo, M. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 667-671, Aug 2016.
@InProceedings{7760332,\n  author = {L. Korczowski and F. Bouchard and C. Jutten and M. Congedo},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Mining the bilinear structure of data with approximate joint diagonalization},\n  year = {2016},\n  pages = {667-671},\n  abstract = {Approximate Joint Diagonalization of a matrix set can solve the linear Blind Source Separation problem. If the data possesses a bilinear structure, for example a spatio-temporal structure, transformations such as tensor decomposition can be applied. In this paper we show how the linear and bilinear joint diagonalization can be applied for extracting sources according to a composite model where some of the sources have a linear structure and other a bilinear structure. This is the case of Event Related Potentials (ERPs). The proposed model achieves higher performance in term of shape and robustness for the estimation of ERP sources in a Brain Computer Interface experiment.},\n  keywords = {blind source separation;bilinear data structure;approximate joint diagonalization;linear blind source separation;tensor decomposition;event related potentials;ERP sources;brain computer interface;Brain modeling;Covariance matrices;Estimation;Cost function;Transmission line matrix methods;Jacobian matrices;Data models},\n  doi = {10.1109/EUSIPCO.2016.7760332},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256397.pdf},\n}\n\n
Approximate Joint Diagonalization of a matrix set can solve the linear Blind Source Separation problem. If the data possess a bilinear structure, for example a spatio-temporal structure, transformations such as tensor decompositions can be applied. In this paper we show how linear and bilinear joint diagonalization can be applied to extract sources according to a composite model in which some of the sources have a linear structure and others a bilinear structure. This is the case for Event Related Potentials (ERPs). The proposed model achieves higher performance in terms of shape and robustness for the estimation of ERP sources in a Brain Computer Interface experiment.
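A useful reference point: for a set of exactly two symmetric matrices, joint diagonalization is exactly solvable via the generalized eigendecomposition; approximate joint diagonalization extends this to larger, noisy sets where only an approximate common diagonalizer exists. A toy sketch with an invented mixing matrix:

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))                      # hidden mixing matrix
C1 = A @ np.diag(rng.uniform(1, 2, 4)) @ A.T         # two covariance-like
C2 = A @ np.diag(rng.uniform(1, 2, 4)) @ A.T         # matrices, same mixing

w, V = eigh(C1, C2)            # generalized EVD: C1 v = w C2 v, with V^T C2 V = I
B = V.T                        # joint diagonalizer, up to scale and permutation
print(np.round(B @ C1 @ B.T, 6))                     # ~ diagonal
print(np.round(B @ C2 @ B.T, 6))                     # ~ diagonal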
On high-accuracy direct digital frequency synthesis using linear function approximation. Rust, J.; Bärthel, M.; and Paul, S. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 672-676, Aug 2016.
@InProceedings{7760333,\n  author = {J. Rust and M. Bärthel and S. Paul},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {On high-accuracy direct digital frequency synthesis using linear function approximation},\n  year = {2016},\n  pages = {672-676},\n  abstract = {This paper describes a novel design method for direct digital frequency synthesizers with very high accuracy. To this end, we leverage the well-known piecewise, multiplier-less function approximation method by two separate enhancements: A parallel function estimation scheme is applied which increases the approximation accuracy and reduces the segmentation effort. To achieve further performance improvement, gradient encoding is also taken into account. For evaluation, several direct digital frequency synthesizer architectures with varying accuracies are generated and analyzed in terms of complexity and timing. Logic and physical synthesis is performed with selected candidates. The results indicate the proposed function approximation technique as a powerful approach for the design of direct digital frequency synthesizers with spurious free dynamic ranges of 90dBc and more.},\n  keywords = {direct digital synthesis;function approximation;logic design;performance evaluation;spurious free dynamic ranges;physical synthesis;logic synthesis;gradient encoding;performance improvement;approximation accuracy;parallel function estimation;piecewise multiplier-less function approximation;linear function approximation;high-accuracy direct digital frequency synthesis;Function approximation;Multiplexing;Hardware;Signal processing;Frequency synthesizers;Estimation;Complexity theory;Direct Digital Frequency Synthesis;Advanced Linear Function Approximation;Elementary Functions},\n  doi = {10.1109/EUSIPCO.2016.7760333},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570254555.pdf},\n}\n\n
This paper describes a novel design method for direct digital frequency synthesizers with very high accuracy. To this end, we extend the well-known piecewise, multiplier-less function approximation method with two separate enhancements: a parallel function estimation scheme is applied, which increases the approximation accuracy and reduces the segmentation effort, and gradient encoding is taken into account for further performance improvement. For evaluation, several direct digital frequency synthesizer architectures with varying accuracies are generated and analyzed in terms of complexity and timing, and logic and physical synthesis is performed for selected candidates. The results indicate that the proposed function approximation technique is a powerful approach for the design of direct digital frequency synthesizers with spurious free dynamic ranges of 90 dBc and more.
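The baseline structure being enhanced is worth seeing in miniature: a phase accumulator addresses a coarse sine table, and linear interpolation between adjacent entries is exactly a piecewise-linear approximation of the sine. The word lengths and table size below are our illustrative picks, and a hardware version would use fixed-point arithmetic rather than floats.

import numpy as np

ACC_BITS, TABLE_BITS = 32, 8
FRAC_BITS = ACC_BITS - TABLE_BITS
table = np.sin(2 * np.pi * np.arange(2 ** TABLE_BITS + 1) / 2 ** TABLE_BITS)

def ddfs(fcw, n):
    """n output samples of a sine at fout = fcw / 2^ACC_BITS * fclk."""
    phase = (fcw * np.arange(n, dtype=np.int64)) % (1 << ACC_BITS)
    idx = (phase >> FRAC_BITS).astype(int)            # table segment index
    frac = (phase & ((1 << FRAC_BITS) - 1)) / float(1 << FRAC_BITS)
    return table[idx] + frac * (table[idx + 1] - table[idx])  # linear segment

x = ddfs(fcw=1 << 24, n=4096)    # fout = fclk / 256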
Nuclear norm regularized robust dictionary learning for energy disaggregation. Gupta, M.; and Majumdar, A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 677-681, Aug 2016.
@InProceedings{7760334,\n  author = {M. Gupta and A. Majumdar},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Nuclear norm regularized robust dictionary learning for energy disaggregation},\n  year = {2016},\n  pages = {677-681},\n  abstract = {The goal of this work is energy disaggregation. A recent work showed that instead of employing the usual Euclidean norm cost function for dictionary learning, better results can be achieved by learning the dictionaries in a robust fashion by employing an l1-norm cost function; this is because energy data is corrupted by large but sparse outliers. In this work we propose to improve the robust dictionary learning approach by imposing low-rank penalty on the learned coefficients. The ensuing formulation is solved using a combination of Split Bregman and Majorization Minimization approach. Experiments on the REDD dataset reveal that our proposed method yields better results than both the robust dictionary learning technique and the recently published work on powerlet energy disaggregation.},\n  keywords = {minimax techniques;power consumption;power engineering computing;signal processing;nuclear norm regularized robust dictionary learning;energy disaggregation;l1-norm cost function;energy data;sparse outliers;robust dictionary learning;low-rank penalty;Split Bregman approach;majorization minimization;REDD dataset;Home appliances;Dictionaries;Robustness;Load modeling;Training;Cost function;Minimization;Energy Disaggregation;Dictionary Learning;Robust Learning},\n  doi = {10.1109/EUSIPCO.2016.7760334},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255295.pdf},\n}\n\n
The goal of this work is energy disaggregation. A recent work showed that instead of employing the usual Euclidean-norm cost function for dictionary learning, better results can be achieved by learning the dictionaries in a robust fashion with an l1-norm cost function; this is because energy data is corrupted by large but sparse outliers. In this work we propose to improve the robust dictionary learning approach by imposing a low-rank penalty on the learned coefficients. The ensuing formulation is solved using a combination of the Split Bregman and Majorization-Minimization approaches. Experiments on the REDD dataset reveal that our proposed method yields better results than both the robust dictionary learning technique and the recently published work on powerlet energy disaggregation.
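The ingredient a low-rank penalty adds to each solver iteration is the proximal operator of the nuclear norm, i.e. soft-thresholding of singular values. A minimal sketch (threshold value and test matrix invented for illustration); the surrounding robust dictionary-learning loop is not reproduced:

import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm at Z."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
Z = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 40))   # rank 5
print(np.linalg.matrix_rank(svt(Z + 0.1 * rng.standard_normal(Z.shape), 2.0)))
# typically recovers rank ~ 5 from the noisy matrix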
A new approach for multi-dimensional signal processing and modelling for signals from gel electrophoresis. Yousif, E. H. G.; Hopgood, J. R.; Thompson, J. S.; and Davies, M. E. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 682-686, Aug 2016.
@InProceedings{7760335,\n  author = {E. H. G. Yousif and J. R. Hopgood and J. S. Thompson and M. E. Davies},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {A new approach for multi-dimensional signal processing and modelling for signals from gel electrophoresis},\n  year = {2016},\n  pages = {682-686},\n  abstract = {In this paper, a multi-channel multi-dimensional approach is investigated for modeling of signals obtained from DNA gel electrophoresis. Related applications include DNA fingerprinting and crime scene investigations. In order to improve resolution and accuracy of modeling, a novel approach is employed based on using equidistant multi-capture data frames obtained over an extended span of time. The multidimensional signal is rescaled and aligned which improves resolution, then the signal is modeled as a surface that varies with both the time index and separation size. The overall approach is tested on a number of datasets. The simulation results show that the proposed approach can be used as a starting multi-dimensional time series model for raw signals obtained from gel electrophoresis.},\n  keywords = {DNA;electrophoresis;fingerprint identification;medical signal processing;time series;multidimensional signal processing;multidimensional signal modelling;multichannel multidimensional approach;DNA gel electrophoresis;DNA fingerprinting;crime scene;equidistant multicapture data frames;resolution improvement;time index;separation size;multidimensional time series model;deoxyribonucleic acid;DNA;Signal processing;Indexes;Time series analysis;Mathematical model;Noise measurement;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760335},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255904.pdf},\n}\n\n
In this paper, a multi-channel, multi-dimensional approach is investigated for modelling signals obtained from DNA gel electrophoresis. Related applications include DNA fingerprinting and crime scene investigation. In order to improve the resolution and accuracy of the modelling, a novel approach is employed, based on equidistant multi-capture data frames obtained over an extended span of time. The multidimensional signal is rescaled and aligned, which improves resolution; the signal is then modelled as a surface that varies with both the time index and the separation size. The overall approach is tested on a number of datasets. The simulation results show that the proposed approach can be used as a starting multi-dimensional time series model for raw signals obtained from gel electrophoresis.
The data-centre whisperer: Relative attribute usage estimation for cloud servers. de Fréin, R. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 687-691, Aug 2016.
@InProceedings{7760336,\n  author = {R. {de Fréin}},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {The data-centre whisperer: Relative attribute usage estimation for cloud servers},\n  year = {2016},\n  pages = {687-691},\n  abstract = {We show that the relative usage of the different attributes of a cloud server can be estimated under time-varying loads. We demonstrate the effectiveness of these estimators by determining how user requests for video-from a video server-affects its usage. Relative Attribute Usage (RAU) estimators are designed by (1) formulating a generative model for the server attributes; (2) using the fact that the load signal has compact support compared to non-idealities in the server's behaviour in the time-frequency domain; and (3) using power-weighting to refine the estimates. The resulting estimators have low complexity. This motivates their candidacy when attribute usage estimates are required for run-time outage diagnosis routines, a task which is commonly referred to as “data-centre whispering”. We demonstrate the application of these estimators on a Cloud-testbed in three practical scenarios, when the server is under a (1) periodic, (2) step-increasing and (3) flash-crowd load.},\n  keywords = {cloud computing;computer centres;power aware computing;time-frequency analysis;data-centre whisperer;relative attribute usage estimation;cloud servers;time-varying loads;video server;RAU estimation;load signal;server behaviour;time-frequency domain;Servers;Sockets;Load modeling;Estimation;Cloud computing;Indexes;Discrete Fourier transforms;Power-weighted estimators;Blind Source Separation;Video-on-Demand},\n  doi = {10.1109/EUSIPCO.2016.7760336},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256133.pdf},\n}\n\n
We show that the relative usage of the different attributes of a cloud server can be estimated under time-varying loads. We demonstrate the effectiveness of these estimators by determining how user requests for video, served by a video server, affect its attribute usage. Relative Attribute Usage (RAU) estimators are designed by (1) formulating a generative model for the server attributes; (2) using the fact that the load signal has compact support in the time-frequency domain compared to non-idealities in the server's behaviour; and (3) using power-weighting to refine the estimates. The resulting estimators have low complexity. This motivates their candidacy when attribute usage estimates are required for run-time outage diagnosis routines, a task commonly referred to as “data-centre whispering”. We demonstrate the application of these estimators on a Cloud testbed in three practical scenarios: when the server is under (1) a periodic, (2) a step-increasing, and (3) a flash-crowd load.
Localization of radar emitters from a single sensor using multipath and TDOA-AOA measurements in a naval context. Giacometti, R.; Baussard, A.; Jahan, D.; Cornu, C.; Khenchaf, A.; and Quellec, J. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 692-696, Aug 2016.
@InProceedings{7760337,\n  author = {R. Giacometti and A. Baussard and D. Jahan and C. Cornu and A. Khenchaf and J. Quellec},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Localization of radar emitters from a single sensor using multipath and TDOA-AOA measurements in a naval context},\n  year = {2016},\n  pages = {692-696},\n  abstract = {This paper investigates the problem of source location from a single sensor using direct and indirect signals in a maritime context. To localize the emitters, we propose to exploit time difference of arrival (TDOA) and angle of arrival (AOA) measurements. The proposed approach uses any a priori knowledge of the reflectors positions. However, in practice, it is necessary to solve an assignment problem. It consists in associating each pair of TDOA-AOA measurement to a given reflector, this pair being already associated to a given emitter. In order to show the potential of our approach, numerical results using simulated and real data are proposed.},\n  keywords = {direction-of-arrival estimation;marine navigation;marine radar;numerical analysis;radar emitters localization;time difference of arrival measurements;angle of arrival measurements;TDOA-AOA measurements;source location;single sensor;maritime context;Receivers;Signal processing algorithms;Context;Position measurement;Europe;Signal processing;Estimation;Source localization;single sensor;multipath signals;TDOA;AOA;assignment problem;data association},\n  doi = {10.1109/EUSIPCO.2016.7760337},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256183.pdf},\n}\n\n
This paper investigates the problem of source localization from a single sensor using direct and indirect signals in a maritime context. To localize the emitters, we propose to exploit time difference of arrival (TDOA) and angle of arrival (AOA) measurements. The proposed approach uses a priori knowledge of the reflectors' positions. In practice, however, it is necessary to solve an assignment problem: each TDOA-AOA measurement pair, already associated with a given emitter, must be associated with a given reflector. To show the potential of our approach, numerical results using simulated and real data are presented.
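With a known planar reflector, the single-sensor geometry even admits a closed form: mirroring the sensor across the reflector turns the indirect path into a straight line, so one AOA combined with one direct/indirect TDOA pins down the range. The wall placement and positions below are invented for illustration, and this sketch ignores measurement noise and the assignment problem discussed above.

import numpy as np

c = 3e8
s = np.array([50.0, 0.0])              # sensor
s_img = np.array([-50.0, 0.0])         # sensor mirrored across reflector plane x = 0

def locate(theta, tdoa):
    """Emitter position from direct-path AOA theta and (indirect - direct) TDOA.
    With e = s + r u, ||e - s_img|| = r + c*tdoa gives r in closed form."""
    u = np.array([np.cos(theta), np.sin(theta)])
    d = s - s_img
    r = (d @ d - (c * tdoa) ** 2) / (2.0 * (c * tdoa - u @ d))
    return s + r * u

# Forward-simulate a ground-truth emitter, then invert:
e = np.array([400.0, 300.0])
u = (e - s) / np.linalg.norm(e - s)
tdoa = (np.linalg.norm(e - s_img) - np.linalg.norm(e - s)) / c
print(locate(np.arctan2(u[1], u[0]), tdoa))   # ~ [400, 300]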
Modified tone reservation for PAPR reduction in OFDM systems. Diallo, M. L.; Chafii, M.; Palicot, J.; and Bader, F. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 697-701, Aug 2016.
@InProceedings{7760338,\n  author = {M. L. Diallo and M. Chafii and J. Palicot and F. Bader},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Modified tone reservation for PAPR reduction in OFDM systems},\n  year = {2016},\n  pages = {697-701},\n  abstract = {One of the main drawbacks of orthogonal frequency division multiplex modulation is its high peak-to-average power ratio (PAPR) which can induce poor power efficiency at high power amplifier. Tone reservation (TR) is the most popular PAPR mitigation technique that uses a set of reserved tones to design peak cancelling signal for PAPR reduction. Finding an effective peak cancelling for PAPR reduction in the time domain by using only a small number of reserved tones, is not straightforward. Therefore, we are led to a trade-off between computational complexity and PAPR reduction. The TR method based on the gradient projection algorithm gives the best compromise. In this paper, we propose to modify the classical TR structure. The new proposed method achieves an improvement up to 1.2 dB in terms of PAPR performance without increasing the complexity. The effectiveness of this solution is confirmed through theoretical analysis and simulation results.},\n  keywords = {computational complexity;gradient methods;OFDM modulation;power amplifiers;time-domain analysis;modified tone reservation;PAPR reduction;OFDM systems;orthogonal frequency division multiplex modulation;peak-to-average power ratio;power efficiency;power amplifier;PAPR mitigation;peak cancelling signal design;time domain;computational complexity;TR method;gradient projection algorithm;Peak to average power ratio;Frequency modulation;Complexity theory;Matrix decomposition;Optimization;Frequency-domain analysis;OFDM;PAPR;Tone Reservation;CCDF},\n  doi = {10.1109/EUSIPCO.2016.7760338},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256238.pdf},\n}\n\n
One of the main drawbacks of orthogonal frequency division multiplexing is its high peak-to-average power ratio (PAPR), which can induce poor power efficiency at the high-power amplifier. Tone reservation (TR) is the most popular PAPR mitigation technique; it uses a set of reserved tones to design a peak-cancelling signal for PAPR reduction. Finding an effective peak-cancelling signal in the time domain using only a small number of reserved tones is not straightforward, so we are led to a trade-off between computational complexity and PAPR reduction. The TR method based on the gradient projection algorithm gives the best compromise. In this paper, we propose to modify the classical TR structure. The new method achieves an improvement of up to 1.2 dB in PAPR performance without increasing the complexity. The effectiveness of this solution is confirmed through theoretical analysis and simulation results.
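As a baseline for what a TR structure does, here is the classical iterative-clipping variant: clip the time-domain signal and keep only the part of the clipping noise that falls on the reserved tones, so data subcarriers are never disturbed. The gradient-projection method and the paper's modification are not reproduced; the tone count and clip ratio below are arbitrary choices of ours.

import numpy as np

def tr_clip(X, reserved, clip_ratio=1.5, iters=10):
    """Iterative clipping TR: project clipping noise onto reserved tones only."""
    N = len(X)
    mask = np.zeros(N, dtype=bool); mask[reserved] = True
    C = np.zeros(N, dtype=complex)             # correction on reserved tones
    for _ in range(iters):
        x = np.fft.ifft(X + C) * np.sqrt(N)
        a = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
        e = np.zeros(N, dtype=complex)
        over = np.abs(x) > a
        e[over] = x[over] - a * x[over] / np.abs(x[over])   # clipping noise
        E = np.fft.fft(e) / np.sqrt(N)
        C[mask] -= E[mask]                     # keep only the reserved bins
    return X + C

N = 256
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)
reserved = rng.choice(N, 26, replace=False)
X[reserved] = 0.0                              # data-free reserved tones
papr = lambda S: np.max(np.abs(np.fft.ifft(S)) ** 2) / np.mean(np.abs(np.fft.ifft(S)) ** 2)
print(papr(X), papr(tr_clip(X, reserved)))     # PAPR before vs after (typically lower)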
Maximum likelihood estimation of a low-order building model. Nabil, T.; Moulines, E.; Roueff, F.; Jicquel, J.; and Girard, A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 702-707, Aug 2016.
@InProceedings{7760339,\n  author = {T. Nabil and E. Moulines and F. Roueff and J. Jicquel and A. Girard},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Maximum likelihood estimation of a low-order building model},\n  year = {2016},\n  pages = {702-707},\n  abstract = {The aim of this paper is to investigate the accuracy of the estimates learned with an open loop model of a building whereas the data is actually collected in closed loop, which corresponds to the true exploitation of buildings. We propose a simple model based on an equivalent RC network whose parameters are physically interpretable. We also describe the maximum likelihood estimation of these parameters by the EM algorithm, and derive their statistical properties. The numerical experiments clearly show the potential of the method, in terms of accuracy and robustness. We emphasize the fact that the estimations are linked to the generating process for the observations, which includes the command system. For instance, the features of the building are correctly estimated if there is a significant gap between the heating and cooling setpoint.},\n  keywords = {building management systems;maximum likelihood estimation;open loop systems;maximum likelihood estimation;low-order building model;open loop model;RC network;EM algorithm;command system;cooling setpoint;heating setpoint;Buildings;Computational modeling;Signal processing algorithms;Estimation;Mathematical model;Europe;Zirconium},\n  doi = {10.1109/EUSIPCO.2016.7760339},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256270.pdf},\n}\n\n
\n
\n\n\n
\n The aim of this paper is to investigate the accuracy of the estimates learned with an open-loop model of a building when the data is actually collected in closed loop, which corresponds to how buildings are truly operated. We propose a simple model based on an equivalent RC network whose parameters are physically interpretable. We also describe the maximum likelihood estimation of these parameters by the EM algorithm, and derive their statistical properties. The numerical experiments clearly show the potential of the method in terms of accuracy and robustness. We emphasize that the estimates are linked to the process generating the observations, which includes the command system. For instance, the features of the building are correctly estimated if there is a significant gap between the heating and cooling setpoints.\n
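To make the modelling concrete, here is a minimal sketch of a single-node equivalent RC model and the Gaussian log-likelihood that a maximum-likelihood (or EM) routine would maximize; the one-resistor topology and all parameter values are illustrative assumptions, not the paper's network:

import numpy as np

R, C, dt = 5e-3, 1e7, 900.0            # K/W, J/K, s (15-min sampling; assumed values)
a = np.exp(-dt / (R * C))              # exact discretization of the single RC pole
rng = np.random.default_rng(1)

T_out = 5.0 + 3.0 * np.sin(2*np.pi*np.arange(500)/96)    # outdoor temperature
Q = 2e3 * (rng.random(500) > 0.5)                        # heating power, on/off (W)
T = np.empty(500); T[0] = 20.0
for k in range(499):                   # C dT/dt = (T_out - T)/R + Q, discretized
    T[k+1] = a*T[k] + (1-a)*(T_out[k] + R*Q[k]) + 0.02*rng.standard_normal()

# Gaussian log-likelihood of a candidate (R, C) given the data; an EM or
# direct ML routine would maximize this over the physical parameters.
def loglik(R_, C_):
    a_ = np.exp(-dt / (R_ * C_))
    pred = a_*T[:-1] + (1-a_)*(T_out[:-1] + R_*Q[:-1])
    r = T[1:] - pred
    s2 = r.var()
    return -0.5*len(r)*np.log(2*np.pi*s2) - 0.5*r @ r / s2

print(loglik(R, C), loglik(2*R, C))    # the true parameters should score higher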
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Brain and music: Music genre classification using brain signals.\n \n \n \n \n\n\n \n Ghaemmaghami, P.; and Sebe, N.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 708-712, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"BrainPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760340,\n  author = {P. Ghaemmaghami and N. Sebe},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Brain and music: Music genre classification using brain signals},\n  year = {2016},\n  pages = {708-712},\n  abstract = {Genre classification can be considered as an essential part of music and movie recommender systems. So far, various automatic music genre classification methods have been proposed based on various audio features. However, such content-centric features are not capable of capturing the personal preferences of the listener. In this study, we provide preliminary experimental evidence for the possibility of the music genre classification based on the brain recorded signals of individuals. The brain decoding paradigm is employed to classify recorded brain signals into two broad genre classes: Pop and Rock. We compare the performance of our proposed paradigm on two neuroimaging datasets that contains the electroencephalographic (EEG) and the magnetoencephalographic (MEG) data of subjects who watched 40 music video clips. Our results indicate that the genre of the music clips can be retrieved significantly over the chancelevel using the brain signals. Our study provides a primary step towards user-centric music content retrieval by exploiting brain signals.},\n  keywords = {electroencephalography;magnetoencephalography;medical signal processing;music;signal classification;user-centric music content retrieval;music video clip;MEG data;magnetoencephalographic data;EEG data;electroencephalographic data;neuroimaging dataset;brain signal classification;brain decoding paradigm;content-centric feature;music genre classification method;Electroencephalography;Feature extraction;Multiple signal classification;Decoding;Signal processing;Motion pictures;Sensors;Brain decoding;music genre classification;multimedia content retrieval;brain signal processing;EEG;MEG},\n  doi = {10.1109/EUSIPCO.2016.7760340},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256329.pdf},\n}\n\n
\n
\n\n\n
\n Genre classification can be considered an essential part of music and movie recommender systems. So far, various automatic music genre classification methods have been proposed based on various audio features. However, such content-centric features are not capable of capturing the personal preferences of the listener. In this study, we provide preliminary experimental evidence for the possibility of music genre classification based on the recorded brain signals of individuals. The brain decoding paradigm is employed to classify recorded brain signals into two broad genre classes: Pop and Rock. We compare the performance of our proposed paradigm on two neuroimaging datasets that contain the electroencephalographic (EEG) and magnetoencephalographic (MEG) data of subjects who watched 40 music video clips. Our results indicate that the genre of the music clips can be retrieved significantly above the chance level using the brain signals. Our study provides a first step towards user-centric music content retrieval by exploiting brain signals.\n
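As a hypothetical stand-in for such a pipeline (synthetic data, band-power features and a linear classifier; none of this is the authors' exact feature set or decoder), a short scikit-learn sketch:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
fs, n_trials, n_ch, n_samp = 128, 80, 32, 512            # assumed recording geometry
eeg = rng.standard_normal((n_trials, n_ch, n_samp))      # synthetic "EEG"
y = rng.integers(0, 2, n_trials)                         # 0 = Pop, 1 = Rock
eeg[y == 1] += 0.1 * np.sin(2*np.pi*10*np.arange(n_samp)/fs)  # planted 10 Hz cue

def bandpower(x, lo, hi):
    f = np.fft.rfftfreq(x.shape[-1], 1/fs)
    P = np.abs(np.fft.rfft(x))**2
    return P[..., (f >= lo) & (f < hi)].mean(-1)

bands = [(1, 4), (4, 8), (8, 13), (13, 30)]              # delta/theta/alpha/beta
X = np.concatenate([np.log(bandpower(eeg, lo, hi)) for lo, hi in bands], axis=1)
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("chance = 0.5, CV accuracy = %.2f" % acc.mean())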
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Joint I/Q mixer and filter imbalance compensation and channel equalization with novel preamble design.\n \n \n \n \n\n\n \n Lakshmanan, R.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 713-717, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760341,\n  author = {R. Lakshmanan},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Joint I/Q mixer and filter imbalance compensation and channel equalization with novel preamble design},\n  year = {2016},\n  pages = {713-717},\n  abstract = {In this paper, we consider the problem of analog signal imbalances between I- and Q-branches of the modulator/demodulator in OFDM systems. A novel compensation approach is proposed to solve this problem using baseband processing techniques, allowing for flexible RF design for high speed wireless systems. In particular, the proposed solution models transmitter and receiver impairments due to mixer and filter imbalances observed through channel related distortions. The solution is obtained by joint estimation and compensation of I/Q imbalance caused by the above impairments. The problem is studied for an OFDM wireless system and an analytical solution is presented in this paper. A novel preamble design is proposed to enable reliable estimation of the solution. Performance evaluations using MATLAB simulations are presented to confirm the effectiveness of the proposed method under various scenarios.},\n  keywords = {equalisers;filtering theory;mixers (circuits);modems;OFDM modulation;radio networks;joint I-Q mixer;filter imbalance compensation;channel equalization;preamble design;analog signal imbalances;modulator-demodulator;baseband processing technique;flexible RF design;high-speed wireless systems;transmitter impairment;receiver impairment;channel-related distortion;OFDM wireless system;Matlab simulation;OFDM;Mixers;Low-pass filters;Receivers;Transmitters;Estimation;Modulation},\n  doi = {10.1109/EUSIPCO.2016.7760341},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256416.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we consider the problem of analog signal imbalances between the I- and Q-branches of the modulator/demodulator in OFDM systems. A novel compensation approach is proposed to solve this problem using baseband processing techniques, allowing for flexible RF design in high-speed wireless systems. In particular, the proposed solution models transmitter and receiver impairments due to mixer and filter imbalances observed through channel-related distortions. The solution is obtained by joint estimation and compensation of the I/Q imbalance caused by the above impairments. The problem is studied for an OFDM wireless system and an analytical solution is presented. A novel preamble design is proposed to enable reliable estimation of the solution. Performance evaluations using MATLAB simulations are presented to confirm the effectiveness of the proposed method under various scenarios.\n
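A common way to write such an impairment is the widely-linear model y = g1*x + g2*conj(x); assuming that model (the paper's joint mixer/filter/channel model is richer), pilot-based estimation and compensation look roughly like this:

import numpy as np

rng = np.random.default_rng(3)
g_amp, g_phi = 1.05, np.deg2rad(3)                 # amplitude/phase imbalance (assumed)
g1 = 0.5*(1 + g_amp*np.exp(1j*g_phi))
g2 = 0.5*(1 - g_amp*np.exp(-1j*g_phi))

x = (rng.standard_normal(1000) + 1j*rng.standard_normal(1000)) / np.sqrt(2)  # pilots
y = g1*x + g2*np.conj(x) + 0.01*(rng.standard_normal(1000)
                                 + 1j*rng.standard_normal(1000))

# Least-squares estimate of (g1, g2) from the known pilots, then inversion of
# the widely-linear mapping to recover the desired signal.
A = np.column_stack([x, np.conj(x)])
g1_hat, g2_hat = np.linalg.lstsq(A, y, rcond=None)[0]
x_hat = (np.conj(g1_hat)*y - g2_hat*np.conj(y)) / (np.abs(g1_hat)**2 - np.abs(g2_hat)**2)
q = 10*np.log10(np.mean(np.abs(x)**2) / np.mean(np.abs(x_hat - x)**2))
print("signal-to-residual-error after compensation ~ %.1f dB" % q)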
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cross-correlation based under-modelled multichannel blind acoustic system identification with sparsity regularization.\n \n \n \n \n\n\n \n Xue, W.; Brookes, M.; and Naylor, P. A.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 718-722, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Cross-correlationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760342,\n  author = {W. Xue and M. Brookes and P. A. Naylor},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Cross-correlation based under-modelled multichannel blind acoustic system identification with sparsity regularization},\n  year = {2016},\n  pages = {718-722},\n  abstract = {In room acoustics, the under-modelled blind system identification (BSI) problem arises when the identified room impulse response (RIR) is shorter than the real one. Conventional BSI methods can perform poorly under these circumstances. In this paper, we propose an algorithm for multichannel BSI in under-modelled situations. Instead of minimizing the cross-relation error, a new optimization criterion is formulated, which is based on maximizing a cross-correlation criterion. We show that under the statistical model of reverberant signals, the cross-correlation based criterion helps to reduce the adverse effects of system under-modelling on BSI. Moreover, the optimization problem is regularized by including a sparsity term in the cost function. The optimization problem is finally solved based on the split Bregman method in the least-mean-square (LMS) framework. Experimental results show that the proposed method can perform effectively in the under-modelled situations in which conventional methods fail.},\n  keywords = {least mean squares methods;microphones;optimisation;transient response;cross-correlation based under-modelled multichannel blind acoustic system identification;sparsity regularization;room impulse response;BSI methods;multichannel BSI;cross-correlation criterion;reverberant signals;cross-correlation based criterion;optimization problem;split Bregman method;least-mean-square framework;Optimization;Microphones;Correlation;Signal processing algorithms;Reverberation;Europe;Signal processing},\n  doi = {10.1109/EUSIPCO.2016.7760342},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251262.pdf},\n}\n\n
\n
\n\n\n
\n In room acoustics, the under-modelled blind system identification (BSI) problem arises when the identified room impulse response (RIR) is shorter than the real one. Conventional BSI methods can perform poorly under these circumstances. In this paper, we propose an algorithm for multichannel BSI in under-modelled situations. Instead of minimizing the cross-relation error, a new optimization criterion is formulated, which is based on maximizing a cross-correlation criterion. We show that under the statistical model of reverberant signals, the cross-correlation based criterion helps to reduce the adverse effects of system under-modelling on BSI. Moreover, the optimization problem is regularized by including a sparsity term in the cost function. The optimization problem is finally solved based on the split Bregman method in the least-mean-square (LMS) framework. Experimental results show that the proposed method can perform effectively in the under-modelled situations in which conventional methods fail.\n
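The cross-relation identity that conventional BSI minimizes, and that the paper replaces with a cross-correlation criterion plus a sparsity term, is easy to check numerically (toy exponentially-decaying RIRs; the split-Bregman/LMS solver itself is not reproduced here):

import numpy as np

rng = np.random.default_rng(4)
s = rng.standard_normal(4000)                                  # source signal
h1 = rng.standard_normal(64) * np.exp(-0.1*np.arange(64))      # toy RIRs
h2 = rng.standard_normal(64) * np.exp(-0.1*np.arange(64))
x1, x2 = np.convolve(s, h1), np.convolve(s, h2)                # two microphones

# Since x1 = s*h1 and x2 = s*h2, it holds that x1*h2 = x2*h1 at the true RIRs.
def cr_error(g1, g2):
    e = np.convolve(x1, g2) - np.convolve(x2, g1)
    return e @ e / (np.linalg.norm(g1)**2 + np.linalg.norm(g2)**2)

print(cr_error(h1, h2))                                        # ~0 at the true channels
print(cr_error(rng.standard_normal(64), rng.standard_normal(64)))  # large otherwise

In the under-modelled case the candidate filters are shorter than the true RIRs, so this error can no longer reach zero; that is the failure mode the paper's cross-correlation criterion is designed to mitigate.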
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Variational Bayesian image reconstruction with an uncertainty model for measurement localization.\n \n \n \n \n\n\n \n Sroubek, F.; Soukup, J.; and Zitová, B.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 723-727, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"VariationalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760343,\n  author = {F. Sroubek and J. Soukup and B. Zitová},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Variational Bayesian image reconstruction with an uncertainty model for measurement localization},\n  year = {2016},\n  pages = {723-727},\n  abstract = {We propose a general data acquisition model with volatile random displacement of measured samples. Discrepancies between recorded and true positions of the original data is due to the nature of measured data or the acquisition device itself. A reconstruction method based on the Variational Bayesian inference is proposed, which estimates the original data from samples acquired with the acquisition model, and its relation to Jensen's inequality is discussed. A model variant of 2D image reconstruction is analyzed in detail. Further, we outline a relation between the proposed method and the classic deconvolution problem, and illustrate superiority of the Variational Bayesian approach in the case of small number of samples.},\n  keywords = {Bayes methods;data acquisition;deconvolution;image reconstruction;inference mechanisms;variational techniques;uncertainty model;measurement localization;data acquisition model;variational Bayesian inference image reconstruction;Jensen inequality;2D image reconstruction model variant;classic deconvolution problem;Two dimensional displays;Image reconstruction;Mathematical model;Position measurement;Uncertainty;Bayes methods;Displacement measurement},\n  doi = {10.1109/EUSIPCO.2016.7760343},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255930.pdf},\n}\n\n
\n
\n\n\n
\n We propose a general data acquisition model with volatile random displacement of the measured samples. Discrepancies between the recorded and true positions of the original data are due to the nature of the measured data or the acquisition device itself. A reconstruction method based on Variational Bayesian inference is proposed, which estimates the original data from samples acquired with the acquisition model, and its relation to Jensen's inequality is discussed. A model variant for 2D image reconstruction is analyzed in detail. Further, we outline a relation between the proposed method and the classic deconvolution problem, and illustrate the superiority of the Variational Bayesian approach in the case of a small number of samples.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Two distributed algorithms for the deconvolution of large radio-interferometric multispectral images.\n \n \n \n \n\n\n \n Meillier, C.; Bianchi, P.; and Hachem, W.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 728-732, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"TwoPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760344,\n  author = {C. Meillier and P. Bianchi and W. Hachem},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Two distributed algorithms for the deconvolution of large radio-interferometric multispectral images},\n  year = {2016},\n  pages = {728-732},\n  abstract = {We address in this paper the deconvolution issue for radio-interferometric multispectral images. Whereas this problem has been widely explored in the recent literature for single images, a few algorithms are able to reconstruct multispectral images (three-dimensional images) [1], [2]. We propose in this paper two new distributed algorithms based on the optimization methods ADMM and projected gradient (PG) for the reconstruction of radio-interferometric multispectral images. We present an original distributed architecture and a comparison of their performance on a quasi-real data cube.},\n  keywords = {deconvolution;distributed algorithms;gradient methods;image reconstruction;optimization method;alternating direction method of multipliers;projected gradient;image reconstruction;large radio-interferometric multispectral images;image deconvolution;distributed algorithms;ADMM;Signal processing algorithms;Deconvolution;Clustering algorithms;Europe;Signal processing;Minimization;Radio frequency;ADMM;deconvolution;distributed optimization;projected gradient;radio-interferometry;multispectral images},\n  doi = {10.1109/EUSIPCO.2016.7760344},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255164.pdf},\n}\n\n
\n
\n\n\n
\n We address in this paper the deconvolution issue for radio-interferometric multispectral images. Whereas this problem has been widely explored in the recent literature for single images, only a few algorithms are able to reconstruct multispectral (three-dimensional) images [1], [2]. We propose two new distributed algorithms based on the optimization methods ADMM and projected gradient (PG) for the reconstruction of radio-interferometric multispectral images. We present an original distributed architecture and compare the performance of the two algorithms on a quasi-real data cube.\n
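For intuition on the ADMM variant, here is a minimal 1-D sketch of ADMM for l1-regularized deconvolution with a circular blur; the PSF, the regularization weights and the 1-D setting are all assumptions, and the paper's contribution, distributing such updates across spectral bands, is not shown:

import numpy as np

rng = np.random.default_rng(5)
n = 256
x_true = np.zeros(n); x_true[rng.choice(n, 8)] = rng.random(8) + 0.5   # sparse "sky"
h = np.exp(-0.5*(np.arange(-8, 9)/2.0)**2); h /= h.sum()               # Gaussian PSF
H = np.fft.fft(np.roll(np.pad(h, (0, n-17)), -8))                      # circular blur
y = np.real(np.fft.ifft(H*np.fft.fft(x_true))) + 0.01*rng.standard_normal(n)

lam, rho = 0.02, 1.0
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
Y = np.fft.fft(y)
for _ in range(100):
    # x-update: (H^H H + rho I) x = H^H y + rho (z - u), diagonal in Fourier
    x = np.real(np.fft.ifft((np.conj(H)*Y + rho*np.fft.fft(z - u))
                            / (np.abs(H)**2 + rho)))
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam/rho, 0)   # soft threshold
    u += x - z                                                    # dual ascent
print("reconstruction SNR: %.1f dB"
      % (10*np.log10(np.sum(x_true**2) / np.sum((z - x_true)**2))))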
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bandwidth dependence of the ranging error variance in dense multipath.\n \n \n \n \n\n\n \n Hinteregger, S.; Leitinger, E.; Meissner, P.; Kulmer, J.; and Witrisal, K.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 733-737, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"BandwidthPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760345,\n  author = {S. Hinteregger and E. Leitinger and P. Meissner and J. Kulmer and K. Witrisal},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Bandwidth dependence of the ranging error variance in dense multipath},\n  year = {2016},\n  pages = {733-737},\n  abstract = {It is well known that the time-of-flight ranging performance is heavy influenced by multipath propagation within a radio environment. This holds in particular in dense multipath channels as encountered in indoor scenarios. The signal bandwidth has a tremendous influence on this effect, as it determines whether the time resolution is sufficient to resolve the useful line-of-sight (LOS) signal component from interfering multipath. This paper employs a geometry-based stochastic channel model to analyze and characterize the ranging error variance as a function of the bandwidth, covering the narrowband up to the UWB regimes. The Cramér-Rao lower bound (CRLB) is derived for this purpose. It quantifies the impact of bandwidth, SNR, and parameters of the multipath radio channel and can thus be used as an effective and accurate channel model (e.g.) for the cross-layer optimization of positioning systems. Experimental data are analyzed to validate our theoretical results.},\n  keywords = {multipath channels;optimisation;wireless channels;cross-layer optimization;multipath radio channel;Cramer-Rao lower bound;geometry-based stochastic channel model;line-of-sight signal component;dense multipath channels;multipath propagation;ranging error variance;bandwidth dependence;Signal to noise ratio;Interference;Bandwidth;Fading channels;Distortion;Distance measurement;Gain},\n  doi = {10.1109/EUSIPCO.2016.7760345},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255748.pdf},\n}\n\n
\n
\n\n\n
\n It is well known that time-of-flight ranging performance is heavily influenced by multipath propagation within a radio environment. This holds in particular in dense multipath channels as encountered in indoor scenarios. The signal bandwidth has a tremendous influence on this effect, as it determines whether the time resolution is sufficient to resolve the useful line-of-sight (LOS) signal component from interfering multipath. This paper employs a geometry-based stochastic channel model to analyze and characterize the ranging error variance as a function of the bandwidth, covering the narrowband up to the UWB regimes. The Cramér-Rao lower bound (CRLB) is derived for this purpose. It quantifies the impact of bandwidth, SNR, and parameters of the multipath radio channel and can thus be used as an effective and accurate channel model, e.g. for the cross-layer optimization of positioning systems. Experimental data are analyzed to validate our theoretical results.\n
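For orientation, the classical AWGN time-of-flight CRLB, which the paper generalizes to dense multipath, already exhibits the bandwidth dependence; assuming a flat spectrum of bandwidth B (rms bandwidth B/sqrt(12)):

import numpy as np

# var(d_hat) >= c^2 / (8 * pi^2 * beta^2 * SNR), beta = rms signal bandwidth
c = 3e8
for bw_hz in (1e6, 10e6, 100e6, 1e9):
    beta = bw_hz / np.sqrt(12)          # flat spectrum assumption
    for snr_db in (0, 10, 20):
        snr = 10**(snr_db/10)
        std_m = c / np.sqrt(8 * np.pi**2 * beta**2 * snr)
        print(f"BW {bw_hz/1e6:7.1f} MHz, SNR {snr_db:2d} dB: "
              f"ranging std >= {std_m:.3f} m")

The bound scales as 1/(beta*sqrt(SNR)), which is why moving from narrowband towards UWB improves ranging far more than raising the SNR.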
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Device-free localization of multiple targets.\n \n \n \n \n\n\n \n Nicoli, M.; Rampa, V.; Savazzi, S.; and Schiaroli, S.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 738-742, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Device-freePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760346,\n  author = {M. Nicoli and V. Rampa and S. Savazzi and S. Schiaroli},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Device-free localization of multiple targets},\n  year = {2016},\n  pages = {738-742},\n  abstract = {In this paper, we consider the problem of multi-target device-free localization with special focus on modeling and inference. The motion of multiple targets inside the area covered by a wireless network leaves a characteristic footprint on the radio-frequency (RF) field, and in turn affects both the average attenuation and the fluctuation of the received signal strength (RSS). A diffraction-based model is developed to describe the impact of multiple targets on the RSS field, i.e. the multi-body-induced shadowing. As a relevant case study, the model is tailored to predict the effects of two co-located targets on the RF signals. Three novel algorithms are proposed for on-line localization, exploiting both the average and the deviation of the body-induced RSS perturbation. The proposed techniques are compared and some preliminary results, based on experimental data collected in a representative indoor environment, are presented.},\n  keywords = {indoor environment;RSSI;signal processing;wireless channels;multiple targets;multitarget device-free localization;wireless network;characteristic footprint;radiofrequency field;average attenuation;received signal strength;diffraction-based model;multibody-induced shadowing;co-located targets;radiofrequency signals;on-line localization;body-induced RSS perturbation;representative indoor environment;Radio frequency;Attenuation;Computational modeling;Analytical models;Shadow mapping;Two dimensional displays;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760346},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255801.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we consider the problem of multi-target device-free localization with special focus on modeling and inference. The motion of multiple targets inside the area covered by a wireless network leaves a characteristic footprint on the radio-frequency (RF) field, and in turn affects both the average attenuation and the fluctuation of the received signal strength (RSS). A diffraction-based model is developed to describe the impact of multiple targets on the RSS field, i.e. the multi-body-induced shadowing. As a relevant case study, the model is tailored to predict the effects of two co-located targets on the RF signals. Three novel algorithms are proposed for on-line localization, exploiting both the average and the deviation of the body-induced RSS perturbation. The proposed techniques are compared and some preliminary results, based on experimental data collected in a representative indoor environment, are presented.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Exact analysis of weighted centroid localization.\n \n \n \n \n\n\n \n Giorgetti, A.; Magowe, K.; and Kandeepan, S.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 743-747, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"ExactPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760347,\n  author = {A. Giorgetti and K. Magowe and S. Kandeepan},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Exact analysis of weighted centroid localization},\n  year = {2016},\n  pages = {743-747},\n  abstract = {Source localization of primary users (PUs) is a geolocation spectrum awareness feature that can be very useful in enhancing the functionality of cognitive radios (CRs). When the cooperating CRs have limited information about the PU, weighted centroid localization (WCL) based on received signal strength (RSS) measurements represents an attractive low-complexity solution. In this paper, we propose a new analytical framework to calculate the exact performance of WCL in the presence of shadowing, based on results of the ratio of two quadratic forms in normal variables. In particular, we derive an exact expression for the root mean square error (RMSE) of the two-dimensional location estimate. Numerical results confirm that the derived framework is able to predict the performance of WCL capturing all the essential aspects of propagation as well as CR network spatial topology.},\n  keywords = {cognitive radio;mean square error methods;RSSI;weighted centroid localization exact analysis;primary users source localization;PU source localization;geolocation spectrum awareness feature;cognitive radio functionality enhancememt;CR functionality enhancement;WCL;received signal strength measurement;RSS measurement;quadratic forms;root mean square error;RMSE;two-dimensional location estimation;CR network spatial topology;Shadow mapping;Europe;Signal processing;Estimation;Root mean square;Simulation;Cognitive radio},\n  doi = {10.1109/EUSIPCO.2016.7760347},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256066.pdf},\n}\n\n
\n
\n\n\n
\n Source localization of primary users (PUs) is a geolocation spectrum awareness feature that can be very useful in enhancing the functionality of cognitive radios (CRs). When the cooperating CRs have limited information about the PU, weighted centroid localization (WCL) based on received signal strength (RSS) measurements represents an attractive low-complexity solution. In this paper, we propose a new analytical framework to calculate the exact performance of WCL in the presence of shadowing, based on results on the ratio of two quadratic forms in normal variables. In particular, we derive an exact expression for the root mean square error (RMSE) of the two-dimensional location estimate. Numerical results confirm that the derived framework is able to predict the performance of WCL, capturing all the essential aspects of propagation as well as the CR network's spatial topology.\n
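The WCL estimator itself is a one-liner, an RSS-weighted average of the node positions; below is a small Monte Carlo sketch (log-distance path loss with lognormal shadowing, all constants assumed) of the estimator whose RMSE the paper characterizes exactly:

import numpy as np

rng = np.random.default_rng(6)
nodes = rng.uniform(0, 100, (30, 2))                 # CR positions, 100 m x 100 m area
pu = np.array([40.0, 60.0])                          # true PU position
d = np.linalg.norm(nodes - pu, axis=1)
rss_dbm = -30 - 10*3.0*np.log10(d) + 4*rng.standard_normal(30)  # path loss + shadowing

w = 10**(rss_dbm/10)                                 # weights from linear-scale RSS
pu_hat = (w[:, None] * nodes).sum(0) / w.sum()       # weighted centroid
print("true:", pu, " estimate:", pu_hat.round(2),
      " error: %.2f m" % np.linalg.norm(pu_hat - pu))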
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the utilization of MIMO-OFDM channel sparsity for accurate positioning.\n \n \n \n \n\n\n \n Saloranta, J.; and Destino, G.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 748-752, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760348,\n  author = {J. Saloranta and G. Destino},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {On the utilization of MIMO-OFDM channel sparsity for accurate positioning},\n  year = {2016},\n  pages = {748-752},\n  abstract = {Recent results have revealed that MIMO channels at high carrier frequencies exhibit sparsity structure, i.e., a few dominant propagation paths. Also channel parameters, namely angular information and propagation delay can be modelled with the physical location of the transmitter, receiver and scatters. In this paper, we leverage these features into the development of a single base-station localization algorithm, and show that the location of an unknown device can be estimated with an accuracy below a meter based on pilot signalling with a OFDM transmission. The method relies on the utilization of the “Adaptive-LASSO” optimization method, in which an ℓ1-based minimization problem is solved by adapting the sparsifying matrix (dictionary) and the sparse vector jointly. Then the location of the device is estimated from the parameters of the sparsifying matrix. Finally, the positioning method is evaluated in different channel setting utilizing a ray-tracing channel model at 28GHz.},\n  keywords = {microwave propagation;microwave receivers;MIMO communication;OFDM modulation;radio receivers;radio transmitters;sparse matrices;telecommunication signalling;wireless channels;MIMO-OFDM channel sparsity utilization;sparsity structure;angular information delay;angular propagation delay;radio transmitter;radio receiver;scatter device;single base-station localization algorithm;pilot signalling;OFDM transmission;adaptive-LASSO optimization method;ℓ1-based minimization problem;sparse matrix;sparse vector;ray-tracing channel model;frequency 28 GHz;OFDM;Dictionaries;Antennas;Europe;Signal processing;Receivers;Signal processing algorithms},\n  doi = {10.1109/EUSIPCO.2016.7760348},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256127.pdf},\n}\n\n
\n
\n\n\n
\n Recent results have revealed that MIMO channels at high carrier frequencies exhibit a sparsity structure, i.e., a few dominant propagation paths. Moreover, the channel parameters, namely the angular information and the propagation delay, can be modelled from the physical locations of the transmitter, the receiver and the scatterers. In this paper, we leverage these features in the development of a single base-station localization algorithm, and show that the location of an unknown device can be estimated with sub-meter accuracy based on pilot signalling with an OFDM transmission. The method relies on the “Adaptive-LASSO” optimization method, in which an ℓ1-based minimization problem is solved by adapting the sparsifying matrix (dictionary) and the sparse vector jointly. The location of the device is then estimated from the parameters of the sparsifying matrix. Finally, the positioning method is evaluated in different channel settings using a ray-tracing channel model at 28 GHz.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multipath components tracking adapted to integrated IR-UWB receivers for improved indoor navigation.\n \n \n \n \n\n\n \n Maceraudi, J.; Dehmas, F.; Denis, B.; and Uguen, B.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 753-757, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"MultipathPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760349,\n  author = {J. Maceraudi and F. Dehmas and B. Denis and B. Uguen},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Multipath components tracking adapted to integrated IR-UWB receivers for improved indoor navigation},\n  year = {2016},\n  pages = {753-757},\n  abstract = {In this paper, we propose a solution adapted to Impulse Radio - Ultra Wideband (IR-UWB) integrated receivers devoted to indoor localization, making usage of multipath information under imposed time resolution and low complexity constraints. The Multipath Components (MPCs) can ideally be tracked by capturing their space-time correlation under mobility. The proposed architecture is composed of Multiple Hypothesis Kalman Filters (MHKFs) in parallel, each filter tracking one single MPC while detecting measurement outliers or maneuvering paths. A data association procedure enables to map the MHKFs' outputs onto delays associated with receiver's energy bins. The relative temporal variations of MPCs are used to collectively infer the missing direct path's information in case of Non-Line of Sight (NLoS). The corrected observations finally feed a conventional Extended Kalman Filter (EKF) that estimates mobile's position. We evaluate the performance of the proposed scheme through realistic simulations in terms of both MPC and mobile tracking.},\n  keywords = {indoor navigation;Kalman filters;radio receivers;radionavigation;multipath components tracking;IR-UWB receivers;indoor navigation;impulse radio-ultra wideband integrated receivers;indoor localization;space-time correlation;multiple hypothesis Kalman filters;extended Kalman filter;EKF;mobile tracking;Mobile communication;Delays;Receivers;Channel estimation;Technological innovation;Europe;Signal processing;Data Association;Impulse Radio;Indoor Localization;Integrated Devices;Mulipath Components;Pedestrian Navigation;Time of Arrival;Tracking Filter;Ultra Wideband},\n  doi = {10.1109/EUSIPCO.2016.7760349},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256246.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a solution adapted to Impulse Radio Ultra-Wideband (IR-UWB) integrated receivers devoted to indoor localization, making use of multipath information under imposed time resolution and low complexity constraints. The Multipath Components (MPCs) can ideally be tracked by capturing their space-time correlation under mobility. The proposed architecture is composed of Multiple Hypothesis Kalman Filters (MHKFs) running in parallel, each filter tracking one single MPC while detecting measurement outliers or maneuvering paths. A data association procedure maps the MHKFs' outputs onto delays associated with the receiver's energy bins. The relative temporal variations of the MPCs are used to collectively infer the missing direct path's information in Non-Line-of-Sight (NLoS) conditions. The corrected observations finally feed a conventional Extended Kalman Filter (EKF) that estimates the mobile's position. We evaluate the performance of the proposed scheme through realistic simulations in terms of both MPC and mobile tracking.\n
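In the spirit of the paper's filter bank, though much simplified, here is a single constant-velocity Kalman filter tracking one MPC delay from noisy ToA measurements; the drift model and all noise levels are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(7)
dt, n = 0.05, 200
tau = 30e-9 + np.cumsum(0.02e-9 + 0.01e-9*rng.standard_normal(n))  # drifting true delay
z = tau + 0.5e-9*rng.standard_normal(n)                            # noisy ToA measurements

F = np.array([[1, dt], [0, 1]])                     # state: [delay, delay-rate]
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-21, 1e-22]); R = (0.5e-9)**2        # assumed noise covariances
x = np.array([z[0], 0.0]); P = np.eye(2)*1e-18
est = []
for zk in z:
    x = F @ x; P = F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = (P @ H.T) / S                               # 2x1 Kalman gain
    x = x + (K * (zk - H @ x)).ravel()              # update
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])
print("raw RMSE %.2e s, filtered RMSE %.2e s"
      % (np.sqrt(np.mean((z - tau)**2)), np.sqrt(np.mean((np.array(est) - tau)**2))))

The paper runs one such filter per MPC, adds outlier/maneuver detection, and then associates the filtered delays with the receiver's energy bins; none of that machinery is shown here.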
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Location information driven formation control for swarm return-to-base application.\n \n \n \n \n\n\n \n Zhang, S.; Raulefs, R.; and Dammann, A.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 758-763, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"LocationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760350,\n  author = {S. Zhang and R. Raulefs and A. Dammann},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Location information driven formation control for swarm return-to-base application},\n  year = {2016},\n  pages = {758-763},\n  abstract = {On Mars there is no global positioning system available. In this paper we present an analysis of a relative localization system that acts as a moving swarm to estimate the location of its base. Our algorithm jointly processes two objectives. First to shape an optimized swarm structure to estimate the location of the base reliable and second to move the swarm together towards the base to return home. The estimate of the base location is used to return to it by controlled movements considering constraints such as the minimum distance between the swarm elements to avoid collisions. The performance comparison of our location information driven algorithm with goal approaching or flocking algorithm shows a robust behavior with a much higher efficiency.},\n  keywords = {position control;location information driven formation control;swarm return-to-base application;Mars;global positioning system;relative localization system;optimized swarm structure;swarm elements;flocking algorithm;robust behavior;Signal processing algorithms;Collision avoidance;Optimization;Mars;Distance measurement;Europe;Signal processing},\n  doi = {10.1109/EUSIPCO.2016.7760350},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256386.pdf},\n}\n\n
\n
\n\n\n
\n On Mars there is no global positioning system available. In this paper we present an analysis of a relative localization system that acts as a moving swarm to estimate the location of its base. Our algorithm jointly pursues two objectives: first, to shape an optimized swarm structure so as to estimate the location of the base reliably, and second, to move the swarm together towards the base to return home. The estimate of the base location is used to return to it by controlled movements, subject to constraints such as a minimum distance between the swarm elements to avoid collisions. A performance comparison of our location-information-driven algorithm with goal-approaching and flocking algorithms shows robust behavior with much higher efficiency.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the potential of full duplex performance in 5G ultra-dense small cell networks.\n \n \n \n \n\n\n \n Sarret, M. G.; Fleischer, M.; Berardinelli, G.; Mahmood, N. H.; Mogensen, P.; and Heinz, H.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 764-768, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760351,\n  author = {M. G. Sarret and M. Fleischer and G. Berardinelli and N. H. Mahmood and P. Mogensen and H. Heinz},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {On the potential of full duplex performance in 5G ultra-dense small cell networks},\n  year = {2016},\n  pages = {764-768},\n  abstract = {Full duplex allows a device to transmit and receive simultaneously in the same frequency band, theoretically doubling the throughput compared to traditional half duplex systems. However, several limitations restrict the promised full duplex gain: non-ideal self-interference cancellation, increased inter-cell interference and traffic constraints. In this paper, we first study the self-interference cancellation capabilities by using a real demonstrator. Results show that achieving ~110 dB of cancellation is already possible with the current available technology, thus providing the required level of isolation to build an operational full duplex node. Secondly, we investigate the inter-cell interference and traffic constraints impact on the full duplex performance in 5th generation systems. System level results show that both the traffic and the inter-cell interference can significantly reduce the potential gain of full duplex with respect to half duplex. However, for large traffic asymmetry, full duplex can boost the performance of the lightly loaded link.},\n  keywords = {5G mobile communication;adjacent channel interference;cellular radio;interference suppression;full duplex performance;5G ultradense small cell networks;half-duplex systems;nonideal self-interference cancellation;intercell interference;traffic constraint;self-interference cancellation capability;operational full duplex node;Interference cancellation;High definition video;Gain;5G mobile communication;Receiving antennas},\n  doi = {10.1109/EUSIPCO.2016.7760351},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570253168.pdf},\n}\n\n
\n
\n\n\n
\n Full duplex allows a device to transmit and receive simultaneously in the same frequency band, theoretically doubling the throughput compared to traditional half duplex systems. However, several limitations restrict the promised full duplex gain: non-ideal self-interference cancellation, increased inter-cell interference and traffic constraints. In this paper, we first study the self-interference cancellation capabilities by using a real demonstrator. Results show that achieving ~110 dB of cancellation is already possible with currently available technology, thus providing the required level of isolation to build an operational full duplex node. Secondly, we investigate the impact of inter-cell interference and traffic constraints on full duplex performance in 5th generation systems. System level results show that both the traffic and the inter-cell interference can significantly reduce the potential gain of full duplex with respect to half duplex. However, for large traffic asymmetry, full duplex can boost the performance of the lightly loaded link.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Mode selection in multi-user full-duplex systems considering inter-user interference.\n \n \n \n \n\n\n \n Lee, H.; Kim, D.; and Hong, D.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 769-772, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"ModePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760352,\n  author = {H. Lee and D. Kim and D. Hong},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Mode selection in multi-user full-duplex systems considering inter-user interference},\n  year = {2016},\n  pages = {769-772},\n  abstract = {This paper focuses on multi-user full-duplex (FD) systems. When multiple users share the same resources, using FD degrades the sum capacity compared to using half-duplex (HD) because of the inter-user interference (IUI) that arises between two FD users. This factor highlights the need for a good strategy for switching between FD and HD modes. We address this first by considering a two-user case and deriving the sum capacities. We then consider the general case of K users and compare the average sum capacities for all-FD mode, all-HD mode and the optimal mode case.},\n  keywords = {interference (signal);multiplexing;mode selection;multiuser full-duplex systems;inter-user interference;HD mode;FD mode;High definition video;Silicon;Interference;Uplink;Antennas;Downlink;Europe;Full-duplex;inter-user interference;mode selection},\n  doi = {10.1109/EUSIPCO.2016.7760352},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570254629.pdf},\n}\n\n
\n
\n\n\n
\n This paper focuses on multi-user full-duplex (FD) systems. When multiple users share the same resources, using FD degrades the sum capacity compared to using half-duplex (HD) because of the inter-user interference (IUI) that arises between two FD users. This factor highlights the need for a good strategy for switching between FD and HD modes. We address this first by considering a two-user case and deriving the sum capacities. We then consider the general case of K users and compare the average sum capacities for all-FD mode, all-HD mode and the optimal mode case.\n
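The switching logic can be illustrated with textbook rate expressions (Shannon capacities with interference treated as noise; the SNR, IUI and residual self-interference values below are assumed, not the paper's derived sum capacities):

import numpy as np

def sum_capacity(snr_db, iui_db, si_db):
    snr, iui, si = (10**(v/10) for v in (snr_db, iui_db, si_db))
    # FD: DL and UL share the band; DL suffers IUI, UL suffers residual SI
    c_fd = np.log2(1 + snr/(1 + iui)) + np.log2(1 + snr/(1 + si))
    # HD: DL and UL are interference-free but each gets half the resources
    c_hd = 0.5*np.log2(1 + snr) + 0.5*np.log2(1 + snr)
    return c_fd, c_hd

for iui_db in (-20, 0, 20):
    c_fd, c_hd = sum_capacity(snr_db=20, iui_db=iui_db, si_db=10)
    mode = "FD" if c_fd > c_hd else "HD"
    print(f"IUI {iui_db:+3d} dB: C_FD = {c_fd:5.2f}, C_HD = {c_hd:5.2f} -> choose {mode}")

As the IUI grows, the FD sum capacity collapses below the HD one, which is exactly the regime in which a mode-selection rule should fall back to half duplex.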
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust resource allocation for full-duplex cognitive radio systems.\n \n \n \n \n\n\n \n Sun, Y.; Ng, D. W. K.; Zlatanov, N.; and Schober, R.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 773-777, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760353,\n  author = {Y. Sun and D. W. K. Ng and N. Zlatanov and R. Schober},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Robust resource allocation for full-duplex cognitive radio systems},\n  year = {2016},\n  pages = {773-777},\n  abstract = {In this paper, we investigate resource allocation algorithm design for full-duplex (FD) cognitive radio systems. The secondary network employs a FD base station for serving multiple half-duplex downlink and uplink users simultaneously. We study the resource allocation design for minimizing the maximum interference leakage to primary users while providing quality of service for secondary users. The imperfectness of the channel state information of the primary users is taken into account for robust resource allocation algorithm design. The algorithm design is formulated as a non-convex optimization problem and solved optimally by applying semidefinite programming (SDP) relaxation. Simulation results not only show the significant reduction in interference leakage compared to baseline schemes, but also confirm the robustness of the proposed algorithm.},\n  keywords = {cognitive radio;concave programming;interference suppression;quality of service;relaxation theory;resource allocation;wireless channels;SDP relaxation;semidefinite programming relaxation;nonconvex optimization problem;channel state information;secondary user;quality of service;primary user;maximum interference leakage minimization;multiple half-duplex uplink user;multiple half-duplex downlink user;secondary network;FD base station;FD cognitive radio system;full-duplex cognitive radio system;robust resource allocation design;Resource management;Interference;Receivers;Optimization;Silicon;Algorithm design and analysis;Array signal processing},\n  doi = {10.1109/EUSIPCO.2016.7760353},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255434.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we investigate resource allocation algorithm design for full-duplex (FD) cognitive radio systems. The secondary network employs an FD base station for serving multiple half-duplex downlink and uplink users simultaneously. We study the resource allocation design for minimizing the maximum interference leakage to primary users while providing quality of service for secondary users. The imperfection of the primary users' channel state information is taken into account for robust resource allocation algorithm design. The algorithm design is formulated as a non-convex optimization problem and solved optimally by applying semidefinite programming (SDP) relaxation. Simulation results not only show a significant reduction in interference leakage compared to baseline schemes, but also confirm the robustness of the proposed algorithm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Uplink and downlink rate analysis of a full-duplex C-RAN with radio remote head association.\n \n \n \n \n\n\n \n Mohammadi, M.; Suraweera, H. A.; and Tellambura, C.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 778-782, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"UplinkPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760354,\n  author = {M. Mohammadi and H. A. Suraweera and C. Tellambura},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Uplink and downlink rate analysis of a full-duplex C-RAN with radio remote head association},\n  year = {2016},\n  pages = {778-782},\n  abstract = {We characterize the uplink (UL) and downlink (DL) rates of a full-duplex cloud radio access network (C-RAN) with all participate and single best remote radio head (RRH) association schemes. Specifically, multi-antenna equipped RRHs distributed according to a Poisson point process is assumed. The UL and DL sum rate of the single best RRH association scheme is maximized using receive and transmit beamformer designs at the UL and DL RRHs, respectively. In the case of the single best strategy, we study both optimum and sub-optimum schemes based on maximum ratio combining/maximal ratio transmission (MRC/MRT) and zero-forcing/MRT (ZF/MRT) processing. Numerical results show that significant performance improvements can be achieved by using the full-duplex mode as compared to the half-duplex mode. Moreover, the choice of the beamforming design and the RRH association scheme have a major influence on the achievable full-duplex gains.},\n  keywords = {antenna arrays;array signal processing;cloud computing;diversity reception;radio access networks;stochastic processes;beamforming design;full-duplex mode;ZF-MRT processing;zero-forcing-MRT processing;MRC;maximal ratio transmission;maximum ratio combining;receive beamformer designs;transmit beamformer design;single best RRH association scheme;Poisson point process;multiantenna system;full-duplex cloud radio access network;DL rate analysis;UL rate analysis;radio remote head association;full-duplex C-RAN;downlink rate analysis;uplink rate analysis;Interference;Lead;Array signal processing;Antennas;Rayleigh channels},\n  doi = {10.1109/EUSIPCO.2016.7760354},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255616.pdf},\n}\n\n
\n
\n\n\n
\n We characterize the uplink (UL) and downlink (DL) rates of a full-duplex cloud radio access network (C-RAN) with all-participate and single-best remote radio head (RRH) association schemes. Specifically, multi-antenna RRHs distributed according to a Poisson point process are assumed. The UL and DL sum rate of the single-best RRH association scheme is maximized using receive and transmit beamformer designs at the UL and DL RRHs, respectively. In the case of the single-best strategy, we study both optimum and sub-optimum schemes based on maximum ratio combining/maximal ratio transmission (MRC/MRT) and zero-forcing/MRT (ZF/MRT) processing. Numerical results show that significant performance improvements can be achieved by using the full-duplex mode as compared to the half-duplex mode. Moreover, the choice of the beamforming design and the RRH association scheme has a major influence on the achievable full-duplex gains.\n
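For reference, the two transmit beamformers named in the abstract in their textbook single-user form (MRT matches the served channel; the ZF flavour additionally nulls a victim channel); the channels here are synthetic and the paper's multi-RRH optimization is not reproduced:

import numpy as np

rng = np.random.default_rng(8)
M = 8                                               # transmit antennas (assumed)
h = (rng.standard_normal(M) + 1j*rng.standard_normal(M)) / np.sqrt(2)  # served user
g = (rng.standard_normal(M) + 1j*rng.standard_normal(M)) / np.sqrt(2)  # victim user

w_mrt = h.conj() / np.linalg.norm(h)                # MRT: match the served channel
# ZF flavour: project the MRT direction onto the nullspace of the victim channel
P = np.eye(M) - np.outer(g.conj(), g) / (g @ g.conj())
w_zf = P @ h.conj(); w_zf /= np.linalg.norm(w_zf)

for name, w in (("MRT", w_mrt), ("ZF ", w_zf)):
    print(name, "signal power %.2f, leakage power %.2e"
          % (np.abs(h @ w)**2, np.abs(g @ w)**2))

MRT maximizes the served user's power but leaks freely; ZF drives the leakage to (numerically) zero at the cost of a slightly lower signal power, the classic trade-off behind MRC/MRT versus ZF/MRT processing.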
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Digitally-controlled RF self-interference canceller for full-duplex radios.\n \n \n \n \n\n\n \n Tamminen, J.; Turunen, M.; Korpi, D.; Huusari, T.; Choi, Y.; Talwar, S.; and Valkama, M.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 783-787, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Digitally-controlledPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760355,\n  author = {J. Tamminen and M. Turunen and D. Korpi and T. Huusari and Y. Choi and S. Talwar and M. Valkama},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Digitally-controlled RF self-interference canceller for full-duplex radios},\n  year = {2016},\n  pages = {783-787},\n  abstract = {This paper addresses the self-interference (SI) cancellation in a full-duplex radio transceiver. In particular, we focus on shared-antenna based full-duplex transceivers where the self-interference coupling channel is always frequency-selective and can also be strongly time-varying depending on the antenna matching characteristics and reflections from the surroundings. A novel digitally-controlled RF self-interference canceller structure is described, being able to process the signals in a frequency-selective manner as well as track adaptively the time-varying SI features, stemming from the fast digital control loop. A complete demonstrator board is developed, reported and measured, incorporating both the RF processing and the digital control processing. Comprehensive RF measurements are then also carried out and reported at 2.4GHz ISM band, evidencing more than 40dBs of active RF cancellation gain up to 80MHz instantaneous waveform bandwidths. Furthermore, real-time self-adaptive tracking features are successfully demonstrated.},\n  keywords = {digital control;interference suppression;radio transceivers;time-varying channels;UHF antennas;UHF measurement;real-time self-adaptive tracking features;instantaneous waveform bandwidths;ISM band;RF measurements;digital control processing;RF processing;complete demonstrator board;antenna matching characteristics;time-varying channel;frequency-selective channel;self-interference coupling channel;shared-antenna based full-duplex transceivers;full-duplex radio transceiver;SI cancellation;digitally-controlled RF self-interference canceller;frequency 2.4 GHz;Radio frequency;Interference cancellation;Process control;Digital control;Baseband;Voltage control;Couplings;self-interference;active RF cancellation;adaptive filtering;digital control;wideband cancellation;self-adaptive systems;self-healing;RF measurements},\n  doi = {10.1109/EUSIPCO.2016.7760355},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255804.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses self-interference (SI) cancellation in a full-duplex radio transceiver. In particular, we focus on shared-antenna based full-duplex transceivers, where the self-interference coupling channel is always frequency-selective and can also be strongly time-varying depending on the antenna matching characteristics and reflections from the surroundings. A novel digitally-controlled RF self-interference canceller structure is described, which is able to process the signals in a frequency-selective manner as well as adaptively track the time-varying SI features, owing to its fast digital control loop. A complete demonstrator board incorporating both the RF processing and the digital control processing is developed and measured. Comprehensive RF measurements are carried out in the 2.4 GHz ISM band, evidencing more than 40 dB of active RF cancellation gain for instantaneous waveform bandwidths of up to 80 MHz. Furthermore, real-time self-adaptive tracking features are successfully demonstrated.\n
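The adaptive-tracking idea behind the digital control loop can be miniaturized as a complex LMS canceller on a toy SI coupling channel (tap values, step size and noise level are assumptions; the actual canceller operates on RF hardware, not baseband samples):

import numpy as np

rng = np.random.default_rng(9)
n, taps = 20000, 4
tx = (rng.standard_normal(n) + 1j*rng.standard_normal(n)) / np.sqrt(2)   # known TX
h_si = np.array([0.8, 0.3+0.2j, -0.1j, 0.05])       # toy SI coupling channel
rx = np.convolve(tx, h_si)[:n] + 0.001*(rng.standard_normal(n)
                                        + 1j*rng.standard_normal(n))

w = np.zeros(taps, complex); mu = 0.05              # canceller taps and LMS step
res = np.empty(n, complex)
for k in range(taps, n):
    x = tx[k-taps+1:k+1][::-1]                      # regressor, newest sample first
    e = rx[k] - w @ x                               # residual after cancellation
    w += mu * np.conj(x) * e                        # complex LMS tap update
    res[k] = e
gain = 10*np.log10(np.mean(np.abs(rx[n//2:])**2) / np.mean(np.abs(res[n//2:])**2))
print("active cancellation gain ~ %.1f dB" % gain)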
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interference cancellation architecture for full-duplex system with GFDM signaling.\n \n \n \n \n\n\n \n Chung, W.; Hong, D.; Wichman, R.; and Riihonen, T.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 788-792, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"InterferencePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760356,\n  author = {W. Chung and D. Hong and R. Wichman and T. Riihonen},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Interference cancellation architecture for full-duplex system with GFDM signaling},\n  year = {2016},\n  pages = {788-792},\n  abstract = {This paper concerns the design of in-band full-duplex transceivers that employ generalized frequency-division multiplexing (GFDM). The composite of these two timely concepts is a promising candidate technology for emerging 5G systems since the GFDM waveform is advantageous to flexible spectrum use whereas full-duplex operation can significantly improve spectral efficiency. The main technical challenge in full-duplex transceivers at large is to mitigate their inherent self-interference due to simultaneous transmission and reception. In the case of GFDM that is non-orthogonal by design, interference cancellation becomes even more challenging since the interfering signal is subject to intricate coupling between all subchannels. Thus, we first develop a sophisticated frequency-domain cancellation architecture for removing all the self-interference components. Furthermore, by exploiting the specific structure of the interference pattern, we further modify the scheme into one that allows flexible control and reduction of computational complexity. Finally, our simulation results illustrate the trade-off between cancellation performance and system complexity, giving insights into the implementation of interference cancellation when we aim at achieving both low error rate and low complexity.},\n  keywords = {5G mobile communication;frequency division multiplexing;interference suppression;OFDM modulation;radio spectrum management;radio transceivers;telecommunication signalling;wireless channels;5G systems;in-band full-duplex transceivers;GFDM signaling;generalized frequency-division multiplexing;full-duplex system;interference cancellation architecture;Interference cancellation;Transceivers;Frequency-domain analysis;Computational complexity;OFDM},\n  doi = {10.1109/EUSIPCO.2016.7760356},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256224.pdf},\n}\n\n
\n
\n\n\n
\n This paper concerns the design of in-band full-duplex transceivers that employ generalized frequency-division multiplexing (GFDM). The combination of these two timely concepts is a promising candidate technology for emerging 5G systems, since the GFDM waveform is advantageous for flexible spectrum use whereas full-duplex operation can significantly improve spectral efficiency. The main technical challenge in full-duplex transceivers at large is to mitigate their inherent self-interference due to simultaneous transmission and reception. In the case of GFDM, which is non-orthogonal by design, interference cancellation becomes even more challenging since the interfering signal is subject to intricate coupling between all subchannels. Thus, we first develop a frequency-domain cancellation architecture for removing all the self-interference components. By exploiting the specific structure of the interference pattern, we then modify the scheme into one that allows flexible control and reduction of the computational complexity. Finally, our simulation results illustrate the trade-off between cancellation performance and system complexity, giving insights into the implementation of interference cancellation when aiming at both low error rate and low complexity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Secrecy outage probability of wirelessly powered wiretap channels.\n \n \n \n \n\n\n \n Jiang, X.; Zhong, C.; Chen, X.; and Zhang, Z.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 793-797, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"SecrecyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760357,
  author = {X. Jiang and C. Zhong and X. Chen and Z. Zhang},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Secrecy outage probability of wirelessly powered wiretap channels},
  year = {2016},
  pages = {793-797},
  abstract = {This paper considers a wirelessly powered wiretap channel, where an energy-constrained information source, powered by a dedicated power beacon, communicates with a legitimate user in the presence of a passive eavesdropper. The source is assumed to have multiple antennas, while the other three nodes are equipped with a single antenna each. We consider a simple time-switching design in which power transfer and information transmission are separated in time, and investigate two popular transmission schemes, namely maximum ratio transmission (MRT) and transmit antenna selection (TAS). Closed-form expressions are derived for the achievable secrecy outage probability of both schemes. In addition, simple approximations are obtained in the high signal-to-noise ratio (SNR) regime. Our results demonstrate that the more channel state information (CSI) is available, the better the secrecy performance. For instance, with full CSI of the main channel, the system can achieve a substantial secrecy diversity gain; without the CSI of the main channel, no diversity gain can be attained. Finally, our theoretical claims are validated by numerical results.},
  keywords = {antenna arrays;diversity reception;probability;transmitting antennas;wireless channels;secrecy outage probability;wirelessly powered wiretap channels;dedicated power beacon;multiple antennas;single antenna;maximum ratio transmission;transmit antenna selection;MRT;TAS;channel state information;CSI;substantial secrecy diversity gain;Signal to noise ratio;Manganese;Transmitting antennas;Physical layer;Europe},
  doi = {10.1109/EUSIPCO.2016.7760357},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250721.pdf},
}
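For orientation, secrecy outage is the event that the instantaneous secrecy capacity, C_s = [log2(1 + SNR_B) - log2(1 + SNR_E)]^+, falls below a target rate R_s. A minimal Monte Carlo estimate for a single-antenna Rayleigh wiretap link (SNR values are illustrative; the paper derives closed forms for the multi-antenna MRT and TAS cases):

import numpy as np

rng = np.random.default_rng(0)
n = 10**6
snr_b, snr_e = 10.0, 1.0  # average SNRs of the main and eavesdropper links
R_s = 1.0                 # target secrecy rate in bit/s/Hz

# Rayleigh fading: channel power gains are exponentially distributed.
g_b = snr_b * rng.exponential(size=n)
g_e = snr_e * rng.exponential(size=n)
c_s = np.maximum(np.log2(1 + g_b) - np.log2(1 + g_e), 0.0)
print("secrecy outage probability ~=", np.mean(c_s < R_s))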
@InProceedings{7760358,
  author = {V. Nguyen and H. V. Nguyen and G. Kang and H. M. Kim and O. Shin},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Sum rate maximization for full duplex wireless-powered communication networks},
  year = {2016},
  pages = {798-802},
  abstract = {We consider a full-duplex multiuser multiple-input multiple-output system and study sum rate maximization in a wireless-powered communication network. We assume that the users of the uplink (UL) channel have no available power supply, and thus a harvest-then-transmit protocol is utilized. Specifically, the base station (BS) first simultaneously conveys energy to all UL users via energy beamforming and transmits information to all users in the downlink (DL) channel via information beamforming. The users in the UL channel then send their independent information to the BS using their harvested energy in the second phase. The utility function of the sum rate maximization problem is nonconvex, and thus the optimal solution is difficult to find in general. To solve this problem, we propose an iterative algorithm that obtains a suboptimal solution by solving a semidefinite program in each iteration. Simulation results demonstrate that the proposed design outperforms the conventional design.},
  keywords = {array signal processing;concave programming;iterative methods;MIMO communication;multi-access systems;protocols;wireless channels;full duplex wireless-powered communication network;duplex multiuser multiple input multiple output system;uplink channel;UL channel;harvest-then-transmit protocol;base station;BS;energy beamforming;downlink channel;DL channel;information beamforming;utility function;sum rate maximization;iterative algorithm;suboptimal solution;semidefinite program;nonconvex program;Receiving antennas;Downlink;Silicon;Energy exchange;Uplink;Transmitting antennas;Covariance matrices},
  doi = {10.1109/EUSIPCO.2016.7760358},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251378.pdf},
}
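Under harvest-then-transmit, a block of unit duration is split into a downlink energy phase of length tau and an uplink data phase of length 1 - tau; more harvesting time means more transmit power but less time to use it. A single-user sketch of this trade-off (eta, channel gains and noise level are illustrative assumptions; the paper optimizes beamformers for the multiuser MIMO case):

import numpy as np

def ul_rate(tau, P_b=1.0, eta=0.6, g_dl=1.0, g_ul=1.0, N0=1e-2):
    """Uplink rate of one user under harvest-then-transmit: harvest for a
    fraction tau of the block, spend the energy in the remaining 1 - tau."""
    E = eta * P_b * g_dl * tau        # energy harvested in phase 1
    p_ul = E / (1.0 - tau)            # transmit power in phase 2
    return (1.0 - tau) * np.log2(1.0 + p_ul * g_ul / N0)

taus = np.linspace(0.01, 0.99, 99)
rates = [ul_rate(t) for t in taus]
i = int(np.argmax(rates))
print(f"best split tau ~= {taus[i]:.2f}, rate ~= {rates[i]:.3f} bit/s/Hz")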
@InProceedings{7760359,
  author = {N. Nguyen and Y. Huang and T. Q. Duong and Z. Hadzi-Velkov and B. Canberk},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Secure wireless communications with relay selection and wireless powered transfer},
  year = {2016},
  pages = {803-807},
  abstract = {In this paper, we investigate the secrecy performance of an energy harvesting relay network, where a legitimate source communicates with a legitimate destination with the assistance of multiple trusted relays. In the considered system, the source and relays deploy the time-switching-based radio frequency energy harvesting technique to harvest energy from a multi-antenna beacon. Different antenna selection and relay selection schemes are applied to enhance the security of the system. Specifically, two relay selection schemes based on partial and full knowledge of channel state information are proposed. Exact closed-form expressions for the system's secrecy outage probability under these schemes are derived. A Monte Carlo simulation validates our analytical results.},
  keywords = {antenna arrays;energy harvesting;inductive power transmission;Monte Carlo methods;relay networks (telecommunication);telecommunication network reliability;telecommunication power management;telecommunication security;wireless channels;wireless communication security enhancement;relay selection;wireless powered transfer;energy harvesting relay network;multiple trusted relays;time-switching-based radio frequency energy harvesting technique;multiantenna beacon;antenna selection;channel state information;secrecy outage probability;Monte-Carlo simulation;Energy harvesting;Wireless communication;Antennas;Wireless sensor networks;Security;Relay networks (telecommunications)},
  doi = {10.1109/EUSIPCO.2016.7760359},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251971.pdf},
}
@InProceedings{7760360,
  author = {C. Psomas and I. Krikidis},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Blockage effects on joint information energy transfer in directional ad-hoc networks},
  year = {2016},
  pages = {808-812},
  abstract = {The impact of blockages on the performance of next-generation wireless networks has recently attracted a lot of attention. The existence of blockages can provide performance gains, as the aggregate interference at a receiver is reduced. Furthermore, the employment of directional antennas at the network's nodes can further boost performance through the power gains that antenna directionality produces. However, the impact of blockages on wireless power transfer has not been investigated to the same extent. In this paper, we consider a bipolar ad-hoc network where the nodes employ directional antennas and have simultaneous wireless information and power transfer capabilities. We study the effects of blockages and directionality on the energy harvested by a receiver and show that the performance gains from the existence of blockages can be adjusted in order to increase the average harvested energy.},
  keywords = {ad hoc networks;antennas;energy harvesting;next generation networks;blockage effects;energy transfer;directional ad-hoc networks;next generation wireless networks;antenna directionality;bipolar ad-hoc network;wireless information;directional antennas;power transfer capabilities;Receivers;Interference;Transmitters;Antennas;Energy harvesting;Ad hoc networks;Wireless communication;SWIPT;power splitting;blockages;sectorized antennas;stochastic geometry},
  doi = {10.1109/EUSIPCO.2016.7760360},
  issn = {2076-1465},
  month = {Aug},
}
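A rough stochastic-geometry sketch of the quantity studied here: the average energy harvested from a field of directional transmitters, where blockages remove a fraction of the links. The distance-independent blockage model and all parameter values below are simplifying assumptions, not the paper's setup:

import numpy as np

rng = np.random.default_rng(1)
lam, R = 0.1, 50.0                 # transmitter density and simulation radius
P, alpha, eta = 1.0, 3.0, 0.5      # tx power, path-loss exponent, rectifier eff.
p_blk = 0.3                        # probability that a link is blocked
G_main, G_side, frac = 10.0, 0.1, 0.25  # sector gains, P(receiver in main lobe)

def harvested_energy():
    n = rng.poisson(lam * np.pi * R**2)       # Poisson number of transmitters
    r = R * np.sqrt(rng.uniform(size=n))      # uniform distances over the disc
    gain = np.where(rng.uniform(size=n) < frac, G_main, G_side)
    blocked = rng.uniform(size=n) < p_blk     # independent blockage thinning
    rx = P * gain * np.maximum(r, 1.0) ** (-alpha)  # bounded path loss
    return eta * rx[~blocked].sum()

print("average harvested energy ~=",
      np.mean([harvested_energy() for _ in range(2000)]))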
@InProceedings{7760361,
  author = {Z. Li and H. H. Chen and Y. Gu and Y. Li and B. Vucetic},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Accumulate then forward: An opportunistic relaying protocol for wireless-powered cooperative communications},
  year = {2016},
  pages = {813-817},
  abstract = {This paper investigates a wireless-powered cooperative communication network (WPCCN) consisting of a source, a destination and a multi-antenna decode-and-forward relay. We consider the relay as a wireless-powered node that has no external power supply but is equipped with an energy harvesting (EH) unit and a rechargeable battery, such that it can harvest and accumulate energy from radio-frequency signals broadcast by the source. By fully incorporating the EH feature of the relay, we develop an opportunistic relaying protocol, termed accumulate-then-forward (ATF), for the considered WPCCN. We then adopt a discrete Markov chain to model the dynamic charging and discharging behavior of the relay battery. Based on this, we derive a closed-form expression for the exact outage probability of the proposed ATF protocol. Numerical results show that the ATF scheme can outperform direct transmission, especially when the amount of energy consumed by the relay for information forwarding is optimized.},
  keywords = {cooperative communication;decode and forward communication;Markov processes;probability;protocols;relay networks (telecommunication);opportunistic relaying protocol;wireless-powered cooperative communications;multiantenna decode-and-forward relay;wireless-powered node;energy harvesting unit;rechargeable battery;radio-frequency signals;accumulate-then-forward;discrete Markov chain;relay battery;outage probability;information forwarding;Relays;Energy harvesting;Batteries;Signal to noise ratio;Protocols;Decoding;Fading channels},
  doi = {10.1109/EUSIPCO.2016.7760361},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570258528.pdf},
}
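The relay battery in such a protocol is naturally modelled as a finite Markov chain over discretized energy levels, whose stationary distribution gives the probability that enough energy has accumulated to forward. A minimal sketch under strong simplifications (unit-energy harvests arriving with probability p_h and a fixed forwarding cost of T units; the paper's chain and outage expression are more detailed):

import numpy as np

L, T, p_h = 10, 3, 0.6  # battery levels, forwarding cost (units), P(harvest 1 unit)

# Transition matrix of the battery level: below T the relay only harvests;
# at or above T it spends T units to forward and may harvest one unit too.
P = np.zeros((L + 1, L + 1))
for b in range(L + 1):
    if b < T:
        P[b, min(b + 1, L)] += p_h
        P[b, b] += 1 - p_h
    else:
        P[b, b - T + 1] += p_h
        P[b, b - T] += 1 - p_h

# Stationary distribution: solve pi @ P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(L + 1), np.ones(L + 1)])
b_vec = np.append(np.zeros(L + 1), 1.0)
pi = np.linalg.lstsq(A, b_vec, rcond=None)[0]
print("P(relay has enough energy to forward) =", pi[T:].sum())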
@InProceedings{7760362,
  author = {G. Pan and H. Lei and Y. Deng and L. Fan and Y. Chen and Z. Ding},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On secrecy outage of MISO SWIPT systems in the presence of imperfect CSI},
  year = {2016},
  pages = {818-822},
  abstract = {In this work, a multiple-input single-output (MISO) simultaneous wireless information and power transfer (SWIPT) system is considered, comprising one base station (BS) equipped with multiple antennas, one desired single-antenna information receiver (IR) and N (N > 1) single-antenna energy-harvesting receivers (ERs). Since the information signal intended for the IR may be eavesdropped by the ERs if they are malicious, we investigate the secrecy performance of the target MISO SWIPT system when imperfect channel state information (CSI) is available and used for transmit antenna selection at the BS. Considering that each eavesdropping link experiences independent, not necessarily identically distributed Rayleigh fading, closed-form expressions for the exact and asymptotic secrecy outage probability are derived and verified by simulation results.},
  keywords = {antenna arrays;energy harvesting;probability;radio receivers;radiofrequency power transmission;Rayleigh channels;telecommunication network reliability;telecommunication power management;transmitting antennas;transmit antenna selection;eavesdropping link;identically distributed Rayleigh fading;asymptotic secrecy outage probability;imperfect channel state information;single-antenna energy-harvesting receiver;single-antenna information receiver;multiple antennas;base station;simultaneous wireless information and power transfer system;multiple-input single-output system;imperfect CSI;MISO SWIPT system secrecy outage;Erbium;Antennas;Receivers;MISO;Rayleigh channels;Simultaneous wireless information and power transfer;secrecy outage probability;multiple-input single-output;channel state information},
  doi = {10.1109/EUSIPCO.2016.7760362},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570249329.pdf},
}
@InProceedings{7760363,
  author = {M. Kuerbis and N. M. Balasubramanya and L. Lampe and A. Lampe},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On the use of channel models and channel estimation techniques for massive MIMO systems},
  year = {2016},
  pages = {823-827},
  abstract = {Massive multiple-input multiple-output (MIMO) is a key technology driving the 5G evolution. It relies on the use of a large number of antennas at the base station to improve network performance. The performance of massive MIMO systems is often limited by imperfect channel estimation because of pilot contamination. Recently, several channel estimation techniques have been proposed to minimize this performance degradation. However, the assessment of these techniques in the literature has often been conducted using standard channel models, like the independent Rayleigh fading model and Clarke's multipath model, which do not consider spatial correlation. In this work, we investigate different channel models used and proposed for massive MIMO transmission and, through numerical studies, highlight their effect on the performance of the aforementioned channel estimation techniques. Based on this, we recommend the use of channel models that capture the spatial correlation between antennas and different user channels.},
  keywords = {5G mobile communication;channel estimation;MIMO communication;multipath channels;Rayleigh channels;channel model;channel estimation technique;massive MIMO systems;massive multiple-input multiple-output systems;5G evolution;antennas;base-station;imperfect channel estimation;pilot contamination;standard channel model;independent Rayleigh fading model;Clarke multipath model;spatial correlation;massive MIMO transmission;MIMO;Channel estimation;Correlation;Channel models;Antennas;Computational modeling;Pollution measurement},
  doi = {10.1109/EUSIPCO.2016.7760363},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255217.pdf},
}
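A common way to inject the spatial correlation the authors advocate is the Kronecker model H = R^(1/2) H_w with an exponential correlation profile across the array. A minimal sketch (the correlation coefficient and array size are illustrative; the paper compares several richer models):

import numpy as np

def exp_corr(n, rho):
    """Exponential correlation profile: R[i, j] = rho ** |i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def correlated_channel(M, n_draws, rho, rng):
    """Kronecker model H = R^(1/2) H_w with i.i.d. Rayleigh entries in H_w."""
    w, V = np.linalg.eigh(exp_corr(M, rho))
    R_half = (V * np.sqrt(np.maximum(w, 0.0))) @ V.conj().T
    H_w = (rng.standard_normal((M, n_draws))
           + 1j * rng.standard_normal((M, n_draws))) / np.sqrt(2)
    return R_half @ H_w

rng = np.random.default_rng(0)
H = correlated_channel(M=64, n_draws=4000, rho=0.7, rng=rng)
# Empirical correlation between neighbouring antennas should approach rho:
print(abs(H @ H.conj().T / 4000)[0, 1])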
@InProceedings{7760364,
  author = {Y. Zhu and L. Wang and K. Wong and S. Jin},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On wireless power transfer in two-tier massive MIMO HetNets: Energy and rate analysis},
  year = {2016},
  pages = {828-832},
  abstract = {In this paper, we investigate the potential application of wireless power transfer (WPT) in heterogeneous networks (HetNets) with massive multiple-input multiple-output (MIMO) antennas. Users first harvest energy from the downlink WPT, and then use the harvested energy for uplink transmission. We adopt downlink received signal power (DRSP) based user association to maximize the harvested energy, and address the impact of massive MIMO on the user association. By using new statistical properties, we then obtain exact expressions for the average harvested energy and the average uplink achievable rate of a user in such networks. Numerical results corroborate our analysis and demonstrate that, compared to deploying more small cells, the use of a large number of antennas is more appealing since it brings a significant increase in the harvested energy of the HetNets. In addition, the results illustrate that serving more users in the massive MIMO aided macrocells decreases the harvested energy and the uplink achievable rate of the HetNets.},
  keywords = {antenna arrays;energy harvesting;inductive power transmission;MIMO communication;telecommunication power management;downlink wireless power transfer;two-tier massive MIMO HetNets;energy analysis;rate analysis;heterogeneous networks;massive multiple-input multiple-output antennas;MIMO antennas;energy harvesting;uplink transmission;downlink received signal power;user association;statistical properties;massive MIMO aided macrocells;MIMO;Uplink;Downlink;Interference;Fading channels;Macrocell networks;Energy harvesting},
  doi = {10.1109/EUSIPCO.2016.7760364},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255539.pdf},
}
@InProceedings{7760365,
  author = {Y. Xu and H. Sun and R. Q. Hu},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Hybrid MU-MIMO and non-orthogonal multiple access design in wireless heterogeneous networks},
  year = {2016},
  pages = {833-837},
  abstract = {To meet the ever-increasing proliferation of mobile applications and growth of data traffic, next-generation (5G) wireless networks are following a path of revolutionary technology innovation. In this paper, we introduce a hybrid MU-MIMO and NOMA design scheme for wireless heterogeneous networks to improve the system throughput and to increase multi-user diversity gains by exploiting the heterogeneous nature of the supporting wireless networks. The best user cluster is formed in a NOMA group, and a precoding-based MU-MIMO scheme is then applied to the NOMA composite signals. The problem is further formulated as a resource scheduling optimization problem to achieve proportional fairness. To ensure global optimality, a brute-force search algorithm is used to solve the problem. Simulation results show that the proposed scheme can improve the overall system performance notably.},
  keywords = {5G mobile communication;MIMO communication;multi-access systems;next generation networks;search problems;telecommunication scheduling;telecommunication traffic;hybrid MU-MIMO;nonorthogonal multiple access design;wireless heterogeneous network;data traffic growth;mobile application;next-generation wireless network;5G network;multiuser diversity gain;NOMA composite signal;resource scheduling optimization problem;brute-force search algorithm;NOMA;Interference;Wireless communication;Throughput;Heterogeneous networks;Precoding;Downlink},
  doi = {10.1109/EUSIPCO.2016.7760365},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256260.pdf},
}
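Power-domain NOMA, the ingredient combined with MU-MIMO here, serves paired users on the same resource via superposition coding and successive interference cancellation (SIC): the weak user treats the strong user's signal as noise, while the strong user decodes and removes the weak user's signal first. A minimal two-user rate sketch (power split and channel gains are illustrative; SIC feasibility checks are omitted):

import numpy as np

def noma_rates(g_strong, g_weak, P=1.0, a_weak=0.8, N0=1.0):
    """Two-user downlink power-domain NOMA with SIC at the strong user.
    The weak user gets the larger power share a_weak and treats the strong
    user's signal as noise; the strong user removes the weak signal first."""
    a_strong = 1.0 - a_weak
    r_weak = np.log2(1 + a_weak * P * g_weak / (a_strong * P * g_weak + N0))
    r_strong = np.log2(1 + a_strong * P * g_strong / N0)  # after SIC
    return r_strong, r_weak

rs, rw = noma_rates(g_strong=10.0, g_weak=1.0)
print(f"strong {rs:.2f}, weak {rw:.2f}, sum {rs + rw:.2f} bit/s/Hz")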
@InProceedings{7760366,
  author = {R. Cao and F. Gao and X. Zhang},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A novel angular parameters estimator for incoherently distributed sources},
  year = {2016},
  pages = {838-842},
  abstract = {In this work, a new algorithm for angular parameter estimation of incoherently distributed sources is proposed. By using the general array manifold model, the nominal DOAs can first be separated from the original array manifold. A generalized shift-invariance property inside the array manifold is then identified, based on which the nominal DOAs can be estimated. The angular spreads are next estimated from the central moments of the angular distribution. Compared with the popular ESPRIT-ID algorithm, the proposed algorithm achieves higher accuracy, handles more sources, and can be applied to a much more general array structure. Numerical simulations are provided to show the superior performance of the proposed algorithm over existing works.},
  keywords = {array signal processing;direction-of-arrival estimation;direction-of-arrival algorithm;DOA;general array structure;ESPRIT-ID algorithm;distributed sources;angular parameters estimation;Signal processing algorithms;Direction-of-arrival estimation;Estimation;Manifolds;Sensor arrays;Approximation algorithms},
  doi = {10.1109/EUSIPCO.2016.7760366},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256594.pdf},
}
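The shift-invariance idea generalized in this paper is easiest to see in the classical point-source ESPRIT, where a rotation between two subarray views of the signal subspace encodes the DOAs. The following is that textbook baseline on a uniform linear array, not the paper's estimator for incoherently distributed sources:

import numpy as np

def esprit_doa(X, d, spacing=0.5):
    """Classical ESPRIT for d point sources on a ULA with element spacing
    given in wavelengths. X is the M x T matrix of array snapshots."""
    M, T = X.shape
    R = X @ X.conj().T / T                    # sample covariance
    _, V = np.linalg.eigh(R)
    Es = V[:, -d:]                            # signal subspace (d largest)
    # Shift invariance between the first and last M - 1 sensors:
    Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.degrees(np.arcsin(phases / (2 * np.pi * spacing)))

rng = np.random.default_rng(0)
M, T = 8, 500
theta = np.radians([-10.0, 20.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(theta)))
S = (rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
print(np.sort(esprit_doa(X, d=2)))  # close to [-10, 20] degrees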
@InProceedings{7760367,
  author = {P. {Del Fiorentino} and C. Vitiello and V. Lottici and F. Giannetti and M. Luise},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A robust resource allocation algorithm for packet BIC-UFMC 5G wireless communications},
  year = {2016},
  pages = {843-847},
  abstract = {In this paper, we present a novel resource allocation (RA) algorithm for packet Universal Filtered Multi-Carrier (UFMC) BIC-based communications, a novel modulation format envisaged for 5G wireless systems. Assuming perfect knowledge of the channel and capitalizing on the specific UFMC signal waveform, the proposed RA strategy optimizes the coding rate and bit loading within the overall bandwidth, along with the per-subband power distribution. In the presence of a carrier offset and over frequency-selective fading channels, the results we obtain are twofold: i) the UFMC format proves more robust than the conventional OFDM scheme; ii) the performance of the UFMC system itself is further boosted by the optimal choice of radio resources computed by the proposed RA algorithm.},
  keywords = {5G mobile communication;fading channels;OFDM modulation;resource allocation;packet BIC-UFMC 5G wireless communications;resource allocation algorithm;RA algorithm;universal filtered multicarrier communications;modulation format;specific signal waveform;coding rate;bit loading;carrier offset;fading selective channels;OFDM scheme;radio resources;fifth-generation wireless systems;OFDM;Signal to noise ratio;Modulation;Synchronization;Robustness;Resource management;5G mobile communication},
  doi = {10.1109/EUSIPCO.2016.7760367},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256833.pdf},
}
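In UFMC, each subband (rather than each subcarrier, as in FBMC, or the whole band, as in CP-OFDM) is shaped by a short filter, typically a Dolph-Chebyshev design. A minimal transmit-side sketch (block sizes, filter length and attenuation are illustrative assumptions):

import numpy as np
from scipy.signal import windows

def ufmc_tx(symbols, n_fft=128, n_sc=12, L=16, atten=40):
    """UFMC transmit sketch: per-subband IDFT followed by a length-L
    Dolph-Chebyshev filter shifted to each subband's centre frequency.
    symbols: (n_bands, n_sc) array of QAM symbols."""
    proto = windows.chebwin(L, at=atten)
    x = np.zeros(n_fft + L - 1, dtype=complex)
    for b, band_syms in enumerate(symbols):
        sc = np.zeros(n_fft, dtype=complex)
        start = b * n_sc
        sc[start:start + n_sc] = band_syms
        t = np.fft.ifft(sc)                      # per-subband IDFT
        fc = (start + n_sc / 2) / n_fft          # normalised centre frequency
        f = proto * np.exp(2j * np.pi * fc * np.arange(L))
        x += np.convolve(t, f)                   # subband filtering
    return x

rng = np.random.default_rng(0)
syms = (rng.integers(0, 2, (5, 12)) * 2 - 1).astype(complex)  # BPSK, 5 subbands
x = ufmc_tx(syms)
print(len(x))  # n_fft + L - 1 = 143 samples per UFMC symbol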
@InProceedings{7760368,
  author = {L. Pan and Y. Dai and X. Dong and W. Xu},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A novel block-shifted pilot design for multipair massive MIMO relaying},
  year = {2016},
  pages = {848-852},
  abstract = {We propose a block-shifted pilot scheme for the multipair massive multiple-input multiple-output (MIMO) relaying system. The proposed scheme balances the tradeoff between pilot transmission overhead and channel estimation accuracy in scenarios with a limited coherence time interval, and thereby dramatically improves system performance. In the proposed scheme, pilots are designed so that they can be transmitted simultaneously with data to decrease channel estimation overhead. By exploiting the asymptotic orthogonality of massive MIMO channels, the source data can be exactly detected from the received signal, and pilot-data interference can then be effectively suppressed with the assistance of the detected data in the destination channel estimation. In the block-shifted transmission pattern, the effective data transmission period is extended to improve system throughput. Both theoretical and numerical results confirm the superiority of the proposed scheme over conventional ones in limited coherence time interval scenarios.},
  keywords = {channel estimation;interference suppression;MIMO communication;radiofrequency interference;relay networks (telecommunication);wireless channels;multipair massive MIMO relaying system performance;multipair massive multiple-input multiple-output relaying system;block-shifted pilot scheme design;pilot transmission overhead reduction;channel estimation overhead reduction;limited coherence time interval length;data transmission;massive MIMO channel asymptotic orthogonality;pilot-data interference suppression;block-shifted transmission pattern;Channel estimation;MIMO;Coherence;Uplink;Downlink;Interference;Data communication},
  doi = {10.1109/EUSIPCO.2016.7760368},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570257663.pdf},
}
@InProceedings{7760369,
  author = {E. Jokinen and T. Bäckström},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Noise-adaptive perceptual weighting in the AMR-WB encoder for increased speech loudness in adverse far-end noise conditions},
  year = {2016},
  pages = {853-857},
  abstract = {In mobile communications, environmental noise often reduces the quality and intelligibility of speech. Problems caused by far-end noise, in the sending side of the communication channel, can be alleviated by using a noise reducing preprocessing stage before the encoder. In this study, a modification increasing the robustness of the encoder itself to background noise is proposed. Specifically, by using information already present in the encoder, the proposed method adjusts the perceptual weighting filter based on the characteristics of the noise to increase the prominence of the speech over the background noise. To evaluate the performance of the enhancement, the modification is implemented in the adaptive multi-rate wideband encoder and compared to the standard AMR-WB encoder in subjective tests. The results suggest that while the proposed modification increases the loudness of speech without affecting the quality significantly, for some female speakers the standard encoder is preferred over the enhancement.},
  keywords = {adaptive codes;filtering theory;interference suppression;mobile communication;speech coding;noise-adaptive perceptual weighting;speech loudness;far-end noise conditions;mobile communications;environmental noise;communication channel;noise reducing preprocessing stage;background noise;perceptual weighting filter;adaptive multirate wideband encoder;AMR-WB encoder;female speakers;Speech;Noise measurement;Signal to noise ratio;Encoding;Standards;Noise reduction;Speech coding;AMR-WB;far-end noise;perceptual weighting;loudness},
  doi = {10.1109/EUSIPCO.2016.7760369},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251581.pdf},
}
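The filter being made noise-adaptive is, in its classic CELP form, W(z) = A(z/gamma1) / A(z/gamma2), obtained by bandwidth-expanding the LPC polynomial A(z). A static baseline sketch (the paper's contribution is adapting this shaping to the far-end noise; the coefficients below are illustrative):

import numpy as np
from scipy.signal import lfilter

def perceptual_weighting(x, a, gamma1=0.92, gamma2=0.68):
    """Classic CELP perceptual weighting W(z) = A(z/gamma1) / A(z/gamma2),
    where A(z) = 1 + a1 z^-1 + ... is the LPC analysis polynomial.
    Bandwidth expansion A(z/gamma) scales coefficient a_k by gamma**k."""
    k = np.arange(len(a))
    return lfilter(a * gamma1**k, a * gamma2**k, x)

# Toy usage with a 2nd-order LPC polynomial (a0 = 1 by convention).
x = np.random.default_rng(0).standard_normal(160)  # one 20 ms frame at 8 kHz
a = np.array([1.0, -1.2, 0.5])
y = perceptual_weighting(x, a)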
@InProceedings{7760370,
  author = {W. Rafique and S. Erateb and S. M. Naqvi and S. S. Dlay and J. A. Chambers},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Independent vector analysis for source separation using an energy driven mixed student's T and super Gaussian source prior},
  year = {2016},
  pages = {858-862},
  abstract = {Independent vector analysis (IVA) can theoretically avoid the permutation problem in frequency-domain blind source separation by using a multivariate source prior to retain the dependency between different frequency bins of each source. The performance of the IVA method is, however, very dependent on the choice of source prior. Recently, a fixed combination of the original super-Gaussian distribution, previously used in the IVA method, and the Student's t distribution has been found to offer a performance improvement; but due to the non-stationary nature of speech, this combination should adapt to the statistical properties of the measured speech mixtures. Therefore, in this work we propose a new energy-driven mixed multivariate Student's t and super-Gaussian source prior for the IVA algorithm. For further performance improvement, the clique-based IVA method is used to exploit the strong dependency between neighbouring frequency components. The new algorithm is evaluated on mixtures formed from speech signals from the TIMIT dataset and real room impulse responses, and performance improvement is demonstrated over the conventional IVA method with a fixed source prior.},
  keywords = {blind source separation;Gaussian distribution;speech processing;statistical analysis;transient response;vectors;independent vector analysis;energy driven mixed student t;super Gaussian source prior;frequency domain blind source separation;multivariate source;frequency bins;student t distributions;nonstationary speech nature;statistical properties;measured speech mixtures;clique based IVA method;neighbouring frequency components;speech signals;TIMIT dataset;room impulse response;Speech;Adaptation models;Signal processing algorithms;Gaussian distribution;Signal processing;Europe;Probability density function;Blind source separation;independent vector analysis;binaural room impulse responses},
  doi = {10.1109/EUSIPCO.2016.7760370},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251615.pdf},
}
@InProceedings{7760371,
  author = {N. Xu and X. Yao and A. Jiang and X. Liu and J. Bao},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {High quality voice conversion by post-filtering the outputs of Gaussian processes},
  year = {2016},
  pages = {863-867},
  abstract = {Voice conversion is a technique that aims to transform the individuality of source speech so as to mimic that of target speech while keeping the message unaltered; Gaussian mixture model based methods are the most commonly used. However, these methods suffer from over-smoothing and over-fitting problems. In our previous work, we proposed to use Gaussian processes to alleviate over-fitting. Despite its effectiveness, this method inevitably leads to over-smoothing, because the mean of the predictive distribution of the Gaussian processes is chosen as the optimal estimate. Thus, in this paper we focus on addressing the over-smoothing problem by post-filtering the outputs of the standard Gaussian processes, resulting in more dynamics in the converted feature parameters. Experiments have confirmed the validity of the proposed method both objectively and subjectively.},
  keywords = {filtering theory;Gaussian distribution;mixture models;speech processing;post-filtering;high quality voice conversion;source speech individuality;target speech;Gaussian mixture model;over-smoothing problem;over-fitting problem;Gaussian process predictive distribution;Training;Gaussian processes;Speech;Standards;Europe;Speech processing;voice conversion;over-smoothing;post-filtering;Gaussian processes},
  doi = {10.1109/EUSIPCO.2016.7760371},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251730.pdf},
}
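Over-smoothing shows up as shrunken variance in the converted feature trajectories. One common instance of post-filtering (illustrative here, not necessarily the paper's exact filter) rescales each dimension toward the target speaker's global variance:

import numpy as np

def gv_postfilter(Y, target_var, alpha=1.0):
    """Variance-scaling post-filter against over-smoothing: rescale each
    feature dimension of the converted trajectory Y (T x D) so that its
    global variance matches the target speaker's statistics."""
    mu = Y.mean(axis=0)
    scale = np.sqrt(target_var / np.maximum(Y.var(axis=0), 1e-12))
    return mu + (Y - mu) * (alpha * scale + 1.0 - alpha)  # alpha blends in

rng = np.random.default_rng(0)
Y = 0.4 * rng.standard_normal((200, 24))  # stand-in over-smoothed trajectories
Y_pf = gv_postfilter(Y, target_var=np.ones(24))
print(Y.var(axis=0).mean(), Y_pf.var(axis=0).mean())  # ~0.16 -> ~1.0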
@InProceedings{7760372,
  author = {B. Elie and Y. Laprie},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Copy-synthesis of phrase-level utterances},
  year = {2016},
  pages = {868-872},
  abstract = {This paper presents a simulation framework for synthesizing speech from anatomically realistic data of the vocal tract. The acoustic propagation paradigm is chosen so that it can deal with complex geometries and a time-varying length of the vocal tract. The glottal source model designed in this paper allows partial closure of the glottis by branching a posterior chink in parallel with a classic lumped mass-spring model of the vocal folds. Temporal scenarios for the dynamic shapes of the vocal tract and the glottal configurations may be derived from the simultaneous acquisition of X-ray images and audio recordings. Copy synthesis of a few French sentences shows that the simulation framework accurately reproduces acoustic cues of natural phrase-level utterances containing most French natural classes while considering the real geometric shape of the speaker.},
  keywords = {speech;speech synthesis;speech synthesizing;anatomically realistic data;vocal tract;acoustic propagation paradigm;time-varying length;glottal source model;posterior chink;classic lumped mass-spring model;vocal folds;X-ray images;audio recording;copy synthesis;acoustic cues;natural phrase-level utterances;French natural classes;Computational modeling;Acoustics;Geometry;Mathematical model;Speech;Atmospheric modeling;Production;Copy synthesis;Coordination;Glottal chink;Vocal folds},
  doi = {10.1109/EUSIPCO.2016.7760372},
  issn = {2076-1465},
  month = {Aug},
}
@InProceedings{7760373,
  author = {H. B. Sailor and H. A. Patil},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Unsupervised learning of temporal receptive fields using convolutional RBM for ASR task},
  year = {2016},
  pages = {873-877},
  abstract = {There has been significant research attention on unsupervised representation learning of features for speech processing applications. In this paper, we investigate unsupervised representation learning using a Convolutional Restricted Boltzmann Machine (ConvRBM) with rectified units for the speech recognition task. A temporal modulation representation is learned using the log Mel-spectrogram as input to the ConvRBM. DNNs were trained separately on ConvRBM modulation features and on filterbank spectral features, and system combination was then used. With our proposed setup, ConvRBM features were applied to speech recognition on the TIMIT and WSJ0 databases. On the TIMIT database, we achieved a relative improvement of 5.93% in PER on the test set compared to filterbank features alone. On the WSJ0 database, we achieved relative improvements of 3.63-4.3% in WER on the test sets compared to filterbank features. Hence, a DNN trained on ConvRBM features with rectified units provides significant complementary information in terms of temporal modulation features.},
  keywords = {Boltzmann machines;channel bank filters;modulation;speech processing;unsupervised learning;unsupervised learning;temporal receptive fields;RBM;ASR;speech processing;convolutional restricted Boltzmann machine;temporal modulation representation;Mel-spectrogram;ConvRBM;TIMIT;WSJ0 databases;filterbank;DNN;temporal modulation;Feature extraction;Speech;Databases;Modulation;Convolution;Speech recognition;Mel frequency cepstral coefficient;Convolutional RBM;temporal modulations;speech recognition;deep neural networks},
  doi = {10.1109/EUSIPCO.2016.7760373},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252079.pdf},
}
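For concreteness, the front end feeding the ConvRBM is a log Mel-spectrogram; a self-contained sketch follows (window, hop and filterbank parameters are common choices assumed here, not taken from the paper):

import numpy as np
from scipy.signal import stft

def log_mel_spectrogram(x, fs=16000, n_fft=512, n_mels=40):
    """Log Mel-spectrogram: 25 ms windows, 10 ms hop, triangular filterbank."""
    _, _, Z = stft(x, fs=fs, nperseg=400, noverlap=240, nfft=n_fft)
    power = np.abs(Z) ** 2
    mel = lambda hz: 2595.0 * np.log10(1.0 + hz / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(mel(0.0), mel(fs / 2.0), n_mels + 2))
    bins = np.round(pts * n_fft / fs).astype(int)  # FFT bin of each band edge
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising slope
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling slope
    return np.log(fb @ power + 1e-10)

x = np.random.default_rng(0).standard_normal(16000)  # 1 s stand-in signal
print(log_mel_spectrogram(x).shape)  # (40, number of frames)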
@InProceedings{7760374,
  author = {P. Prablanc and A. Ozerov and N. Q. K. Duong and P. Pérez},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Text-informed speech inpainting via voice conversion},
  year = {2016},
  pages = {878-882},
  abstract = {The problem of speech inpainting consists of recovering parts of a speech signal that are missing for various reasons. To the best of our knowledge, none of the existing methods allows satisfactory inpainting of large missing parts, such as one second or longer. In this work we address this challenging scenario. Since entire words can be lost in such long missing parts, we assume that the full text uttered in the speech signal is known. This leads to a new concept of text-informed speech inpainting. To solve this problem we propose a method that is based on synthesizing the missing speech with a speech synthesizer, modifying its vocal characteristics via a voice conversion method, and filling in the missing part with the resulting converted speech sample. We carried out subjective listening tests to compare the proposed approach with two baseline methods.},
  keywords = {audio signal processing;speech synthesis;text analysis;word processing;text-informed speech inpainting;speech signal;speech synthesizer;vocal characteristics;voice conversion method;Speech;Training;Europe;Predictive models;Speech enhancement;Audio inpainting;speech inpainting;voice conversion;Gaussian mixture model;speech synthesis},
  doi = {10.1109/EUSIPCO.2016.7760374},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251948.pdf},
}
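The voice conversion step belongs to the classic GMM-based spectral mapping family (the entry's keywords list a Gaussian mixture model). Below is a minimal one-dimensional sketch of that mapping, with a hand-set two-component GMM standing in for a trained joint source-target model.

import numpy as np

# GMM-based spectral conversion in 1-D: the converted value is a
# responsibility-weighted sum of per-component linear regressions.
# The GMM parameters below are hand-set placeholders, not a trained model.
pi = np.array([0.5, 0.5])                   # component weights
mu_x = np.array([0.0, 3.0]); mu_y = np.array([1.0, 5.0])
var_xx = np.array([1.0, 1.0]); cov_yx = np.array([0.8, 0.6])

def convert(x):
    lik = pi * np.exp(-0.5 * (x - mu_x) ** 2 / var_xx) / np.sqrt(2 * np.pi * var_xx)
    gamma = lik / lik.sum()                 # posterior responsibilities
    return np.sum(gamma * (mu_y + cov_yx / var_xx * (x - mu_x)))

print(convert(0.5))                         # source feature mapped to target space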
@InProceedings{7760375,
  author = {A. Lapini and M. Biagini and F. Borchi and M. Carfagni and F. Argenti},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Active noise control for pulse signals by wave field synthesis},
  year = {2016},
  pages = {883-887},
  abstract = {Active noise control (ANC) techniques have been intensively studied and applied for the cancellation of stationary noise. More recently, adaptive solutions have been proposed in the literature for the case of impulsive noise, i.e. stochastic processes for which statistical moments higher than the first are not defined. Nevertheless, such a model fits only a limited class of the impulsive disturbances that may be experienced in practice. This paper introduces a preliminary study of a non-adaptive deterministic ANC technique for pulse signals that relies on no statistical assumptions. In particular, the spatial audio rendering framework of Wave Field Synthesis is formally adopted in order to synthesize the cancelling acoustic field. Simulations in a free-field environment, including the analysis of impairments such as time mismatch and template mismatch, have been carried out, showing promising performance in terms of noise cancellation.},
  keywords = {acoustic signal processing;impulse noise;noise abatement;stochastic processes;wave field synthesis;active noise control technique;pulse signal;stationary noise cancellation;impulsive noise;stochastic process;statistical moment;impulsive disturbance;nonadaptive deterministic ANC technique;spatial audio rendering framework;Apertures;Acoustics;Adaptation models;Microphone arrays;Noise measurement;Distortion measurement},
  doi = {10.1109/EUSIPCO.2016.7760375},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252154.pdf},
}
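The time-mismatch impairment analysed in the paper can be illustrated with a tiny cancellation experiment: an anti-noise pulse that arrives a few samples late leaves a residual. The Gaussian pulse shape and sample rate are illustrative assumptions, not the paper's setup.

import numpy as np

# Perfect timing cancels the pulse; each sample of delay error raises the
# residual energy at the listening point.
fs = 8000
t = np.arange(-0.05, 0.05, 1.0 / fs)
pulse = np.exp(-0.5 * (t / 2e-3) ** 2)      # noise pulse at the listening point

for delay_error in [0, 1, 4]:               # anti-noise mistiming in samples
    residual = pulse - np.roll(pulse, delay_error)
    att = 10 * np.log10(np.sum(residual**2) / np.sum(pulse**2) + 1e-12)
    print(f"{delay_error} samples late: residual at {att:6.1f} dB")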
@InProceedings{7760376,
  author = {M. T. Akhtar and A. Nishihara},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Automatic tuning of probe noise for continuous acoustic feedback cancelation in hearing aids},
  year = {2016},
  pages = {888-892},
  abstract = {In this paper, we propose a probe signal-based method for continuous acoustic adaptive feedback cancellation (AFC) in digital hearing aids. The main idea is to incorporate a time varying gain with the probe signal, such that a high level probe noise is injected during the transient state, and a low level probe signal is used after the system has converged. The proposed method is essentially based on two adaptive filters working in tandem. The weights between these two adaptive filters are exchanged by an efficient weight-transfer strategy, such that both adaptive filters give a good estimate of the acoustic feedback path. Simulation results demonstrate that the proposed method achieves good modeling accuracy, preserves good speech quality, and maintains high output SNR at the steady-state.},
  keywords = {acoustic signal processing;adaptive filters;hearing aids;continuous acoustic adaptive feedback cancellation;digital hearing aids;automatic tuning;probe noise;probe signal;time varying gain;adaptive filters;weight-transfer strategy;acoustic feedback path;speech quality;Probes;Hearing aids;Acoustics;Convergence;Delays;Adaptation models;Signal to noise ratio;Hearing aids;acoustic feedback;NLMS algorithm;probe noise},
  doi = {10.1109/EUSIPCO.2016.7760376},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570254290.pdf},
}
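A minimal sketch of the paper's central idea: NLMS identification of the feedback path, driven by a probe whose gain is high in the transient state and decays as the smoothed error power drops. The decay rule and all constants are assumptions, not the authors' tuning, and the tandem filter pair is reduced to a single filter here.

import numpy as np

# NLMS feedback-path identification with a time-varying probe gain.
rng = np.random.default_rng(1)
L = 32
f_true = rng.standard_normal(L) * np.exp(-0.2 * np.arange(L))   # feedback path
w = np.zeros(L)                             # adaptive estimate of the path
buf = np.zeros(L)                           # probe-signal delay line
mu, eps, p_err = 0.5, 1e-8, 1.0

for n in range(5000):
    gain = 0.1 + 0.9 * min(1.0, p_err)      # loud probe early, quiet when converged
    buf = np.roll(buf, 1); buf[0] = gain * rng.standard_normal()
    e = f_true @ buf - w @ buf              # mic feedback minus its estimate
    w += mu * e * buf / (buf @ buf + eps)   # NLMS update
    p_err = 0.999 * p_err + 0.001 * e * e   # smoothed error power

print(np.linalg.norm(w - f_true) / np.linalg.norm(f_true))  # small misalignment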
@InProceedings{7760377,
  author = {M. Seifi and N. Sabater and V. Drazic and P. Pérez},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On plenoptic sub-aperture view recovery},
  year = {2016},
  pages = {893-897},
  abstract = {Light field imaging has recently been made available to the mass market by Lytro and Raytrix commercial cameras. Thanks to a grid of microlenses placed in front of the sensor, a plenoptic camera simultaneously captures several images of the scene under different viewing angles, providing an enormous advantage for post-capture applications, e.g., depth estimation and image refocusing. In this paper, we propose a fast framework to re-grid, denoise and up-sample the data of any plenoptic camera. The proposed method relies on the prior sub-pixel estimation of micro-image centers and of inter-view disparities. Both objective and subjective experiments show the improved quality of our results in terms of preserving high frequencies and reducing noise and artifacts in low frequency content. Since the recovery of each pixel is independent of the others, the algorithm is highly parallelizable on GPU.},
  keywords = {cameras;graphics processing units;image denoising;image sampling;subaperture view recovery;light field imaging;Raytrix commercial camera;Lytro commercial camera;microlens;sensor;plenoptic camera;post-capture application;depth estimation;image refocusing;data up-sample;data denoising;data regrid;subpixel estimation;microimage center;noise reduction;GPU;Estimation;Additives;Cameras;Lenses;Noise reduction;Measurement;Microoptics},
  doi = {10.1109/EUSIPCO.2016.7760377},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251589.pdf},
}
@InProceedings{7760378,
  author = {M. Rizkallah and T. Maugey and C. Yaacoub and C. Guillemot},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Impact of light field compression on focus stack and extended focus images},
  year = {2016},
  pages = {898-902},
  abstract = {Light Fields capturing all light rays at every point in space and in all directions contain very rich information about the scene. This rich description of the scene enables advanced image creation capabilities, such as re-focusing or extended depth of field from a single capture. However, it yields a very high volume of data, which calls for compression. This paper studies the impact of Light Field compression on two key functionalities: refocusing and extended focus. The sub-aperture images forming the Light Field are compressed as a video sequence with HEVC. A focus stack and the scene depth map are computed from the compressed light field and are used to render an image with an extended depth of field (called the extended focus image). It is first observed that the Light Field can be compressed by a factor of up to 700 without significantly affecting the visual quality of both refocused and extended focus images. To further analyze the compression effect, a dedicated quality evaluation method based on contrast and gradient measurements is considered to differentiate the natural geometrical blur from the blur resulting from compression. As a second part of the experiments, it is shown that the texture distortion of the in-focus regions in the focus stacks is the main cause of the quality degradation in the extended focus and that the depth errors do not impact the extended focus quality unless the light field is significantly distorted, with a compression ratio of around 2000:1.},
  keywords = {data compression;gradient methods;image capture;video coding;quality degradation;natural geometrical blur;gradient measurement;contrast measurement;quality evaluation method;refocused image;visual quality;HEVC;video sequence;subaperture image;light ray capturing;extended focus image;focus stack;light field compression impact;Image coding;Lenses;Measurement;Cameras;Estimation;Europe;Signal processing},
  doi = {10.1109/EUSIPCO.2016.7760378},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251611.pdf},
}
@InProceedings{7760379,
  author = {X. Lin and J. R. Casas and M. Pardás},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {3D point cloud segmentation oriented to the analysis of interactions},
  year = {2016},
  pages = {903-907},
  abstract = {Given the widespread availability of point cloud data from consumer depth sensors, 3D point cloud segmentation becomes a promising building block for high level applications such as scene understanding and interaction analysis. It benefits from the richer information contained in real world 3D data compared to 2D images. This also implies that the classical color segmentation challenges have shifted to RGBD data, and new challenges have also emerged as the depth information is usually noisy, sparse and unorganized. Meanwhile, the lack of 3D point cloud ground truth labeling also limits the development and comparison among methods in 3D point cloud segmentation. In this paper, we present two contributions: a novel graph based point cloud segmentation method for RGBD stream data with interacting objects and a new ground truth labeling for a previously published data set [1]. This data set focuses on interaction (merge and split between `object' point clouds), which differentiates itself from the few existing labeled RGBD data sets which are more oriented to Simultaneous Localization And Mapping (SLAM) tasks. The proposed point cloud segmentation method is evaluated with the 3D point cloud ground truth labeling. Experiments show the promising result of our approach.},
  keywords = {computer graphics;graph theory;image colour analysis;image segmentation;3D point cloud segmentation;point cloud data;color segmentation;depth information;noisy information;sparse information;unorganized information;ground truth labeling;graph based point cloud segmentation;RGBD stream data;labeled RGBD data sets;Three-dimensional displays;Labeling;Image segmentation;Image color analysis;Two dimensional displays;Shape;Histograms},
  doi = {10.1109/EUSIPCO.2016.7760379},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251637.pdf},
}
@InProceedings{7760380,
  author = {Y. Yu and B. Yang and P. C. Yuen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Torso orientation: A new clue for occlusion-aware human pose estimation},
  year = {2016},
  pages = {908-912},
  abstract = {Self-occlusion is a challenging problem in human pose estimation. In this paper we exploit a new cue to address it: the torso orientation. We describe a technique to automatically detect self-occlusion in the training set without visibility labels. Given this prior information, we are able to jointly learn an occlusion-aware model to capture the pattern of self-occluded body parts. We evaluate our model on two major, publicly available datasets. The experimental results show that our model is competitive with the state of the art on both datasets. In this way, we illustrate our model's robustness to the self-occlusion problem in human pose estimation.},
  keywords = {computer vision;pose estimation;occlusion-aware human pose estimation;self-occlusion problem;computer vision;Torso;Pose estimation;Biological system modeling;Mathematical model;Computational modeling;Cameras;Europe;computer vision;pose estimation;articulated model;self-occlusion},
  doi = {10.1109/EUSIPCO.2016.7760380},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251713.pdf},
}
@InProceedings{7760381,
  author = {Z. Xu and Y. Chan},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Eliminating blocking artifacts in halftoning-based block truncation coding},
  year = {2016},
  pages = {913-917},
  abstract = {Block Truncation Coding (BTC) is an effective lossy image coding technique that enjoys both high efficiency and low complexity especially when halftoning techniques are employed to shape the noise spectrum of its output. However, due to its block-based nature, blocking artifacts are commonly found in the coding outputs. In this work, a real-time halftoning-based BTC algorithm is proposed to solve this problem by eliminating the cause of blocking artifacts while maintaining a complexity comparable to the best state-of-the-art halftoning-based BTC algorithm. Both objective and subjective comparisons demonstrate the visual quality improvement in its encoding outputs.},
  keywords = {block codes;image coding;blocking artifacts;halftoning-based block truncation coding;lossy image coding technique;halftoning-based BTC algorithm;visual quality;encoding;Signal processing algorithms;Encoding;Interpolation;Image coding;Complexity theory;Algorithm design and analysis;Signal processing;Block truncation coding;halftoning;blocking artifacts},
  doi = {10.1109/EUSIPCO.2016.7760381},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255825.pdf},
}
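For background, classic (non-halftoning) BTC encodes each block as a bitmap plus two levels chosen to preserve the block mean and standard deviation; the halftoning variants discussed in the paper replace the quantizer but keep this block structure. A minimal sketch of the classic scheme:

import numpy as np

# Classic BTC: per block, send a 1-bit-per-pixel bitmap and two reconstruction
# levels that preserve the block mean and standard deviation.
def btc_encode(block):
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q, n = bitmap.sum(), block.size
    if q in (0, n):                        # flat block: one level suffices
        return bitmap, m, m
    low = m - s * np.sqrt(q / (n - q))     # level for pixels below the mean
    high = m + s * np.sqrt((n - q) / q)    # level for pixels at/above the mean
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    return np.where(bitmap, high, low)

block = np.arange(16, dtype=float).reshape(4, 4)
print(btc_decode(*btc_encode(block)))      # two-level approximation of the block

Because each block is quantized independently, the two levels jump at block borders, which is exactly the source of the blocking artifacts the paper targets.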
@InProceedings{7760382,
  author = {C. Florea and R. Condorovici and C. Vertan and R. Butnaru and L. Florea and R. Vrânceanu},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Pandora: Description of a painting database for art movement recognition with baselines and perspectives},
  year = {2016},
  pages = {918-922},
  abstract = {To facilitate computer analysis of visual art in the form of paintings, we introduce the Pandora (Paintings Dataset for Recognizing the Art movement) database, a collection of digitized paintings labelled with respect to artistic movement. Noting that few databases are available as evaluation benchmarks and that most existing ones are limited in variability and number of images, we propose a novel large-scale dataset of digital paintings. The database consists of more than 7700 images from 12 art movements. Each genre is illustrated by a number of images varying from 250 to nearly 1000. We investigate how local and global features and classification systems are able to recognize the art movement. Our experimental results suggest that accurate recognition is achievable by a combination of various categories.},
  keywords = {feature extraction;image classification;Pandora;painting database description;art movement recognition;visual art computer analysis;image recognition;digital painting large scale dataset;classification system;global feature;local feature;Art;Databases;Painting;Image color analysis;Histograms;Support vector machines;Europe},
  doi = {10.1109/EUSIPCO.2016.7760382},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255907.pdf},
}
@InProceedings{7760383,
  author = {A. Ahmmed and M. M. Hannuksela and M. Gabbouj},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Motion hints compensated prediction as a reference frame for high efficiency video coding (HEVC)},
  year = {2016},
  pages = {923-927},
  abstract = {Segment-based temporal prediction combined with higher-order motion models has been studied as an alternative to conventional block-based translational inter prediction. One example of such studies is known as motion hints, where an affine motion model has been used. In this paper, we explore the applicability of motion hints with an elastic motion model in generating reference frames for conventionally coded B-frames. The presented design enables the re-use of existing codecs, such as HEVC, without modifications in low-level coding tools. Experimental results show that bit-rate savings of up to 5.1% are achievable over standalone HEVC, at the cost of increased computational complexity.},
  keywords = {computational complexity;motion compensation;video coding;motion hint compensated prediction;high efficiency video coding;HEVC;segment-based temporal prediction;higher-order motion model;affine motion model;elastic motion model;reference frame generation;computational complexity;Motion segmentation;Encoding;Video coding;Standards;Predictive models;Bit rate;Motion hint;HEVC;elastic motion model;video coding},
  doi = {10.1109/EUSIPCO.2016.7760383},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256045.pdf},
}
@InProceedings{7760384,
  author = {S. Bourennane and C. Fossati},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Rare signals detection in nonwhite noise environment based on multidimensional signal subspace for hyperspectral image},
  year = {2016},
  pages = {928-932},
  abstract = {Due to the resolution limitation of the hyperspectral imaging sensors, some targets only take one or two pixels in the hyperspectral image (HSI). These targets are called small targets. When HSI is impaired and has to be denoised, small targets might be suppressed in the denoising process, which can degrade the detection performance. In this paper, we propose a method to improve the small target detection performance of HSI which is damaged by nonwhite thermal noise. One recently proposed nonwhite noise reduction algorithm, the prewhitening-multiway-Wiener-Filter (PMWF), and a commonly used spatial-domain wavelet packet transform with SureShrinkage (SWPT-SURE) denoising approach are compared with the algorithm proposed in this paper. Finally, a real-world HYDICE HSI is employed to investigate the noise whitening capability and the improvement of small target detection performance.},
  keywords = {hyperspectral imaging;image denoising;image filtering;image resolution;object detection;signal detection;wavelet transforms;Wiener filters;rare signal detection;nonwhite noise environment;multidimensional signal subspace;hyperspectral imaging sensor resolution limitation;HSI;small target detection performance improvement;nonwhite thermal noise;nonwhite noise reduction algorithm;prewhitening-multiway Wiener filter;PMWF;spatial-domain wavelet packet transform;SureShrinkage denoising approach;SWPT-SURE denoising approach;real-world HYDICE HSI;Noise reduction;Wavelet transforms;Tensile stress;Hyperspectral imaging;Object detection;Hyperspectral image;Target detection;nonwhite noise;multiway filtering;wavelet packet transform},
  doi = {10.1109/EUSIPCO.2016.7760384},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256110.pdf},
}
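The prewhitening step that PMWF-style processing relies on can be sketched in a few lines: estimate the noise covariance and apply the inverse Cholesky factor so the noise becomes white. The random data below stand in for hyperspectral noise samples (bands x pixels); this is a generic sketch, not the PMWF itself.

import numpy as np

# Prewhitening: with noise covariance C = L L^T, the transform L^-1 makes the
# noise covariance identity (white), after which white-noise detectors apply.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
noise = A @ rng.standard_normal((6, 2000))       # correlated (nonwhite) noise
C = np.cov(noise)                                # noise covariance estimate
Linv = np.linalg.inv(np.linalg.cholesky(C))      # whitening transform
white = Linv @ noise
print(np.round(np.cov(white), 1))                # ~ identity after whitening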
@InProceedings{7760385,
  author = {T. Dobashi and M. Iwahashi and H. Kiya},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A fixed-point local tone mapping operation for HDR images},
  year = {2016},
  pages = {933-937},
  abstract = {This paper proposes a fixed-point local tone mapping operation (TMO) for high dynamic range (HDR) images. TMOs are classified into two types: local and global. Although a local TMO offers better results than a global one, it requires more resources, such as computational cost and memory space. The proposed method uses fixed-point arithmetic with short data words to solve this problem. The method uses an intermediate format composed of an 8-bit mantissa part and an 8-bit exponent part instead of the IEEE754 standard floating-point format. Moreover, the mantissa and exponent parts are processed separately as two integers. As a result, the method reduces the memory space. In addition, the method also reduces the numerical range of calculations, which facilitates implementing the method with fixed-point arithmetic. The experimental results show that the method reduces the memory and computational costs, and offers tone-mapped image quality comparable to the conventional method.},
  keywords = {fixed point arithmetic;image processing;fixed-point local tone mapping operation;HDR image;TMO;high dynamic range image;memory space reduction;fixed-point arithmetic;8-bit mantissa part;8-bit exponent part;computational cost reduction;Computational efficiency;Europe;Convolution;Dynamic range;Manganese;Graphics processing units},
  doi = {10.1109/EUSIPCO.2016.7760385},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256121.pdf},
}
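A minimal sketch of the intermediate pixel format the paper describes, an 8-bit mantissa and an 8-bit exponent kept as two separate integers. The rounding and exponent-bias conventions below are assumptions, not the authors' exact definitions.

import numpy as np

# Split HDR pixel values into (8-bit mantissa, 8-bit biased exponent) and back.
def to_m8e8(x):
    m, e = np.frexp(np.asarray(x, dtype=np.float64))   # x = m * 2**e, m in [0.5, 1)
    mant = np.clip(np.round(m * 256), 0, 255).astype(np.uint8)
    expo = np.clip(e + 128, 0, 255).astype(np.uint8)   # biased exponent (assumed)
    return mant, expo

def from_m8e8(mant, expo):
    return (mant.astype(np.float64) / 256.0) * np.exp2(expo.astype(int) - 128)

hdr = np.array([1e-3, 0.75, 123.4])
print(from_m8e8(*to_m8e8(hdr)))    # close to the inputs, at 8-bit precision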
@InProceedings{7760386,
  author = {B. Benligiray and C. Topal},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Blind rectification of radial distortion by line straightness},
  year = {2016},
  pages = {938-942},
  abstract = {Lens distortion self-calibration estimates the distortion model using arbitrary images captured by a camera. The estimated model is then used to rectify images taken with the same camera. These methods generally use the fact that built environments are line dominated and these lines correspond to lines on the image when distortion is not present. The proposed method starts by detecting groups of lines whose real world correspondences are likely to be collinear. These line groups are rectified, then a novel error function is calculated to estimate the amount of remaining distortion. These steps are repeated iteratively until suitable distortion parameters are found. A feature selection method is used to eliminate the line groups that are not collinear in the real world. The method is demonstrated to successfully rectify real images of cluttered scenes in a fully automatic manner.},
  keywords = {calibration;distortion;feature selection;iterative methods;lens distortion self-calibration;distortion model;arbitrary images;line groups;error function;distortion parameters;feature selection method;real images;cluttered scenes;blind rectification;radial distortion;line straightness;Nonlinear distortion;Image segmentation;Cameras;Calibration;Estimation;Feature extraction;Camera calibration;radial distortion;plumb-line method;distortion rectification;self-calibration},
  doi = {10.1109/EUSIPCO.2016.7760386},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256193.pdf},
}
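The plumb-line principle underlying the method can be sketched with a one-parameter division model and a straightness cost: pick the distortion coefficient that makes a detected line group straightest. The model, the synthetic line, and the grid search below are assumed simplifications of the paper's iterative scheme.

import numpy as np

# Plumb-line sketch: bend a line under a known division-model distortion,
# then recover the coefficient by minimizing a straightness error.
def undistort(pts, k):                       # division model, centered coords
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts / (1.0 + k * r2)

def straightness(pts):                       # RMS distance to the best-fit line
    p = pts - pts.mean(axis=0)
    normal = np.linalg.svd(p)[2][1]          # direction orthogonal to the line
    return np.sqrt(np.mean((p @ normal) ** 2))

k_true = -0.3
line = np.stack([np.linspace(-0.8, 0.8, 50), np.full(50, 0.4)], axis=1)
r2 = np.sum(line**2, axis=1, keepdims=True)
s = (1 - np.sqrt(1 - 4 * k_true * r2)) / (2 * k_true * r2)   # exact inverse map
bent = line * s                              # the observed (distorted) line

ks = np.linspace(-0.5, 0.5, 201)
print(min(ks, key=lambda k: straightness(undistort(bent, k))))   # ~ -0.3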
@InProceedings{7760387,
  author = {N. Banić and S. Lončarić},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Puma: A high-quality retinex-based tone mapping operator},
  year = {2016},
  pages = {943-947},
  abstract = {Tone mapping is the process of compressing high dynamic range (HDR) images to obtain low dynamic range (LDR) images in order to display them on standard display devices. The methods that perform tone mapping, also known as tone mapping operators (TMOs), can be global and process all luminances in the same way, or local and process each luminance with respect to its closer neighborhood. While the former tend to be faster, the latter are known to produce results of significantly higher quality. In this paper, perceptually-based tone mapping is combined with one of the latest Retinex-based methods to create a high-quality TMO. The new TMO requires only a constant number of steps per pixel, and experimental results show that it outperforms all but one state-of-the-art TMO in terms of tone-mapped LDR image quality. The source code is available at http://www.fer.unizg.hr/ipg/resources/color_constancy/.},
  keywords = {brightness;data compression;display devices;image coding;tone mapped LDR image quality;high-quality TMO;perceptually-based tone mapping;luminances;standard display devices;LDR images;low dynamic range images;HDR image compression;high dynamic range image compression;high-quality retinex-based tone mapping operator;Puma;Brightness;Dynamic range;Image quality;Europe;Signal processing;Image coding;Indexes;HDR;image enhancement;LDR;Naka-Rushton equation;Retinex;tone mapping operator},
  doi = {10.1109/EUSIPCO.2016.7760387},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256197.pdf},
}
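As background for the perceptual part (the entry's keywords cite the Naka-Rushton equation), a global Naka-Rushton-style compression with the log-average luminance as the adaptation level looks as follows. This is a generic sketch under an assumed adaptation heuristic, not the Puma operator itself.

import numpy as np

# Naka-Rushton compression: L / (L + sigma) maps [0, inf) into [0, 1), with
# sigma acting as the adaptation level of the scene.
def naka_rushton(lum):
    lum = np.asarray(lum, dtype=np.float64)
    sigma = np.exp(np.mean(np.log(lum + 1e-6)))   # log-average luminance
    return lum / (lum + sigma)

hdr = np.array([0.01, 0.5, 10.0, 1e4])            # five decades of luminance
print(np.round(naka_rushton(hdr), 3))             # compressed into [0, 1)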
@InProceedings{7760388,
  author = {A. Hirabayashi and N. Nogami and T. Ijiri and L. Condat},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Sequential image completion for high-speed large-pixel number sensing},
  year = {2016},
  pages = {948-952},
  abstract = {We propose an algorithm that increases the number of pixels in high-speed camera imaging, mitigating its main limitation: the number of pixels decreases as the number of frames per second (fps) increases. To this end, we assume an optical setup that block-randomly selects some percentage of the pixels in an image. The proposed algorithm then reconstructs the entire image from the selected partial pixels. In this algorithm, two types of sparsity are exploited: one within each frame, and the other induced by the similarity between adjacent frames. The latter holds not only in the image domain but also in a sparsifying transform domain. Since the cost function we define is convex, we can find the optimal solution using a convex optimization technique at small computational cost. Simulation results show that the proposed method outperforms the standard approach to image completion by nuclear norm minimization.},
  keywords = {cameras;convex programming;image capture;minimisation;high-speed large-pixel number sensing;sequential image completion;high-speed camera imaging;pixel suppression;image domain;nuclear norm minimization;convex optimization technique;sparsifying transformed domain;Cameras;Image reconstruction;Signal processing algorithms;Minimization;Cost function;Discrete cosine transforms;Signal processing;High-speed camera;sparsity;compressed sensing;image completion;convex optimization},
  doi = {10.1109/EUSIPCO.2016.7760388},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256448.pdf},
}
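The baseline the paper compares against, image completion by nuclear norm minimization, has singular-value soft-thresholding at its core. A generic iterative sketch on a synthetic low-rank matrix (the paper's proposed method is different and exploits inter-frame sparsity as well):

import numpy as np

# Nuclear-norm-style completion: alternate between enforcing the observed
# pixels and shrinking the singular values (the nuclear norm's prox step).
def svt_complete(Y, mask, tau=1.0, iters=200):
    X = np.zeros_like(Y)
    for _ in range(iters):
        X[mask] = Y[mask]                        # keep observed pixels
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # singular-value shrinkage
    return X

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((30, 30)) < 0.5                # observe ~50% of the pixels
rec = svt_complete(low_rank * mask, mask)
print(np.linalg.norm((rec - low_rank)[~mask]) / np.linalg.norm(low_rank[~mask]))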
@InProceedings{7760389,
  author = {I. Dragoi and D. Coltuc},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Towards overflow/underflow free PEE reversible watermarking},
  year = {2016},
  pages = {953-957},
  abstract = {The paper proposes an original prediction error expansion (PEE) reversible watermarking scheme that exploits the sign of the prediction error in order to prevent the overflow/underflow of the pixels close to the bounds of the graylevel range. More precisely, the pixels close to black are directly embedded by PEE when the prediction error is positive. Similarly, the ones close to white are embedded in case of negative errors. When the prediction error sign does not allow error expansion, an original solution is proposed. The pixel is left unchanged and one of its upper diagonal neighbors is embedded by PEE, but with a different prediction error. The diagonal neighbor is selected in order to prevent overflow/underflow. The proposed prediction error strategy ensures detection and reversibility without any additional information. The scheme is of interest for images with large areas of black and white pixels as, for instance, medical images.},
  keywords = {error analysis;image watermarking;prediction theory;medical image;upper diagonal neighbor;negative error;graylevel range;pixel;prediction error expansion;overflow/underflow free PEE reversible watermarking;Watermarking;Context;Europe;Biomedical imaging;X-ray imaging;Distortion},
  doi = {10.1109/EUSIPCO.2016.7760389},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256235.pdf},
}
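For reference, basic PEE expands the prediction error as e' = 2e + b and the decoder inverts it exactly; the paper's contribution is the sign-based rule that re-routes embedding to an upper diagonal neighbor instead of leaving unembeddable pixels near the gray-level bounds. A sketch of the basic step with a plain range guard (the re-routing itself is not implemented here):

# Basic prediction-error expansion on one 8-bit pixel.
def embed(pixel, pred, bit):
    e = int(pixel) - int(pred)
    out = int(pred) + 2 * e + bit            # expanded error e' = 2e + b
    return (out, True) if 0 <= out <= 255 else (int(pixel), False)

def extract(pixel, pred):
    e2 = int(pixel) - int(pred)
    return e2 & 1, int(pred) + (e2 >> 1)     # (embedded bit, restored pixel)

out, ok = embed(100, 98, 1)                  # -> (103, True)
print(extract(out, 98))                      # -> (1, 100): exact recovery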
@InProceedings{7760390,
  author = {B. Ali and N. Zamir and M. Fasih and U. Butt and S. X. Ng},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Physical layer security: Friendly jamming in an untrusted relay scenario},
  year = {2016},
  pages = {958-962},
  abstract = {This paper investigates the achievable secrecy regions when employing a friendly jammer in a cooperative scenario with an untrusted relay. The untrusted relay, which helps to forward the source signal towards the destination, could also be regarded as a potential eavesdropper. Our system employs a friendly jammer which sends a known noise signal towards the relay. In this paper, we investigate the effect of the jammer and relay locations on the achievable secrecy rate. We consider two scenarios: in the first case we consider no direct transmission between the source and destination, while in the second case we include a source-to-destination direct link in our communication system.},
  keywords = {jamming;radio links;relay networks (telecommunication);telecommunication security;communication system;source to destination direct link;cooperative scenario;untrusted relay scenario;jamming;physical layer security;Relays;Jamming;Signal to noise ratio;Security;Europe;Resource management},
  doi = {10.1109/EUSIPCO.2016.7760390},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256249.pdf},
}
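The quantity under study is an achievable secrecy rate: the destination's rate minus what the untrusted relay (here, the eavesdropper) can decode, floored at zero. A toy sketch with an assumed path-loss law shows how moving the jammer towards the relay helps; all constants and the channel model are placeholders, not the paper's setup.

import numpy as np

# Secrecy rate R_s = max(0, C_destination - C_relay) under a simple
# distance^(-alpha) path-loss law, with the jammer raising the relay's noise.
def secrecy_rate(d_sd, d_sr, d_jr, P=10.0, Pj=10.0, alpha=3.0, N0=1.0):
    snr_dest = P * d_sd ** -alpha / N0                            # direct link
    snr_relay = P * d_sr ** -alpha / (N0 + Pj * d_jr ** -alpha)   # jammed relay
    return max(0.0, np.log2(1 + snr_dest) - np.log2(1 + snr_relay))

for d_jr in [2.0, 1.0, 0.5]:               # jammer-to-relay distance shrinks
    print(d_jr, round(secrecy_rate(d_sd=2.0, d_sr=1.0, d_jr=d_jr), 3))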
@InProceedings{7760391,
  author = {L. Li and A. P. Petropulu and Z. Chen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On exploiting co-channel interference to improve secret communications over a wiretap channel},
  year = {2016},
  pages = {963-967},
  abstract = {This paper considers a network in which a source-destination pair needs to establish a confidential connection against an external eavesdropper, aided by the interference generated by another source-destination pair that exchanges public messages. Our goal is to identify the secrecy rate performance benefits that can be brought by exploiting co-channel interference. We consider two scenarios: 1) the non-confidential pair designs its precoding matrix in favor of the confidential one, referred to as the altruistic scenario; 2) the non-confidential pair is selfish and requires communicating at its maximum achievable degrees of freedom (D.o.F.). The maximum achievable secure D.o.F. (S.D.o.F.) of the wiretap channel for both scenarios is obtained in closed form. Based on these analytical expressions, we further determine the number of antennas needed at the non-confidential connection in order to achieve an S.D.o.F. for the wiretap channel equal to the D.o.F.},
  keywords = {cochannel interference;telecommunication security;co-channel interference;secret communications;wiretap channel;source-destination pair;confidential connection;external eavesdropper;public messages;secrecy rate performance benefits;nonconfidential pair;Europe;Signal processing;Conferences},
  doi = {10.1109/EUSIPCO.2016.7760391},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256281.pdf},
}
@InProceedings{7760392,
  author = {K. Iida and H. Kobayashi and H. Kiya},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Secure identification based on fuzzy commitment scheme for JPEG XR images},
  year = {2016},
  pages = {968-972},
  abstract = {A secure identification scheme for JPEG XR images is proposed in this paper. The aim is to securely identify JPEG XR images which are generated from the same original image under various compression levels. A property of the positive and negative signs of lapped biorthogonal transform coefficients is employed to achieve a robust scheme against JPEG XR compression. The proposed scheme is robust against a difference in compression levels, and does not produce false negative matches at any compression level. Existing conventional schemes having this property are not secure. To construct a secure identification system, we propose a novel identification system that consists of a new error correction technique and a fuzzy commitment scheme, which is a well-known biometric cryptosystem. The experimental results show the proposed scheme is effective for JPEG XR compressed video sequences in terms of querying performance, such as false negative and true positive matches, while keeping a high level of security.},
  keywords = {biometrics (access control);cryptography;data compression;error correction codes;fuzzy set theory;image sequences;video coding;JPEG XR video compression sequence;biometric cryptosystem;fuzzy commitment scheme;error correction technique;lapped biorthogonal transform coefficient positive sign;lapped biorthogonal transform coefficient negative sign;compression level;JPEG XR image;secure identification;Image coding;Transform coding;Transforms;Error correction;Feature extraction;Robustness;Authentication;JPEG XR;fuzzy commitment scheme;image identification},
  doi = {10.1109/EUSIPCO.2016.7760392},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256438.pdf},
}

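The fuzzy commitment scheme referenced here is the classic Juels-Wattenberg construction: a random key is encoded with an error-correcting code, XOR-ed with the binary feature vector, and only a hash of the key is stored, so small feature changes (recompression) still verify while the stored data reveals neither key nor feature. A minimal sketch with a toy repetition code standing in for the paper's error-correction stage; the 40-bit sign feature and repetition factor are illustrative assumptions:

    import hashlib, secrets

    R = 5  # repetition factor: toy stand-in for the paper's error-correcting code

    def commit(feature_bits):
        """Commit to a random key, bound to a binary feature vector (length divisible by R)."""
        k = [secrets.randbelow(2) for _ in range(len(feature_bits) // R)]
        codeword = [b for b in k for _ in range(R)]               # repetition encoding
        helper = [c ^ w for c, w in zip(codeword, feature_bits)]  # reveals neither part alone
        return hashlib.sha256(bytes(k)).hexdigest(), helper

    def verify(feature_bits, digest, helper):
        """Recover the key from a noisy feature vector and check it against the stored hash."""
        noisy = [h ^ w for h, w in zip(helper, feature_bits)]
        k = [int(sum(noisy[i*R:(i+1)*R]) > R // 2) for i in range(len(noisy) // R)]  # majority decode
        return hashlib.sha256(bytes(k)).hexdigest() == digest

    w = [1, 0, 1, 1, 0] * 8                  # hypothetical 40-bit transform-sign feature
    d, h = commit(w)
    w_noisy = list(w); w_noisy[3] ^= 1       # one bit flipped by recompression
    print(verify(w_noisy, d, h))             # True: within the code's correction capability
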
@InProceedings{7760393,
  author = {J. {del Val} and S. Zazo and S. V. Macua and J. Zazo and J. Parras},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Optimal attack and defence of large scale networks using mean field theory},
  year = {2016},
  pages = {973-977},
  abstract = {We address the issue of large scale network security. It is known that traditional game theory becomes intractable when considering a large number of players, which is a realistic situation in today's networks where a centralized administration is not available. We propose a new model, based on mean field theory, that allows us to obtain an optimal decentralised defence policy for any node in the network and an optimal attack policy for an attacker. In this way we establish a promising framework for the development of a mean field game theory of large scale network security. We also present a case study with experimental results.},
  keywords = {game theory;network theory (graphs);telecommunication security;large scale network security;optimal decentralised defence policy;optimal attack policy;mean field game theory;Security;Communication networks;Games;Game theory;Random variables;Europe;Signal processing;Dynamic programming;game theory;mean field;network;optimal control;security},
  doi = {10.1109/EUSIPCO.2016.7760393},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256534.pdf},
}

@InProceedings{7760394,
  author = {M. Mounir and P. Karsmakers and T. {van Waterschoot}},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Guitar note onset detection based on a spectral sparsity measure},
  year = {2016},
  pages = {978-982},
  abstract = {The detection of note onsets is attracting growing interest in audio signal processing research due to its wide range of applications in music information retrieval. We propose a new note onset detection algorithm, NINOS2, which exploits the spectral sparsity difference between different parts of a musical note. When compared to the popular state-of-the-art LogFiltSpecFlux algorithm, the proposed algorithm shows up to 61% better performance for automatically annotated guitar melodies as well as chord progressions. We also propose an additional performance measure to assess the relative position of detected onsets w.r.t. each other.},
  keywords = {audio signal processing;information retrieval;music;musical instruments;guitar note onset detection;spectral sparsity measure;audio signal processing;music information retrieval;NINOS2;spectral sparsity difference;musical note;LogFiltSpecFlux algorithm;annotated guitar melodies;chord progressions;Music;Spectrogram;Transient analysis;Europe;Signal processing algorithms;Probabilistic logic},
  doi = {10.1109/EUSIPCO.2016.7760394},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256369.pdf},
}

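NINOS2 is built on a spectral sparsity measure, whose exact definition is in the paper. As a generic illustration of the idea (not the paper's measure), the L1/L2 norm ratio of each spectrogram frame can serve: transient frames at note attacks are spectrally flat, hence less sparse, so rises in the ratio mark candidate onsets. The FFT size and hop below are arbitrary choices:

    import numpy as np

    def sparsity_odf(x, n_fft=1024, hop=256):
        """Frame-wise onset detection function from a spectral sparsity measure."""
        win = np.hanning(n_fft)
        frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft, hop)]
        mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
        # L1/L2 ratio per frame: high for flat (transient) spectra, low for sparse (tonal) ones
        flatness = mags.sum(axis=1) / (np.linalg.norm(mags, axis=1) + 1e-12)
        return np.maximum(np.diff(flatness, prepend=flatness[0]), 0.0)  # rises mark onsets

    fs = 16000
    t = np.arange(fs) / fs
    note = np.sin(2 * np.pi * 196.0 * t) * np.exp(-3 * t)   # decaying G3-like guitar tone
    sig = np.concatenate([np.zeros(2048), note])
    print(np.argmax(sparsity_odf(sig)))                     # frame index near the onset
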
@InProceedings{7760395,
  author = {M. W. Hansen and J. R. Jensen and M. G. Christensen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Multi-pitch estimation of audio recordings using a codebook-based approach},
  year = {2016},
  pages = {983-987},
  abstract = {In this paper, a method for multi-pitch estimation of single-channel mixtures of harmonic signals is presented. Using the method, it is possible to resolve amplitudes of overlapping harmonics, which is otherwise an ill-posed problem. The method is based on the extended invariance principle (EXIP), and a codebook consisting of realistic amplitude vectors. A nonlinear least squares (NLS) cost function is formed based on the observed signal and a parametric model of the signal, for a set of fundamental frequency candidates. For each of these, amplitude estimates are computed. The magnitudes of these estimates are quantized according to a codebook, and an updated cost function is used to estimate the fundamental frequencies of the sources. The performance of the proposed estimator is evaluated using synthetic and real mixtures, and the results show that the proposed method is able to estimate multiple pitches in a mixture of sources with overlapping harmonics.},
  keywords = {audio recording;estimation theory;least squares approximations;vectors;audio recordings;multipitch estimation;codebook-based approach;single-channel mixtures;harmonic signals;overlapping harmonics amplitudes;ill-posed problem;extended invariance principle;realistic amplitude vectors;nonlinear least squares cost function;NLS cost function;fundamental frequency;synthetic mixtures;real mixtures;Harmonic analysis;Frequency estimation;Instruments;Estimation;Cost function;Matching pursuit algorithms;Europe;Multi-pitch estimation;amplitude estimation;vector quantization;music information retrieval},
  doi = {10.1109/EUSIPCO.2016.7760395},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252228.pdf},
}

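The inner step of this estimator, the NLS fit of a harmonic model for each fundamental-frequency candidate, becomes a linear least-squares problem once the candidate is fixed. A single-pitch sketch of that step (the codebook quantization of the amplitudes and the multi-pitch extension are omitted):

    import numpy as np

    def nls_pitch(x, fs, f0_grid, n_harm=5):
        """Grid-search NLS: fit a real harmonic basis per f0 candidate, keep the best fit."""
        n = np.arange(len(x)) / fs
        best_cost, best_f0, best_amp = np.inf, None, None
        for f0 in f0_grid:
            h = np.arange(1, n_harm + 1)
            Z = np.hstack([np.cos(2 * np.pi * f0 * np.outer(n, h)),
                           np.sin(2 * np.pi * f0 * np.outer(n, h))])  # harmonic basis
            a, *_ = np.linalg.lstsq(Z, x, rcond=None)                 # LS amplitudes for this f0
            cost = np.sum((x - Z @ a) ** 2)                           # NLS cost
            if cost < best_cost:
                best_cost, best_f0, best_amp = cost, f0, a
        return best_f0, best_amp

    fs = 8000
    t = np.arange(int(0.05 * fs)) / fs
    x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
    f0, _ = nls_pitch(x, fs, np.arange(150.0, 400.0, 1.0))
    print(f0)   # 220.0 (sub-octave candidates are excluded by the grid's lower edge)
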
@InProceedings{7760396,
  author = {K. Imoto and N. Ono},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Online acoustic scene analysis based on nonparametric Bayesian model},
  year = {2016},
  pages = {988-992},
  abstract = {In this paper, we propose a novel online method for analyzing acoustic scenes from sequentially obtained sounds. One prospective method for analyzing acoustic scenes is the use of a generative model of acoustic topics and event sequences in observed sounds, where the acoustic topic represents the latent structure of acoustic events associating an acoustic scene and acoustic events. This generative model is called an acoustic topic model (ATM). However, the conventional ATM employs a batch technique for estimating model parameters and cannot model sequentially obtained acoustic event sequences. Moreover, the number of classes of acoustic topics that lie in acoustic event sequences needs to be predetermined before observing acoustic events. However, the necessary number of acoustic topics for representing acoustic scenes varies in accordance with their contents, and this causes a mismatch between the actual number of classes of acoustic topics and the predetermined number of classes. In our method, the number of classes of acoustic topics can be automatically inferred from sequentially obtained acoustic event sequences on the basis of an online nonparametric Bayesian technique. The experimental results of online acoustic scene estimation using real-life sounds indicated that the proposed method performed acoustic scene classification better than the conventional ATM. In addition, the proposed method achieved efficient computational performance.},
  keywords = {acoustic signal processing;Bayes methods;nonparametric statistics;signal classification;online acoustic scene analysis;nonparametric Bayesian model;event sequences;latent structure;acoustic topic model;ATM;acoustic event sequences;acoustic scene classification;Acoustics;Parameter estimation;Adaptation models;Analytical models;Image analysis;Europe},
  doi = {10.1109/EUSIPCO.2016.7760396},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255827.pdf},
}

@InProceedings{7760397,
  author = {M. Buccoli and M. Zanoni and A. Sarti and S. Tubaro and D. Andreoletti},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Unsupervised feature learning for Music Structural Analysis},
  year = {2016},
  pages = {993-997},
  abstract = {Music Structural Analysis (MSA) algorithms analyze songs with the purpose of automatically retrieving their large-scale structure. They do so from a feature-based representation of the audio signal (e.g., MFCCs, chromagram), which is usually hand-designed for that specific application. In order to design a proper audio representation for MSA, we need to assess which musical properties are relevant for segmentation purposes (e.g., timbre, harmony); and we need to design signal processing strategies that can be used for capturing them. Deep learning techniques offer an alternative to this approach, as they are able to automatically find an abstract representation of the musical content. In this work we investigate their use in the task of Music Structural Analysis. In particular, we compare the performance of several state-of-the-art algorithms working both with a collection of traditional descriptors and with descriptors extracted by a Deep Belief Network.},
  keywords = {audio signal processing;feature extraction;music;signal representation;unsupervised learning;unsupervised feature learning;music structural analysis;MSA;feature-based representation;audio signal representation;signal processing strategy;deep learning technique;Feature extraction;Signal processing algorithms;Music;Algorithm design and analysis;Training;Signal processing;Europe},
  doi = {10.1109/EUSIPCO.2016.7760397},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252145.pdf},
}

@InProceedings{7760398,
  author = {V. Tourbabin and B. Rafaely},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Analysis of distortion in audio signals introduced by microphone motion},
  year = {2016},
  pages = {998-1002},
  abstract = {Signals recorded by microphones form the basis for a wide range of audio signal processing systems. In some applications, such as humanoid robots, the microphones may be moving while recording the audio signals. A common practice is to assume that the microphone is stationary within a short time frame. Although this assumption may be reasonable under some conditions, there is currently no theoretical framework that predicts the level of signal distortion due to motion as a function of system parameters. This paper presents such a framework, for linear and circular microphone motion, providing upper bounds on the motion-induced distortion, and showing that the dependence of this upper bound on motion speed, signal frequency, and time-frame duration, is linear. A simulation study of a humanoid robot rotating its head while recording a speech signal validates the theoretical results.},
  keywords = {audio signal processing;humanoid robots;microphones;speech processing;distortion analysis;audio signal processing systems;humanoid robots;signal distortion;linear microphone motion;circular microphone motion;motion speed;signal frequency;time-frame duration;speech signal;Microphones;Distortion;Sensors;Angular velocity;Time-frequency analysis;Upper bound},
  doi = {10.1109/EUSIPCO.2016.7760398},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570253476.pdf},
}

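The stated linear dependence of the bound on motion speed, signal frequency, and frame duration is consistent with a simple path-length argument: a microphone moving at speed v changes the propagation delay by up to vT/c within a frame of length T, which for a component at frequency f means a worst-case phase drift of roughly 2*pi*f*v*T/c radians. A back-of-envelope check (the exact constant in the paper's bound may differ):

    import numpy as np

    c = 343.0  # speed of sound, m/s

    def max_phase_drift(v, f, T):
        """Worst-case intra-frame phase drift (rad) when a moving mic is assumed static."""
        return 2 * np.pi * f * v * T / c

    # e.g. a robot head sweeping at 0.5 m/s, a 4 kHz component, 32 ms frames:
    print(max_phase_drift(0.5, 4000.0, 0.032))   # ~1.17 rad across the frame
    # the drift is linear in each of v, f, and T, matching the shape of the paper's bound
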
@InProceedings{7760399,
  author = {Y. Dorfan and C. Evers and S. Gannot and P. A. Naylor},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Speaker localization with moving microphone arrays},
  year = {2016},
  pages = {1003-1007},
  abstract = {Speaker localization algorithms often assume static location for all sensors. This assumption simplifies the models used, since all acoustic transfer functions are linear time invariant. In many applications this assumption is not valid. In this paper we address the localization challenge with moving microphone arrays. We propose two algorithms to find the speaker position. The first approach is a batch algorithm based on the maximum likelihood criterion, optimized via expectation-maximization iterations. The second approach is a particle filter for sequential Bayesian estimation. The performance of both approaches is evaluated and compared for simulated reverberant audio data from a microphone array with two sensors.},
  keywords = {audio signal processing;Bayes methods;expectation-maximisation algorithm;iterative methods;maximum likelihood sequence estimation;microphone arrays;particle filtering (numerical methods);speaker recognition;moving microphone array;speaker localization algorithm;acoustic transfer function;linear time invariant;speaker position;batch algorithm;maximum likelihood criterion;expectation-maximization iteration;particle filter;sequential Bayesian estimation;reverberant audio data;Signal processing algorithms;Microphone arrays;Heuristic algorithms;Maximum likelihood estimation;Sensor arrays},
  doi = {10.1109/EUSIPCO.2016.7760399},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256254.pdf},
}

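The sequential Bayesian branch of this paper is a particle filter. A minimal, generic predict/weight/resample skeleton is sketched below; the likelihood is a placeholder Gaussian around a noisy position observation rather than the paper's reverberant-microphone-signal likelihood, and all noise levels are made up:

    import numpy as np

    rng = np.random.default_rng(0)

    def pf_step(particles, weights, obs, q=0.05, r=0.2):
        """One particle-filter step: random-walk predict, Gaussian weight, resample on low ESS."""
        particles = particles + rng.normal(0, q, particles.shape)
        lik = np.exp(-np.sum((particles - obs) ** 2, axis=1) / (2 * r ** 2))
        weights = weights * lik
        weights /= weights.sum()
        if 1.0 / np.sum(weights ** 2) < len(weights) / 2:       # effective sample size check
            idx = rng.choice(len(weights), len(weights), p=weights)
            particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
        return particles, weights

    true_pos = np.array([2.0, 1.5])                              # speaker position, metres
    particles = rng.uniform(0, 4, (500, 2))                      # uniform room prior
    weights = np.full(500, 1 / 500)
    for _ in range(30):
        obs = true_pos + rng.normal(0, 0.2, 2)                   # placeholder measurement
        particles, weights = pf_step(particles, weights, obs)
    print(weights @ particles)                                   # posterior mean, near true_pos
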
@InProceedings{7760400,
  author = {C. Evers and A. H. Moore and P. A. Naylor},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Localization of moving microphone arrays from moving sound sources for robot audition},
  year = {2016},
  pages = {1008-1012},
  abstract = {Acoustic Simultaneous Localization and Mapping (a-SLAM) jointly localizes the trajectory of a microphone array installed on a moving platform, whilst estimating the acoustic map of surrounding sound sources, such as human speakers. Whilst traditional approaches for SLAM in the vision and optical research literature rely on the assumption that the surrounding map features are static, in the acoustic case the positions of talkers are usually time-varying due to head rotations and body movements. This paper demonstrates that tracking of moving sources can be incorporated in a-SLAM by modelling the acoustic map as a Random Finite Set (RFS) of multiple sources and explicitly imposing models of the source dynamics. The proposed approach is verified and its performance evaluated for realistic simulated data.},
  keywords = {microphone arrays;SLAM (robots);acoustic map;random finite set;a-SLAM;acoustic simultaneous localization and mapping;robot audition;moving sound sources;moving microphone arrays localization;Robot sensing systems;Trajectory;Position measurement;Microphone arrays},
  doi = {10.1109/EUSIPCO.2016.7760400},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256259.pdf},
}

@InProceedings{7760401,
  author = {G. Bustamante and P. Danès and T. Forgue and A. Podlubne},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A one-step-ahead information-based feedback control for binaural active localization},
  year = {2016},
  pages = {1013-1017},
  abstract = {Fundamental limitations of binaural localization, such as front-back ambiguity or distance non-observability, can be overcome by combining the sensed audio signals with the sensor motor commands into “active” schemes. Such strategies can rely on stochastic filtering. In this context, this paper addresses the determination of an admissible motion of a binaural head leading, on average, to the one-step-ahead most informative localization. To this aim, a constrained optimization problem is set up, which consists in maximizing the entropy of the next predicted measurement probability density function over a cylindric admissible set. The proposed optimum policy is validated on real-life robotic experiments.},
  keywords = {entropy;feedback;filtering theory;microphone arrays;stochastic processes;stochastic programming;binaural active localization;one-step-ahead information-based feedback control;sensor motor command;audio signal sensing;stochastic filtering;constrained optimization problem;entropy maximization;measurement probability density function;cylindric admissible set;Robot kinematics;Robot sensing systems;Entropy;Microphones;Time measurement;Noise measurement},
  doi = {10.1109/EUSIPCO.2016.7760401},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256309.pdf},
}

@InProceedings{7760402,
  author = {Y. Bando and K. Itoyama and M. Konyo and S. Tadokoro and K. Nakadai and K. Yoshii and H. G. Okuno},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Variational Bayesian multi-channel robust NMF for human-voice enhancement with a deformable and partially-occluded microphone array},
  year = {2016},
  pages = {1018-1022},
  abstract = {This paper presents a human-voice enhancement method for a deformable and partially-occluded microphone array. Although microphone arrays distributed on the long bodies of hose-shaped rescue robots are crucial for finding victims under collapsed buildings, human voices captured by a microphone array are contaminated by non-stationary actuator and friction noise. Standard blind source separation methods cannot be used because the relative microphone positions change over time and some of them are occasionally shaded by rubble. To solve these problems, we develop a Bayesian model that separates multichannel amplitude spectrograms into sparse and low-rank components (human voice and noise) without using phase information, which depends on the array layout. The voice level at each microphone is estimated in a time-varying manner for reducing the influence of the shaded microphones. Experiments using a 3-m hose-shaped robot with eight microphones show that our method outperforms conventional methods in signal-to-noise ratio by 2.7 dB.},
  keywords = {Bayes methods;matrix decomposition;microphone arrays;rescue robots;speech enhancement;variational techniques;variational Bayesian multichannel robust NMF;human-voice enhancement;partially-occluded microphone array;deformable microphone array;hose-shaped rescue robot;nonstationary actuator;friction noise;signal-to-noise ratio;multichannel amplitude spectrogram;Spectrogram;Robots;Bayes methods;Human voice;Radio frequency;Microphone arrays},
  doi = {10.1109/EUSIPCO.2016.7760402},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256442.pdf},
}

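The decomposition described here, voice as a sparse component and ego-noise as low-rank on amplitude spectrograms, can be imitated with a GoDec-style alternation of truncated SVD and soft-thresholding. This is a generic robust-decomposition stand-in, not the paper's variational Bayesian model; rank, threshold, and the synthetic spectrogram are assumptions:

    import numpy as np

    def sparse_lowrank(X, rank=2, lam=0.5, n_iter=30):
        """Alternate: L = best rank-r fit of X - S; S = soft-thresholded residual X - L."""
        L = np.zeros_like(X); S = np.zeros_like(X)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            R = X - L
            S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
        return L, S

    rng = np.random.default_rng(1)
    noise = np.outer(rng.uniform(1, 2, 64), rng.uniform(1, 2, 200))  # rank-1 "ego-noise" floor
    voice = np.zeros((64, 200)); voice[8:12, 50:120] = 4.0           # sparse voice band
    L, S = sparse_lowrank(noise + voice)
    print(np.abs(S - voice).mean())   # small: voice lands in S (slightly shrunk by lam)
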
@InProceedings{7760403,
  author = {M. {van Walstijn} and J. Bridges},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Simulation of distributed contact in string instruments: A modal expansion approach},
  year = {2016},
  pages = {1023-1027},
  abstract = {Impactive contact between a vibrating string and a barrier is a strongly nonlinear phenomenon that presents several challenges in the design of numerical models for simulation and sound synthesis of musical string instruments. These are addressed here by applying Hamiltonian methods to incorporate distributed contact forces into a modal framework for discrete-time simulation of the dynamics of a stiff, damped string. The resulting algorithms have spectral accuracy, are unconditionally stable, and require solving a multivariate nonlinear equation that is guaranteed to have a unique solution. Exemplifying results are presented and discussed in terms of accuracy, convergence, and spurious high-frequency oscillations.},
  keywords = {acoustic signal processing;music;musical instruments;nonlinear equations;modal expansion approach;impactive contact;vibrating string;musical string instruments;sound synthesis;Hamiltonian method;distributed contact forces;modal framework;discrete-time simulation;stiff dynamics;damped string;spectral accuracy;multivariate nonlinear equation;high-frequency oscillations;Numerical models;Damping;Force;Mathematical model;Instruments;Numerical stability;Europe},
  doi = {10.1109/EUSIPCO.2016.7760403},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570254783.pdf},
}

@InProceedings{7760404,
  author = {M. Holters and U. Zölzer},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A k-d tree based solution cache for the non-linear equation of circuit simulations},
  year = {2016},
  pages = {1028-1032},
  abstract = {In the digital simulation of non-linear audio effect circuits, the arising non-linear equation generally poses the main challenge for a computationally cheap implementation. For any but the simplest circuits, using an iterative solver at execution time will be too slow, while exhaustive look-up tables quickly grow intolerably large. To better cope with the situation, in this paper we propose to store solutions non-uniformly sampled from the parameter space to enable an iterative solver to quickly converge when being started from the closest initial solution. Efficient look-up of this closest solution is realized by using a k-d tree. The method is supported by a step to reduce the dimension of the parameter space and a linear extrapolation from the closest solution stored to the actually needed parameter vector.},
  keywords = {analogue circuits;circuit simulation;digital simulation;extrapolation;iterative methods;nonlinear equations;table lookup;trees (mathematics);k-d tree-based solution cache;nonlinear audio effect circuit;digital simulation;nonlinear circuit simulation equation;iterative solver;exhaustive look-up table;linear extrapolation;Mathematical model;Signal processing;Europe;Capacitors;Newton method;Table lookup;Nonlinear equations},
  doi = {10.1109/EUSIPCO.2016.7760404},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255150.pdf},
}

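The mechanism described here, querying a k-d tree of stored solutions for the closest parameter point and warm-starting an iterative solver from it, can be sketched directly with scipy's cKDTree. The scalar circuit equation below (a diode-clipper-like implicit relation) and all constants are illustrative, not from the paper:

    import numpy as np
    from scipy.spatial import cKDTree

    # toy circuit nonlinearity: solve f(v; p) = v + 0.1*sinh(4*v) - p = 0 for input/state p
    def f(v, p):  return v + 0.1 * np.sinh(4 * v) - p
    def fp(v):    return 1 + 0.4 * np.cosh(4 * v)

    def newton(v0, p, tol=1e-10):
        v, n_it = v0, 0
        while abs(f(v, p)) > tol:
            v -= f(v, p) / fp(v); n_it += 1
        return v, n_it

    # build the solution cache, non-uniformly sampled over the parameter space
    ps = np.tanh(np.linspace(-3, 3, 40)) * 5
    v, cache = 0.0, []
    for p in ps:
        v, _ = newton(v, p)          # sweep p, reusing the previous solution
        cache.append(v)
    cache = np.array(cache)
    tree = cKDTree(ps[:, None])      # k-d tree over the (here 1-D) parameter space

    p_query = 1.234
    _, i = tree.query([[p_query]])   # closest stored parameter point
    v_star, its = newton(cache[i[0]], p_query)
    print(v_star, its)               # warm start: converges in very few iterations
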
@InProceedings{7760405,
  author = {K. J. Werner and W. R. Dunkel and M. Rest and M. J. Olsen and J. O. Smith},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Wave digital filter modeling of circuits with operational amplifiers},
  year = {2016},
  pages = {1033-1037},
  abstract = {We extend the Wave Digital Filter (WDF) approach to simulate reference circuits that involve operational amplifiers (op-amps). We handle both nullor-based ideal op-amp models and controlled-source-based linear op-amp macromodels in circuits with arbitrary topologies using recent derivations for complicated scattering matrices. The presented methods greatly increase the class of appropriate circuits for virtual analog modeling, and readily extend to circuits with any number of op-amps. Although op-amps are essential to many circuits and deviations from ideal can be important, previous WDF research applies only to the limited case of circuits with ideal op-amps, in differential amplifier topology, with no global feedback.},
  keywords = {active networks;differential amplifiers;matrix algebra;operational amplifiers;reference circuits;wave digital filters;differential amplifier topology;virtual analog modeling;complicated scattering matrices;controlled-source-based linear op-amp macromodel;nullor-based ideal op-amp model;reference circuit simulation;WDF approach;operational amplifier;wave digital filter circuit model;Integrated circuit modeling;Topology;Ports (Computers);Scattering;Europe;Digital filters},
  doi = {10.1109/EUSIPCO.2016.7760405},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255463.pdf},
}

@InProceedings{7760406,
  author = {A. Bernardini and A. Sarti},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Dynamic adaptation of instantaneous nonlinear bipoles in wave digital networks},
  year = {2016},
  pages = {1038-1042},
  abstract = {Accommodating multiple nonlinearities in a modular fashion and with no use of delay elements is a relevant unsolved problem in the literature on Wave Digital (WD) networks. In this work we present a method for adapting instantaneous NonLinear (NL) bipoles characterized by monotonically increasing curves. The method relies on the fact that the characteristic of a NL bipole can be described by a line which dynamically varies its slope and its intercept according to the actual operating point. This fact allows us to model a NL bipole as a linear real voltage generator with time-varying parameters. Dynamic adaptation makes it possible to accommodate multiple nonlinearities in the same WD network, ensuring the absence of delay-free loops. We show that diodes can be effectively modeled using the presented approach. As an example application of our method, we present an implementation of a diode-based audio limiter.},
  keywords = {audio signal processing;wave digital filters;instantaneous nonlinear bipole dynamic adaptation;wave digital network;WD network;NL bipole;linear real voltage generator;time-varying parameter;delay-free loop;diode-based audio limiter implementation;Ports (Computers);Mathematical model;Impedance;Adaptation models;Generators;Delays;Scattering},
  doi = {10.1109/EUSIPCO.2016.7760406},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255799.pdf},
}

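The operating-point-dependent line that such a method fits can be illustrated on a diode: linearizing the Shockley law i = Is*(exp(v/Vt) - 1) at the current operating point yields a time-varying slope (the dynamic conductance) and intercept, and in a WD setting a slope of this kind is what sets the port resistance used for adaptation. A small sketch with the paper's wave-domain bookkeeping omitted; the device constants are typical textbook values, not the paper's:

    import numpy as np

    Is, Vt = 1e-12, 0.02585   # Shockley model: saturation current, thermal voltage

    def diode_line(v0):
        """Slope and intercept of the tangent to the diode curve at operating point v0."""
        g = (Is / Vt) * np.exp(v0 / Vt)           # dynamic conductance (slope)
        i0 = Is * (np.exp(v0 / Vt) - 1.0)
        return g, i0 - g * v0                     # i ~ g*v + b near v0

    for v0 in (0.4, 0.5, 0.6):
        g, b = diode_line(v0)
        print(v0, g, b)   # the line is re-fit at every sample as the operating point moves
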
@InProceedings{7760407,
  author = {F. Esqueda and V. Välimäki and S. Bilbao},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Antialiased soft clipping using an integrated bandlimited ramp},
  year = {2016},
  pages = {1043-1047},
  abstract = {A new method for aliasing reduction in soft-clipping nonlinearities is proposed. Digital implementations of saturating systems introduce harmonic distortion which, if untreated, gets reflected at the Nyquist limit and is mixed with the signal. This is called aliasing and is heard as a disturbance. A new correction function, derived by integrating the bandlimited ramp function, is presented. This function reduces the level of aliasing distortion seen at the output of soft clippers by quasi-bandlimiting the discontinuities introduced in the second derivative of the signal. The proposed method increases the quality of the signal by attenuating those aliased components that lie on the lower end of the spectrum, which are known to be perceptually important. The four-point version of the algorithm reduces aliasing at low frequencies by up to about 50 dB. This work extends our understanding of aliasing in nonlinear systems and provides a new tool for its suppression in virtual analog models.},
  keywords = {audio signal processing;bandlimited signals;harmonic distortion;nonlinear systems;virtual analog models;nonlinear systems;aliased component attenuation;signal quality;aliasing distortion;bandlimited ramp function;correction function;Nyquist limit;harmonic distortion;soft-clipping nonlinearities;aliasing reduction;integrated bandlimited ramp;antialiased soft clipping;Europe;Harmonic analysis;Distortion;Audio systems;Signal processing algorithms;Mixers},
  doi = {10.1109/EUSIPCO.2016.7760407},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256244.pdf},
}

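For context on the size of the problem being solved, the sketch below measures the aliasing a trivially sampled hard clipper produces by comparing it against a heavily oversampled, band-limited reference. It illustrates the baseline such correction functions improve on; it does not implement the paper's integrated-bandlimited-ramp method, and the tone and clipping threshold are arbitrary:

    import numpy as np

    fs, N = 44100, 1 << 15
    f0 = 924 * fs / N                  # exactly periodic in the window: clean spectra
    clip = lambda x, th=0.6: np.clip(x, -th, th)   # corner => 2nd-derivative discontinuity

    naive = clip(np.sin(2 * np.pi * f0 * np.arange(N) / fs))        # trivially sampled clipper

    os = 16                                                         # 16x oversampled reference
    x_os = clip(np.sin(2 * np.pi * f0 * np.arange(N * os) / (fs * os)))
    X = np.fft.rfft(x_os)
    ref = np.fft.irfft(X[:N // 2 + 1] / os, N)                      # brickwall lowpass + decimate

    alias = naive - ref                                             # ~ the folded (aliased) energy
    snr = 10 * np.log10(np.sum(naive ** 2) / np.sum(alias ** 2))
    print(round(snr, 1), "dB signal-to-alias ratio for the naive clipper")
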
@InProceedings{7760408,
  author = {E. Falletti and B. Motella and M. T. Gamba},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Post-correlation signal analysis to detect spoofing attacks in GNSS receivers},
  year = {2016},
  pages = {1048-1052},
  abstract = {Due to the low level of the received power and to the known signal structure, Global Navigation Satellite System (GNSS) civil signals might be vulnerable to different sources of interference. Among them, spoofing attacks are considered among the most deceptive, since their aim is to control the output of the victim receiver. This paper presents a set of live experiments that validate the performance of a spoofing detection method, based on the Chi-square Goodness of Fit (GoF) test and applied post-correlation. Results are promising and show the GoF test's capability to successfully warn the user in case of a spoofing attack.},
  keywords = {radio receivers;radiofrequency interference;satellite navigation;signal detection;post-correlation signal analysis;spoofing attack detection;GNSS receiver;Global Navigation Satellite System receiver;interference source;Chi-square Goodness of Fit test;Chi-square GoF test;Receivers;Distortion;Correlators;Correlation;Global Positioning System;Delays;Satellites},
  doi = {10.1109/EUSIPCO.2016.7760408},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255806.pdf},
}

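The post-correlation Chi-square Goodness-of-Fit idea, testing whether correlator outputs still follow their nominal clean-signal distribution, can be sketched with scipy.stats.chisquare over equiprobable Gaussian bins. The nominal parameters, bin count, and the synthetic spoofed mixture below are illustrative:

    import numpy as np
    from scipy.stats import norm, chisquare

    rng = np.random.default_rng(2)

    def gof_spoof_test(samples, mu, sigma, n_bins=10, alpha=0.01):
        """Chi-square GoF: alarm if the samples leave the nominal (clean-signal) law."""
        edges = norm.ppf(np.linspace(0, 1, n_bins + 1), mu, sigma)   # equiprobable bins
        observed, _ = np.histogram(samples, edges)
        expected = np.full(n_bins, len(samples) / n_bins)
        stat, pval = chisquare(observed, expected)
        return pval < alpha                                          # True => spoofing alarm

    clean = rng.normal(0.0, 1.0, 2000)                   # nominal correlator noise
    spoofed = clean + 0.8 * (rng.random(2000) > 0.5)     # counterfeit peak biases half of them
    print(gof_spoof_test(clean, 0, 1), gof_spoof_test(spoofed, 0, 1))   # False True
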
@InProceedings{7760409,
  author = {K. Mazher and M. Tahir},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Small cycle slip detection using singular spectrum analysis},
  year = {2016},
  pages = {1053-1057},
  abstract = {In Global Navigation Satellite System (GNSS) based positioning, the use of carrier phase measurements is widening day by day due to their preciseness compared to code delay measurements. Although carrier phase measurements are precise, they suffer from anomalies such as cycle slips and receiver clock jumps, in addition to other error sources such as satellite-user dynamics and atmospheric delays. On the one hand, the detection and exclusion of these anomalies is critical for accurate and reliable positioning. On the other hand, it is very difficult to detect these anomalies, especially in a dynamic environment with irregular user dynamics. We propose a novel algorithm for separating and localizing these anomalies from the satellite-user dynamics. The proposed approach is based on extracting the singular spectrum of windowed carrier phase measurements. An optimal choice of different parameters ensures that the extracted singular spectrum is affected only by anomalies such as cycle slips and is independent of satellite-user dynamics. Simulation results, supported by real GNSS data analysis, indicate improved accuracy and enhanced robustness against such anomalies with respect to the traditional approach using optimal time differencing.},
  keywords = {delays;phase measurement;satellite navigation;spectral analysis;small cycle slip detection;singular spectrum analysis;global navigation satellite system;GNSS;carrier phase measurement;code delay measurement;receiver clock jump;satellite-user dynamics;atmospheric delay;optimal time differencing;anomaly detection;Phase measurement;Receivers;Vehicle dynamics;Delays;Heuristic algorithms;Satellites;Clocks;GNSS;Singular spectrum;Anomaly detection;Cycle slips;Single frequency receiver},
  doi = {10.1109/EUSIPCO.2016.7760409},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256149.pdf},
}

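The singular-spectrum machinery referenced here amounts to: embed the windowed phase series into a Hankel trajectory matrix, take its SVD, reconstruct the dominant (smooth-dynamics) components by diagonal averaging, and inspect the residual for jumps. A compact sketch; the window length, component count, and synthetic half-cycle slip are illustrative:

    import numpy as np

    def ssa_residual(y, L=30, r=3):
        """Remove the r leading SSA components (smooth dynamics); residual exposes jumps."""
        K = len(y) - L + 1
        H = np.column_stack([y[i:i + L] for i in range(K)])   # L x K trajectory (Hankel) matrix
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        Hr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r smooth reconstruction
        rec = np.zeros(len(y)); cnt = np.zeros(len(y))
        for i in range(K):                                    # diagonal averaging back to a series
            rec[i:i + L] += Hr[:, i]; cnt[i:i + L] += 1
        return y - rec / cnt

    t = np.arange(400)
    phase = 0.05 * t + 2e-4 * t**2 + 0.02 * np.sin(2 * np.pi * t / 80)  # smooth user dynamics
    phase[250:] += 0.5                                                  # small cycle slip
    res = ssa_residual(phase)
    print(np.argmax(np.abs(np.diff(res))))                              # ~250: slip localized
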
@InProceedings{7760410,
  author = {G. Giorgi and M. Lülf and C. Günther and S. Herrmann and D. Kunst and F. Finke and C. Lämmerzahl},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Testing general relativity using Galileo satellite signals},
  year = {2016},
  pages = {1058-1062},
  abstract = {The two Galileo satellites launched in 2014 (E14 and E18) were injected in orbits with a significant eccentricity. Both the gravitational potential at the location of the satellites and their velocity thus change as a function of time. Since the Galileo satellites carry very stable clocks, these can potentially be used to set new bounds on the level of agreement between measurements of the clocks' frequency shifts and their prediction by the theory of relativity. This paper presents some initial results obtained by processing available data from Galileo satellite E18.},
  keywords = {celestial mechanics;general relativity;gravitational red shift;gravitational waves;general relativity;eccentricity;gravitational potential;clock frequency shifts;Galileo satellite E18;gravitational redshift;Satellites;Clocks;Earth;Extraterrestrial measurements;Delays;Orbits;Phase measurement},
  doi = {10.1109/EUSIPCO.2016.7760410},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256226.pdf},
}
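For orientation, the standard first-order expressions behind this kind of experiment are sketched below: the clock's fractional frequency offset combines the gravitational potential difference with second-order Doppler, and an eccentric orbit turns both into a periodic signature. These are textbook relations (sign conventions vary between references), not formulas taken from the paper.

```latex
% Fractional frequency shift: potential difference plus second-order Doppler.
\frac{\Delta f}{f} \;=\; \frac{\Delta U}{c^{2}} \;-\; \frac{v^{2}}{2c^{2}} ,
% Periodic clock correction induced by orbital eccentricity e, with
% gravitational parameter \mu, semi-major axis a, eccentric anomaly E:
\Delta t_{\mathrm{rel}} \;=\; -\,\frac{2\sqrt{\mu a}}{c^{2}}\, e \sin E .
```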
@InProceedings{7760411,
  author = {P. Henkel and U. Mittmann and M. Iafrancesco},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Real-time kinematic positioning with GPS and GLONASS},
  year = {2016},
  pages = {1063-1067},
  abstract = {Recently, low-cost dual-constellation receivers with simultaneous tracking of GPS and GLONASS satellites and raw carrier phase output have become available. This enables faster and more reliable ambiguity fixing, especially in areas with limited satellite visibility, e.g. street canyons. In this paper, we present combined GPS/GLONASS RTK positioning with code multipath estimation for low-cost receivers. We use double difference measurements and take the integer property of both the GPS and GLONASS ambiguities into account. For GLONASS, this requires a reparameterization of the double difference ambiguities and a subsequent parameter mapping. The reparameterized measurement models are used for ambiguity fixing with real measurements from two low-cost dual-constellation GNSS receivers. We obtain residuals of a few centimeters for both GPS and GLONASS fixed carrier phases.},
  keywords = {Global Positioning System;satellite tracking;real-time kinematic positioning;GPS satellite tracking;GLONASS satellite tracking;low-cost dual constellation receiver;raw carrier phase output;ambiguity fixing;limited satellite visibility;GPS RTK positioning;GLONASS RTK positioning;code multipath estimation;GLONASS ambiguity;GPS ambiguity;integer property;double difference ambiguity reparametrization;parameter mapping;reparameterized measurement model;low-cost dual constellation GNSS receiver;Global Positioning System;Receivers;Satellites;Phase measurement;Position measurement;Europe;Signal processing},
  doi = {10.1109/EUSIPCO.2016.7760411},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256363.pdf},
}
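The double-difference step named above is simple enough to show directly. A minimal sketch for one epoch follows, assuming carrier-phase arrays indexed by satellite; the GLONASS reparameterization that restores the integer ambiguity property is the paper's contribution and is not reproduced here.

```python
# Minimal double-difference sketch: single differences across receivers
# cancel the satellite clocks; differencing against a reference satellite
# then cancels the common receiver clock. Inputs are carrier phases in
# cycles for one epoch; `ref` is the reference satellite index.
import numpy as np

def double_differences(phase_rover, phase_base, ref=0):
    sd = phase_rover - phase_base        # satellite clock terms cancel
    dd = np.delete(sd - sd[ref], ref)    # receiver clock term cancels
    return dd
```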
@InProceedings{7760412,
  author = {K. Ichige and N. Yotsueda},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Weighted shooting method for high-resolution DOA estimation based on sparse spectrum fitting},
  year = {2016},
  pages = {1068-1072},
  abstract = {This paper presents a weighted version of the shooting method for high-resolution Direction Of Arrival (DOA) estimation based on Sparse Spectrum Fitting (SpSF). SpSF is one of the sparse DOA estimation methods based on ℓ1-regularization, which uses a penalty term in the optimization. The regularization process often takes a long time and is very sensitive to the penalty term, whose appropriate value is difficult to know in advance. We recall that the shooting method is a computationally efficient ℓ1-regularization algorithm, but it is still sensitive to the penalty term. We modify the shooting method to solve a weighted ℓ1-regularization problem so that it becomes less sensitive to this term. The performance of the proposed method is evaluated in comparison with several conventional methods through computer simulations.},
  keywords = {direction-of-arrival estimation;weighted shooting method;high-resolution DOA estimation;sparse spectrum fitting;high-resolution direction of arrival estimation;SpSF;sparse DOA estimation methods;ℓ1-regularization;computer simulation;Direction-of-arrival estimation;Estimation;Signal to noise ratio;Optimization;Array signal processing;Fitting;direction of arrival estimation;array signal processing},
  doi = {10.1109/EUSIPCO.2016.7760412},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570258621.pdf},
}
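The shooting method referenced above is coordinate descent with soft thresholding. A real-valued sketch of its weighted variant follows; the per-coefficient weights `w` stand in for the paper's specific weighting rule, which is not reproduced, and DOA dictionaries are typically complex-valued rather than real.

```python
# Minimal weighted shooting sketch for
#   min_x  0.5 * ||y - A x||^2 + sum_i w_i |x_i|
# via cyclic coordinate descent with soft thresholding.
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_shooting(A, y, w, n_iter=100):
    x = np.zeros(A.shape[1])
    col_sq = np.sum(A ** 2, axis=0)
    r = y.astype(float).copy()               # residual, starts at y since x = 0
    for _ in range(n_iter):
        for i in range(A.shape[1]):
            r += A[:, i] * x[i]              # residual without coordinate i
            x[i] = soft(A[:, i] @ r, w[i]) / col_sq[i]
            r -= A[:, i] * x[i]
    return x
```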
@InProceedings{7760413,
  author = {T. L. Jensen and D. Giacobello and T. {van Waterschoot} and M. G. Christensen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Computational analysis of a fast algorithm for high-order sparse linear prediction},
  year = {2016},
  pages = {1073-1077},
  abstract = {Using a sparsity-promoting convex penalty function on high-order linear prediction coefficients and residuals has been shown to result in improved modeling of speech and other signals, as this addresses the inherent limitations of standard linear prediction methods. However, this new formulation is computationally more demanding, which may limit its use, in particular for embedded signal processing. This paper analyzes the algorithmic and computational aspects of the matrix structures associated with an alternating direction method of multipliers algorithm for solving the convex high-order sparse linear prediction problem. The paper also analyzes the inherent trade-off between accuracy and the objective measure of prediction gain, and shows that a few iterations are sufficient to achieve results similar to those of computationally more expensive interior-point methods.},
  keywords = {convex programming;iterative methods;linear programming;matrix algebra;prediction theory;speech processing;convex high-order sparse linear prediction problem;alternating direction method of multiplier algorithm;matrix structure algorithmic aspect;matrix structure computational aspect;embedded signal processing;standard linear prediction method;speech modeling;high-order linear prediction coefficient;convex penalty function;fast algorithm computational analysis;Signal processing algorithms;Prediction algorithms;Speech;Predictive models;Signal processing;IP networks;Linear systems;Sparse linear prediction;speech and audio processing;convex optimization;linear programming;embedded optimization},
  doi = {10.1109/EUSIPCO.2016.7760413},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570254423.pdf},
}
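The ADMM iteration analyzed in the paper has the familiar three-step structure. The sketch below shows it for an ℓ1-penalized least-squares core, with a dense Cholesky factorization standing in for the structured (Toeplitz-exploiting) solvers the paper analyzes; variable names and parameters are illustrative.

```python
# Generic ADMM sketch for  min_a  0.5 * ||x - X a||^2 + gamma * ||a||_1.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def admm_lasso(X, x, gamma, rho=1.0, n_iter=50):
    n = X.shape[1]
    XtX, Xtx = X.T @ X, X.T @ x
    chol = cho_factor(XtX + rho * np.eye(n))   # factor once, reuse each iteration
    a, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        a = cho_solve(chol, Xtx + rho * (z - u))                  # quadratic step
        z = np.sign(a + u) * np.maximum(np.abs(a + u) - gamma / rho, 0.0)  # prox
        u = u + a - z                                             # dual update
    return z
```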
@InProceedings{7760414,
  author = {P. Sopasakis and N. Freris and P. Patrinos},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Accelerated reconstruction of a compressively sampled data stream},
  year = {2016},
  pages = {1078-1082},
  abstract = {The traditional compressed sensing approach is naturally offline, in that it amounts to sparsely sampling and reconstructing a given dataset. Recently, an online algorithm for performing compressed sensing on streaming data was proposed: the scheme uses recursive sampling of the input stream and recursive decompression to accurately estimate stream entries from the acquired noisy measurements. In this paper, we develop a novel Newton-type forward-backward proximal method to recursively solve the regularized least-squares problem (LASSO) online. We establish global convergence of our method as well as a local quadratic convergence rate. Our simulations show a substantial speed-up over the state of the art which may render the proposed method suitable for applications with stringent real-time constraints.},
  keywords = {compressed sensing;data compression;least squares approximations;accelerated reconstruction;compressively sampled data stream;compressed sensing approach;noisy measurements;Newton-type forward-backward proximal method;least-squares problem;quadratic convergence rate;signal processing;Signal processing algorithms;Compressed sensing;Convergence;Newton method;Signal processing;Acceleration;Noise measurement;Compressed sensing;operator splitting methods;recursive algorithms;LASSO;Forward Backward Splitting},
  doi = {10.1109/EUSIPCO.2016.7760414},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255196.pdf},
}
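For context, a plain forward-backward (proximal gradient) sketch of the LASSO decoder, warm-started from the previous stream estimate. The paper's Newton-type metric, which is what produces the reported speed-up, is replaced here by an ordinary 1/L gradient step.

```python
# Minimal forward-backward sketch for  min_x 0.5 * ||A x - y||^2 + lam * ||x||_1,
# warm-started across stream windows via x0.
import numpy as np

def fb_lasso(A, y, lam, x0=None, n_iter=100):
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()  # warm start
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)               # forward (gradient) step
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # backward (prox)
    return x
```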
@InProceedings{7760415,
  author = {T. Sherson and R. Heusdens and W. B. Kleijn},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On the duality of globally constrained separable problems and its application to distributed signal processing},
  year = {2016},
  pages = {1083-1087},
  abstract = {In this paper, we focus on the challenge of processing data generated within decentralised wireless sensor networks in a distributed manner. When the desired operations can be expressed as globally constrained separable convex optimisation problems, we show how we can convert these to extended monotropic programs and exploit Lagrangian duality to form equivalent distributed consensus problems. Such problems can be embedded in sensor network applications via existing solvers such as the alternating direction method of multipliers or the primal dual method of multipliers. We then demonstrate how this approach can be used to solve specific problems including linearly constrained quadratic problems and the classic Gaussian channel capacity maximisation problem in a distributed manner.},
  keywords = {channel capacity;convex programming;duality (mathematics);Gaussian channels;quadratic programming;signal processing;wireless sensor networks;distributed signal processing;decentralised wireless sensor network;globally constrained separable convex optimisation problem;extended monotropic program;Lagrangian duality;distributed consensus problem;linearly constrained quadratic problem;classic Gaussian channel capacity maximisation problem;Signal processing;Optimization;Wireless sensor networks;Channel capacity;Signal processing algorithms;Europe;Distributed databases;Wireless sensor networks;distributed signal processing;Lagrangian duality;extended monotropic programs},
  doi = {10.1109/EUSIPCO.2016.7760415},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255703.pdf},
}
@InProceedings{7760416,
  author = {V. M. Tavakoli and J. R. Jensen and R. Heusdens and J. Benesty and M. G. Christensen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Ad hoc microphone array beamforming using the primal-dual method of multipliers},
  year = {2016},
  pages = {1088-1092},
  abstract = {In recent years, there has been an increasing amount of research aiming at optimal beamforming with ad hoc microphone arrays, mostly via fusion-center-based schemes. However, high computational complexity and communication overhead prevent many of these algorithms from being useful in practice. In this paper, we propose a low-footprint optimization approach to reduce the convergence time and overhead of the distributed beamforming problem. We adopt the pseudo-coherence-based beamforming formulation, which is well suited to taking the nature of speech into account. We formulate the distributed minimum variance distortionless response beamformer using the primal-dual method of multipliers. Our experiments confirm the fast convergence of the proposed distributed algorithm. It is also shown how a hard limit on the number of iterations affects the performance of the array in noise and interference suppression.},
  keywords = {array signal processing;computational complexity;convergence of numerical methods;interference suppression;iterative methods;microphone arrays;speech enhancement;ad hoc microphone array beamforming;primal-dual method of multipliers;optimal beamforming;fusion-center-based scheme;computational complexity;communication overhead;low-footprint optimization approach;convergence time reduction;distributed beamforming problem;pseudo-coherence-based beamforming;minimum variance distortionless response beamformer;iteration algorithm;noise suppression;interference suppression;array performance;speech enhancement;Array signal processing;Optimization;Microphone arrays;Ad hoc networks;Signal processing algorithms;Speech enhancement;ad hoc microphone array;distributed beamforming;primal-dual method of multipliers},
  doi = {10.1109/EUSIPCO.2016.7760416},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256167.pdf},
}
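For reference, the centralized minimum variance distortionless response solution that distributed iterations of this kind converge to, per frequency bin. R is a noise(-plus-interference) covariance estimate and d a steering or relative transfer function vector, both assumed given; the distributed PDMM solver itself is not reproduced.

```python
# Minimal narrowband MVDR sketch:  w = R^{-1} d / (d^H R^{-1} d).
import numpy as np

def mvdr_weights(R, d):
    Rinv_d = np.linalg.solve(R, d)          # R^{-1} d without an explicit inverse
    return Rinv_d / (d.conj() @ Rinv_d)     # distortionless normalization

def beamform(w, Y):
    """Apply weights w (M,) to one bin's STFT frames Y (M, n_frames)."""
    return w.conj() @ Y
```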
@InProceedings{7760417,
  author = {T. Kronvall and F. Elvander and S. I. Adalbjörnsson and A. Jakobsson},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Multi-pitch estimation via fast group sparse learning},
  year = {2016},
  pages = {1093-1097},
  abstract = {In this work, we consider the problem of multi-pitch estimation using sparse heuristics and convex modeling. In general, this is a difficult non-linear optimization problem, as the frequencies belonging to one pitch often overlap the frequencies belonging to other pitches, thereby causing ambiguity between pitches with similar frequency content. The problem is further complicated by the fact that the number of pitches is typically not known. In this work, we propose a sparse modeling framework using a generalized chroma representation in order to remove redundancy and lower the dictionary's block-coherency. The found chroma estimates are then used to solve a small convex problem, whereby spectral smoothness is enforced, resulting in the corresponding pitch estimates. Compared with previously published sparse approaches, the resulting algorithm reduces the computational complexity of each iteration, as well as speeding up the overall convergence.},
  keywords = {computational complexity;nonlinear programming;redundancy;spectral analysis;computational complexity reduction;spectral smoothness;block-coherency;redundancy removal;generalized chroma representation;nonlinear optimization problem;convex modeling;sparse heuristics;fast group sparse learning;multipitch estimation;Harmonic analysis;Estimation;Dictionaries;Tuning;Europe;Optimization;multi-pitch estimation;group lasso;data-adaptive dictionary;generalized chroma features},
  doi = {10.1109/EUSIPCO.2016.7760417},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256352.pdf},
}
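The group-sparse machinery behind methods of this kind is block soft thresholding, the proximal operator of the group lasso penalty: a whole pitch candidate, i.e. its group of harmonic coefficients, survives or is zeroed together. A minimal sketch follows, with `groups` as a list of index arrays per pitch candidate and an illustrative threshold `lam`.

```python
# Block soft-thresholding: prox of  lam * sum_g ||x_g||_2.
import numpy as np

def group_soft(x, groups, lam):
    out = x.copy()
    for idx in groups:
        g = x[idx]
        norm = np.linalg.norm(g)
        # Shrink the whole group toward zero; kill it if its norm is small.
        out[idx] = 0.0 if norm <= lam else (1.0 - lam / norm) * g
    return out
```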
@InProceedings{7760418,
  author = {D. Milioris},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Towards dynamic classification completeness in Twitter},
  year = {2016},
  pages = {1098-1102},
  abstract = {In this paper we study the application of Matrix Completion to topic detection and classification in Twitter. The proposed method first employs Joint Complexity to perform topic detection based on score matrices. Based on the spatial correlation of tweets and the spatial characteristics of the score matrices, we apply a novel framework which extends Matrix Completion to dynamically build complete matrices from a small number of randomly sampled Joint Complexity scores. The experimental evaluation with real data from Twitter demonstrates the topic detection accuracy achieved with the reconstructed matrices, thus reducing the exhaustive computation of Joint Complexity scores.},
  keywords = {social networking (online);joint complexity;topic classification;topic detection;matrix completion;Twitter;dynamic classification completeness;Twitter;Complexity theory;Correlation;Training;Europe;Signal processing;Markov processes},
  doi = {10.1109/EUSIPCO.2016.7760418},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251442.pdf},
}
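A minimal singular value thresholding (SVT) sketch of the matrix completion step, where `mask` marks the observed Joint Complexity scores. The threshold `tau`, step size, and iteration budget are illustrative, and the paper's dynamic extension is not reproduced.

```python
# SVT-style matrix completion: alternate singular-value shrinkage with a
# gradient step on the observed entries.
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.2, n_iter=200):
    Y = np.zeros_like(M_obs, dtype=float)
    X = Y
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt     # shrink singular values
        Y = Y + step * mask * (M_obs - X)           # correct observed entries
    return X
```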
@InProceedings{7760419,
  author = {E. C. Ozan and S. Kiranyaz and M. Gabbouj and X. Hu},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Self-organizing binary encoding for Approximate Nearest Neighbor search},
  year = {2016},
  pages = {1103-1107},
  abstract = {Approximate Nearest Neighbor (ANN) search for indexing and retrieval has become very popular with the recent growth of databases in both size and dimension. In this paper, we propose a novel method for fast approximate distance calculation among compressed samples. Inspired by Kohonen's self-organizing maps, we propose a structured hierarchical quantization scheme in order to compress database samples more efficiently. Moreover, we introduce an error correction stage for encoding, which further improves the performance of the proposed method. The results on publicly available benchmark datasets demonstrate that the proposed method outperforms many well-known methods at comparable computational cost and storage space.},
  keywords = {approximation theory;database management systems;indexing;pattern classification;query formulation;self-organising feature maps;self-organizing binary encoding;approximate nearest neighbor search;ANN search;indexing;information retrieval;databases;approximate distance calculation;Kohonen self-organizing maps;structured hierarchical quantization;Neurons;Quantization (signal);Encoding;Self-organizing feature maps;Optimization;Training;Approximate Nearest Neighbor search;Quantization;Binary Feature Synthesis;Vector Compression;Self-Organizing Maps},
  doi = {10.1109/EUSIPCO.2016.7760419},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252189.pdf},
}
@InProceedings{7760420,
  author = {L. Polok and V. Ila and P. Smrz},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {3D reconstruction quality analysis and its acceleration on GPU clusters},
  year = {2016},
  pages = {1108-1112},
  abstract = {3D reconstruction has a wide variety of applications in computer graphics, robotics or digital cinema production, among others. With the rapid increase in computing power, it has become more feasible for the reconstruction algorithms to run online, even on mobile devices. Maximum likelihood estimation (MLE) is the adopted technique to deal with the sensor uncertainty. Most of the existing 3D reconstruction frameworks only recover the mean of the reconstructed geometry. Recovering also the variance is highly computationally intensive and is seldom performed. However, variance is the natural choice of estimate quality indicator. In this paper, the associated costs are analyzed and efficient but exact solutions to calculating partial matrix inverses are proposed, which apply to any general problem with many mutually independent variables. Speedups exceeding an order of magnitude are reported.},
  keywords = {graphics processing units;image reconstruction;matrix algebra;maximum likelihood estimation;3D reconstruction quality analysis;GPU cluster;graphics processing unit;maximum likelihood estimation;MLE;quality indicator estimation;matrix inverse;Cameras;Optimization;Three-dimensional displays;Maximum likelihood estimation;Sparse matrices;Covariance matrices;Europe},
  doi = {10.1109/EUSIPCO.2016.7760420},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255061.pdf},
}
@InProceedings{7760421,
  author = {A. Iosifidis and A. Tefas and I. Pitas and M. Gabbouj},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A review of approximate methods for kernel-based big media data analysis},
  year = {2016},
  pages = {1113-1117},
  abstract = {With the increasing size of today's image and video data sets, standard pattern recognition approaches, like kernel-based learning, need to face new challenges. Kernel-based methods require the storage and manipulation of the kernel matrix, whose dimensions equal the number of training samples. When the data set cardinality becomes large, the application of kernel methods becomes intractable. Approximate kernel-based learning approaches have been proposed in order to reduce the time and space complexities of kernel methods while achieving satisfactory performance. In this paper, we provide an overview of such approximate kernel-based learning approaches finding application in media data analysis.},
  keywords = {approximation theory;Big Data;computational complexity;image recognition;learning (artificial intelligence);matrix algebra;operating system kernels;video signal processing;approximation method;kernel-based big media data analysis;image data set;video data set;standard pattern recognition approach;kernel matrix manipulation;kernel matrix storage;kernel-based learning approach;time complexity reduction;space complexity reduction;Kernel;Matrix decomposition;Media;Standards;Training;Data analysis;Time complexity},
  doi = {10.1109/EUSIPCO.2016.7760421},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255839.pdf},
}
@InProceedings{7760422,
  author = {S. Abdallah and E. Benetos and N. Gold and S. Hargreaves and T. Weyde and D. Wolff},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Digital music lab: A framework for analysing big music data},
  year = {2016},
  pages = {1118-1122},
  abstract = {In the transition from traditional to digital musicology, large scale music data are increasingly becoming available which require research methods that work on the collection level and at scale. In the Digital Music Lab (DML) project, a software system has been developed that provides large-scale analysis of music audio with an interactive interface. The DML system includes distributed processing of audio and other music data, remote analysis of copyright-restricted data, logical inference on the extracted information and metadata, and visual web-based interfaces for exploring and querying music collections. A system prototype has been set up in collaboration with the British Library and I Like Music Ltd, which has been used to analyse a diverse corpus of over 250,000 music recordings. In this paper we describe the system requirements, architecture, components, and data sources, explaining their interaction. Use cases and applications with initial evaluations of the proposed system are also reported.},
  keywords = {Big Data;data analysis;inference mechanisms;interactive systems;meta data;music;query processing;software engineering;user interfaces;digital music lab;DML system;big music data analysis;software system development;interactive interface;music audio processing;logical inference;metadata;music collection querying;Feature extraction;Metadata;Histograms;Data mining;Libraries;Audio recording;Servers},
  doi = {10.1109/EUSIPCO.2016.7760422},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255859.pdf},
}
@InProceedings{7760423,
  author = {O. Schwartz and S. Gannot and E. A. P. Habets},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Joint estimation of late reverberant and speech power spectral densities in noisy environments using the Frobenius norm},
  year = {2016},
  pages = {1123-1127},
  abstract = {Various dereverberation and noise reduction algorithms require power spectral density estimates of the anechoic speech, reverberation, and noise. In this work, we derive a novel multichannel estimator for the power spectral densities (PSDs) of the reverberation and the speech, suitable also for noisy environments. The speech and reverberation PSDs are estimated from all the entries of the received signals' power spectral density (PSD) matrix. The Frobenius norm of a general error matrix is minimized to find the best fitting PSDs. Experimental results show that the proposed estimator provides accurate estimates of the PSDs and outperforms competing estimators. Moreover, when used in a multi-microphone noise reduction and dereverberation algorithm, the estimated reverberation and speech PSDs are shown to provide improved performance measures compared with the competing estimators.},
  keywords = {matrix algebra;speech processing;Frobenius norm;dereverberation algorithm;noise reduction algorithm;anechoic speech;power spectral density matrix;multimicrophone noise reduction;speech power spectral densities;Speech;Reverberation;Microphones;Noise measurement;Maximum likelihood estimation;Noise reduction},
  doi = {10.1109/EUSIPCO.2016.7760423},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251927.pdf},
}
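One plausible reading of such an estimator (the exact signal model here is an assumption, for illustration only): write the PSD-matrix model as Phi ≈ phi_s·dd^H + phi_r·Gamma + Phi_noise, stack all matrix entries, and minimize the Frobenius-norm error, which reduces to a linear least-squares problem per time-frequency bin.

```python
# Frobenius-norm PSD fit per bin: d (steering vector), Gamma (diffuse
# coherence matrix), and Phi_noise are assumed known; Phi is the estimated
# PSD matrix of the received signals.
import numpy as np

def fit_psds(Phi, d, Gamma, Phi_noise):
    B1 = np.outer(d, d.conj())                          # rank-1 speech model
    A = np.column_stack([B1.ravel(), Gamma.ravel()])    # entrywise regressors
    b = (Phi - Phi_noise).ravel()
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.real(sol)                                 # (phi_speech, phi_reverb)
```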
@InProceedings{7760424,
  author = {A. Mesaros and T. Heittola and T. Virtanen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {TUT database for acoustic scene classification and sound event detection},
  year = {2016},
  pages = {1128-1132},
  abstract = {We introduce the TUT Acoustic Scenes 2016 database for environmental sound research, consisting of binaural recordings from 15 different acoustic environments. A subset of this database, called TUT Sound Events 2016, contains annotations for individual sound events, specifically created for sound event detection. TUT Sound Events 2016 consists of residential area and home environments, and is manually annotated to mark onset, offset and label of sound events. In this paper we present the recording and annotation procedure, the database content, a recommended cross-validation setup, and the performance of a supervised acoustic scene classification system and an event detection baseline system using mel frequency cepstral coefficients and Gaussian mixture models. The database is publicly released to provide support for algorithm development and common ground for comparison of different techniques.},
  keywords = {audio recording;audio signal processing;TUT database;acoustic scene classification;sound event detection;environmental sound research;binaural recordings;mel frequency cepstral coefficients;Gaussian mixture models;Event detection;Databases;Automobiles;Signal processing;Mel frequency cepstral coefficient;Europe},
  doi = {10.1109/EUSIPCO.2016.7760424},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251932.pdf},
}
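A baseline of the kind named in the abstract (MFCC features, one GMM per scene class) is a few lines with common libraries. A sketch assuming librosa and scikit-learn are available, with illustrative hyperparameters rather than the official baseline settings:

```python
# MFCC + GMM scene classification sketch: fit one GMM per scene, classify a
# recording by the highest average frame log-likelihood.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def train_scene_models(files_by_scene, n_mfcc=20, n_components=16):
    models = {}
    for scene, files in files_by_scene.items():
        feats = []
        for f in files:
            y, sr = librosa.load(f, sr=44100)
            feats.append(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T)
        models[scene] = GaussianMixture(n_components).fit(np.vstack(feats))
    return models

def classify(models, f, n_mfcc=20):
    y, sr = librosa.load(f, sr=44100)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T
    return max(models, key=lambda scene: models[scene].score(mfcc))
```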
@InProceedings{7760425,
  author = {E. Fotiadou and N. Bassiou and C. Kotropoulos},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Greek folk music classification using auditory cortical representations},
  year = {2016},
  pages = {1133-1137},
  abstract = {In this paper, we deal with the classification of Greek folk songs into 8 classes associated with the region of origin of the songs. Motivated by the way the sound is perceived by the human auditory system, auditory cortical representations are extracted from the music recordings. Moreover, deep canonical correlation analysis (DCCA) is applied to the auditory cortical representations for dimensionality reduction. To classify the music recordings, either support vector machines (SVMs) or classifiers based on canonical correlation are employed. An average classification rate of 73.25% is measured on a dataset of Greek folk songs from 8 regions, when the auditory cortical representations are classified by the SVMs. It is also demonstrated that the reduced features extracted by the DCCA yield an encouraging average classification rate of 66.27%. The latter features are shown to possess good discriminating properties.},
  keywords = {acoustic correlation;acoustic signal processing;feature extraction;signal classification;signal representation;support vector machines;Greek folk music classification;auditory cortical representation;Greek folk song classification;human auditory system;deep canonical correlation analysis;dimensionality reduction;support vector machine;SVM;canonical correlation;average classification rate;feature extraction;DCCA;Correlation;Time-frequency analysis;Auditory system;Feature extraction;Europe;Signal processing;Music},
  doi = {10.1109/EUSIPCO.2016.7760425},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252053.pdf},
}
@InProceedings{7760426,
  author = {H. Kagami and M. Yukawa},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Supervised nonnegative matrix factorization with Dual-Itakura-Saito and Kullback-Leibler divergences for music transcription},
  year = {2016},
  pages = {1138-1142},
  abstract = {In this paper, we present a convex-analytic approach to supervised nonnegative matrix factorization (SNMF) based on the Dual-Itakura-Saito (Dual-IS) and Kullback-Leibler (KL) divergences for music transcription. The Dual-IS and KL divergences define convex fidelity functions, whereas the IS divergence defines a nonconvex one. The SNMF problem is formulated as minimizing the divergence-based fidelity function penalized by the ℓ1 and row-block ℓ1 norms subject to the nonnegativity constraint. Simulation results show that (i) the use of the Dual-IS and KL divergences yields better performance than the squared Euclidean distance and that (ii) the use of the Dual-IS divergence efficiently prevents false alarms.},
  keywords = {convex programming;matrix decomposition;minimisation;music;supervised nonnegative matrix factorization;SNMF;Dual-Itakura-Saito divergence;Kullback-Leibler divergence;music transcription;convex-analytic approach;minimization;fidelity function penalization;Euclidean distance;Dictionaries;Europe;Signal processing;Optimization;Simulation;Music},
  doi = {10.1109/EUSIPCO.2016.7760426},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252086.pdf},
}
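For context, the standard KL multiplicative activation update for supervised NMF with a fixed trained dictionary W is sketched below; the paper's Dual-IS fidelity and its ℓ1 / row-block ℓ1 penalties are omitted from this sketch.

```python
# Supervised KL-NMF activation update with fixed dictionary W:
#   H <- H * (W^T (Y / WH)) / (W^T 1),  applied elementwise.
import numpy as np

def kl_nmf_activations(Y, W, n_iter=100, eps=1e-12):
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        V = W @ H + eps                                  # current model
        H *= (W.T @ (Y / V)) / (W.sum(axis=0)[:, None] + eps)
    return H
```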
@InProceedings{7760427,
  author = {H. Nakajima and D. Kitamura and N. Takamune and S. Koyama and H. Saruwatari and N. Ono and Y. Takahashi and K. Kondo},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Music signal separation using supervised NMF with all-pole-model-based discriminative basis deformation},
  year = {2016},
  pages = {1143-1147},
  abstract = {In this paper, we address the music signal separation problem and propose a new supervised nonnegative matrix factorization (SNMF) algorithm employing the deformation of a spectral supervision basis trained in advance. Conventional SNMF has a problem that the separation accuracy is degraded by a mismatch between the trained basis and the spectrogram of the actual target sound in open data. To reduce the mismatch problem, we propose a new method with two features. First, we introduce a deformation with an all-pole model that is optimized to make the trained basis fit the spectrogram of the target signal, even if the true target component is hidden in the observed mixture. Next, to avoid an excess deformation, we limit the degree of freedom in the deformation by performing discriminative training. Our experimental evaluation reveals that the proposed method outperforms conventional SNMFs.},
  keywords = {matrix decomposition;signal representation;source separation;music signal separation;supervised nonnegative matrix factorization algorithm;supervised NMF;all-pole-model-based discriminative basis deformation;spectral supervision;target signal spectrogram;Matrix decomposition;Spectrogram;Training;Source separation;Deformable models;Cost function},
  doi = {10.1109/EUSIPCO.2016.7760427},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252089.pdf},
}
@InProceedings{7760428,
  author = {A. Taghipour and B. Desikan and B. Edler},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Envelope analysis methods for tonality estimation},
  year = {2016},
  pages = {1148-1152},
  abstract = {In perceptual audio coders, the audio signal masks the quantization noise. The masking effectiveness depends on the degree of tonality/noisiness of the signal. Hence, in psychoacoustic models (PM) of perceptual coders, the level of the estimated masking thresholds can be adjusted by tonality estimation methods. This paper introduces three envelope analysis methods for tonality estimation: optimized amplitude modulation ratio (AM-R), auditory image correlation, and temporal envelope rate. The methods were implemented in a filter bank-based PM. In a subjective quality test, they were compared to each other and to another existing method, partial spectral flatness measure (PSFM). The PSFM and the AM-R were rated significantly higher than the other methods.},
  keywords = {acoustic correlation;acoustic noise;audio coding;channel bank filters;image filtering;quantisation (signal);tonality estimation;envelope analysis method;perceptual audio coder;audio signal mask;quantization noise;psychoacoustic model;amplitude modulation ratio;auditory image correlation;temporal envelope rate;filter bank-based PM;partial spectral flatness measure;PSFM;AM-R;Estimation;Bandwidth;Correlation;Psychoacoustic models;Modulation;Filter banks;Discrete Fourier transforms;Perceptual Model;Psychoacoustic Model;Perceptual Audio Coding;Tonality Estimation},
  doi = {10.1109/EUSIPCO.2016.7760428},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252108.pdf},
}
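As a rough illustration of what an envelope-analysis tonality cue looks like, the sketch below computes a simple modulation ratio from the Hilbert envelope of one band-pass filtered band: steady tonal partials give a small ratio, noise-like bands a large one. This shows only the general idea, not the paper's optimized AM-R; the name and normalization are illustrative.

import numpy as np
from scipy.signal import hilbert

def am_ratio(band_signal):
    """Envelope-fluctuation cue for one analysis band: energy of the
    envelope's AC component relative to its DC component."""
    env = np.abs(hilbert(band_signal))
    dc = env.mean()
    return np.mean((env - dc) ** 2) / (dc ** 2 + 1e-12)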
@InProceedings{7760429,
  author = {N. Ito and S. Araki and T. Nakatani},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Complex angular central Gaussian mixture model for directional statistics in mask-based microphone array signal processing},
  year = {2016},
  pages = {1153-1157},
  abstract = {Microphone array signal processing based on time-frequency masks has been applied successfully to various tasks including source separation, denoising, source localization, and source counting. Aiming to improve the performance of these techniques, here we propose a mask estimation method based on a complex Angular Central Gaussian Mixture Model (cACGMM) for multichannel observed signals. Compared to a conventional complex Watson Mixture Model (cWMM), the proposed cACGMM can model not only rotationally symmetrical but also elliptical distributions. Therefore, the cACGMM can better approximate the distribution of observed data, which is generally not rotationally symmetrical. In source separation simulations with real recorded impulse responses, the cACGMM resulted in an average 1.2 dB improvement of the Signal-to-Distortion Ratio (SDR) over the cWMM.},
  keywords = {array signal processing;audio signal processing;Gaussian distribution;Gaussian processes;microphone arrays;mixture models;signal denoising;source separation;time-frequency analysis;transient response;complex angular central Gaussian mixture model;directional statistics;mask-based microphone array signal processing;time-frequency masks;source separation;signal denoising;source localization;source counting;mask estimation method;impulse response;cACGMM;signal-to-distortion ratio;SDR;complex Watson mixture model;multichannel observed signals;elliptical distribution;Source separation;Microphones;Time-frequency analysis;Mixture models;Array signal processing;Gaussian mixture model;Time-frequency masks;microphone array signal processing;complex angular central Gaussian distributions},
  doi = {10.1109/EUSIPCO.2016.7760429},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256519.pdf},
}
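For intuition, here is a compact EM sketch for a cACGMM at a single frequency bin, following the usual update structure for this model class (posteriors and parameter updates weighted by the quadratic form z^H B^{-1} z). Initialization, permutation alignment across frequencies, and safeguards beyond simple flooring are omitted, and all names are illustrative:

import numpy as np

def cacgmm_em(Z, K=2, n_iter=30, eps=1e-10):
    """EM for a complex angular central Gaussian mixture at one frequency
    bin. Z: (T, M) complex mixture vectors (M microphones); returns
    mask-like posteriors gamma of shape (T, K)."""
    T, M = Z.shape
    Z = Z / np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), eps)
    rng = np.random.default_rng(0)
    alpha = np.full(K, 1.0 / K)
    B = np.stack([np.eye(M, dtype=complex)] * K)
    gamma = rng.dirichlet(np.ones(K), size=T)  # random start breaks symmetry
    for _ in range(n_iter):
        logp = np.empty((T, K))
        for k in range(K):
            Nk = max(gamma[:, k].sum(), eps)
            alpha[k] = Nk / T
            # Quadratic forms z^H B^{-1} z with the current parameter matrix.
            Binv = np.linalg.inv(B[k] + eps * np.eye(M))
            quad = np.maximum(np.einsum('tm,mn,tn->t', Z.conj(), Binv, Z).real, eps)
            # M-step: reweighted outer-product update of B_k.
            B[k] = M * np.einsum('t,tm,tn->mn', gamma[:, k] / quad, Z, Z.conj()) / Nk
            # E-step terms with the updated B_k.
            Binv = np.linalg.inv(B[k] + eps * np.eye(M))
            quad = np.maximum(np.einsum('tm,mn,tn->t', Z.conj(), Binv, Z).real, eps)
            _, logdet = np.linalg.slogdet(B[k])
            logp[:, k] = np.log(alpha[k]) - logdet - M * np.log(quad)
        gamma = np.exp(logp - logp.max(axis=1, keepdims=True))
        gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma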
@InProceedings{7760430,
  author = {K. {El Haddad} and H. Çakmak and M. Sulír and S. Dupont and T. Dutoit},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Audio affect burst synthesis: A multilevel synthesis system for emotional expressions},
  year = {2016},
  pages = {1158-1162},
  abstract = {Affect bursts are short, isolated and non-verbal expressions of affect expressed vocally or facially. In this paper we present an attempt at synthesizing audio affect bursts on several levels of arousal. This work concerns three different types of affect bursts: disgust, startle and surprised expressions. Data are first gathered for each of these affect bursts at two different levels of arousal each. Then, each level of each emotion is modeled using Hidden Markov Models. A weighted linear interpolation technique is then used to obtain intermediate levels from these models. The obtained synthesized affect bursts are then evaluated in a perception test.},
  keywords = {hidden Markov models;interpolation;speech synthesis;audio affect burst synthesis;multilevel synthesis system;emotional expressions;isolated nonverbal expression;vocal expression;facial expression;arousal level;disgust expression;startle expression;surprised expression;emotion modeling;hidden Markov models;weighted linear interpolation technique;Hidden Markov models;Interpolation;Databases;Europe;Signal processing;Heating;Trajectory},
  doi = {10.1109/EUSIPCO.2016.7760430},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256483.pdf},
}
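The interpolation step is simple once the level-specific models are trained: the Gaussian output distributions of corresponding HMM states are blended linearly. A minimal sketch, assuming two sets of state means and variances and a mixing weight lam in [0, 1] (names are illustrative; transition parameters are left untouched here):

import numpy as np

def interpolate_states(mu_lo, mu_hi, var_lo, var_hi, lam):
    """Weighted linear interpolation between the Gaussian state parameters
    of a low-arousal and a high-arousal model; lam = 0 reproduces the low
    level, lam = 1 the high level, values in between give new levels."""
    mu = (1.0 - lam) * np.asarray(mu_lo) + lam * np.asarray(mu_hi)
    var = (1.0 - lam) * np.asarray(var_lo) + lam * np.asarray(var_hi)
    return mu, var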
@InProceedings{7760431,
  author = {S. Monajemi and S. Ensafi and S. Lu and A. A. Kassim and C. L. Tan and S. Sanei and S. Ong},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Classification of HEp-2 cells using distributed dictionary learning},
  year = {2016},
  pages = {1163-1167},
  abstract = {Automatic classification of human epithelial type-2 (HEp-2) cells can improve the diagnostic process of autoimmune diseases (ADs) in terms of lower cost, faster response, and better repeatability. However, most of the proposed methods for classification of HEp-2 cells suffer from several constraints including tedious parameter tuning, massive memory requirement, and high computational costs. We propose an adaptive distributed dictionary learning (ADDL) method where the dictionary learning problem is reformulated as a distributed learning task. With the help of this approach, we develop an automatic and robust method that effectively handles the complexity of the problem in terms of memory and computational cost and also obtains superior classification accuracy.},
  keywords = {cellular biophysics;diseases;learning (artificial intelligence);medical diagnostic computing;patient diagnosis;HEp-2 cell classification;automatic classification;human epithelial type-2 cells;diagnostic process;autoimmune diseases;adaptive distributed dictionary learning method;ADDL;dictionary learning problem;classification accuracy;Dictionaries;Feature extraction;Cost function;Europe;Signal processing;Memory management},
  doi = {10.1109/EUSIPCO.2016.7760431},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255602.pdf},
}
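A common pattern behind distributed (diffusion-style) dictionary learning is: each node alternates local sparse coding with a local dictionary update, then averages its dictionary with its neighbours'. The sketch below shows only that pattern, with a single soft-thresholding step for the coding stage; it is not the paper's ADDL algorithm, and all names, step sizes and the combination matrix are assumptions.

import numpy as np

def _unit_columns(D):
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

def diffusion_dict_learning(data, A, n_atoms=32, n_iter=50, lam=0.1, mu=0.5):
    """data: list of (d, N_k) local sample matrices, one per node.
    A: (K, K) row-stochastic combination matrix (A[k, l] weights node l
    in node k's neighbourhood average)."""
    K, d = len(data), data[0].shape[0]
    rng = np.random.default_rng(0)
    D = [_unit_columns(rng.standard_normal((d, n_atoms))) for _ in range(K)]
    for _ in range(n_iter):
        psi = []
        for k in range(K):
            X = data[k]
            # Local sparse coding: one ISTA (soft-thresholding) step.
            L = np.linalg.norm(D[k], 2) ** 2 + 1e-12
            H = D[k].T @ X / L
            H = np.sign(H) * np.maximum(np.abs(H) - lam / L, 0.0)
            # Local dictionary gradient step ("adapt").
            grad = (D[k] @ H - X) @ H.T / max(X.shape[1], 1)
            psi.append(_unit_columns(D[k] - mu * grad))
        # Diffusion step ("combine"): average neighbours' dictionaries.
        D = [_unit_columns(sum(A[k, l] * psi[l] for l in range(K)))
             for k in range(K)]
    return D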
@InProceedings{7760432,
  author = {W. Wang and R. Cao and H. Gao and J. Zhou and T. Lv},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Secure beamforming and artificial noise design for simultaneous wireless information and power transfer in interference networks},
  year = {2016},
  pages = {1168-1172},
  abstract = {We study secure communication in a two-user interference network. In this network, two transmitters send information and artificial noise simultaneously to two users and energy receivers (ERs), and the ERs are likely to act as eavesdroppers. We propose a joint beamforming and artificial noise design to maximize the total energy harvested by the ERs subject to a secrecy sum rate requirement and power constraints. The design constitutes a non-convex optimization problem, which can nevertheless be transformed into a two-stage problem. In the first stage, by introducing an auxiliary variable, we reformulate the non-convex problem as a second-order cone program (SOCP), and a constrained concave-convex procedure based algorithm is proposed to make this SOCP problem tractable. In the second stage, the auxiliary variable is obtained by a one-dimensional line search. Numerical results demonstrate the effectiveness of the proposed algorithm.},
  keywords = {array signal processing;inductive power transmission;interference (signal);optimisation;telecommunication security;secure beamforming;artificial noise design;simultaneous wireless information;power transfer;interference networks;secure communication;energy receivers;artificial noise;optimization problem;second-order cone program;concave convex procedure based algorithm;SOCP problem;Erbium;Interference;Array signal processing;Receivers;Transmitters;Wireless communication;Signal processing algorithms},
  doi = {10.1109/EUSIPCO.2016.7760432},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250620.pdf},
}
@InProceedings{7760433,
  author = {N. Kolokotronis and M. Athanasakos},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Improving physical layer security in DF relay networks via two-stage cooperative jamming},
  year = {2016},
  pages = {1173-1177},
  abstract = {The design of a cooperative protocol relying on both cooperative relaying and jamming in order to provide security at the physical layer of wireless communications is considered in this paper. We suppose that a pair of nodes is assisted in its communication by a number of helpers, which either relay information or cause harmful interference to an eavesdropper, at both stages of the relaying protocol. Instead of maximizing the secrecy capacity, a signal-to-noise ratio based approach is taken. Solutions for the optimal weights used at each protocol stage are sought along with the optimal power distribution. To solve this problem, tools from semi-definite and geometric programming are utilized, and an iterative algorithm is proposed. Simulations show noticeable gains (up to 50 dB) compared to the non-cooperative case.},
  keywords = {cooperative communication;decode and forward communication;geometric programming;iterative methods;jamming;protocols;relay networks (telecommunication);telecommunication security;physical layer security improvement;DF relay network;two-stage cooperative jamming;wireless communication;cooperative relaying protocol;secrecy capacity maximization;signal-to-noise ratio;optimal power distribution;semidefinite programming;geometric programming;iterative algorithm;decode-and-forward system;Jamming;Relays;Protocols;Signal to noise ratio;Europe;Security;Interference;Physical layer security;cooperative transmission protocols;cooperative jamming;optimization;wireless networks},
  doi = {10.1109/EUSIPCO.2016.7760433},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252348.pdf},
}
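One standard building block in such jamming-weight designs is to confine the jamming signal to the null space of the channel towards the legitimate receiver, so that only the eavesdropper's SNR is degraded. The minimal sketch below is a generic illustration of that idea, not the paper's SNR-based optimization; the channel convention (the destination receives h_dest @ w times the jamming waveform) and all names are assumptions.

import numpy as np

def nullspace_jamming_weights(h_dest, g_eve, eps=1e-12):
    """Unit-norm helper weights w such that the destination (receiving
    h_dest @ w * jamming) gets zero jamming, while the power leaked onto
    the eavesdropper channel g_eve is maximized within that null space."""
    h = h_dest / np.linalg.norm(h_dest)
    # Projector onto the subspace satisfying h^T w = 0.
    P = np.eye(h.size, dtype=complex) - np.outer(h.conj(), h)
    w = P @ g_eve.conj()
    n = np.linalg.norm(w)
    return w / n if n > eps else w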
@InProceedings{7760434,
  author = {I. Dragoi and D. Coltuc},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Reversible watermarking based on complementary predictors and context embedding},
  year = {2016},
  pages = {1178-1182},
  abstract = {Two complementary bounds of the predicted pixel are defined and used to compute two estimates of the prediction error. The more suitable of the two estimates for reversible watermarking is selected for data insertion. A reversible watermarking scheme based on context embedding ensures the detection of the selected value, without the need for any additional information. The scheme is general and works regardless of the particular predictor. The proposed scheme is of interest for embedding bit-rates of less than 0.5 bpp. Interesting results are reported for the case of pairwise embedding reversible watermarking. The proposed scheme compares very well with the most efficient schemes published so far.},
  keywords = {image watermarking;reversible watermarking;context embedding;prediction error;data insertion;Context;Watermarking;Histograms;Distortion;Europe;Two dimensional displays},
  doi = {10.1109/EUSIPCO.2016.7760434},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255673.pdf},
}
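For readers unfamiliar with the underlying mechanism, here is the classic prediction-error expansion step that schemes of this family build on: small prediction errors are expanded to carry one payload bit, large ones are shifted so the mapping stays invertible. This is a minimal single-predictor sketch (no complementary bounds, context embedding, or pixel overflow handling); the threshold T and names are illustrative.

def pee_embed(x, pred, bit, T=4):
    """Embed one bit into pixel x given its prediction pred: errors with
    |e| < T are expanded to 2e + bit, larger ones are shifted by +/- T."""
    e = int(x) - int(pred)
    if abs(e) < T:
        return pred + 2 * e + bit
    return pred + (e + T if e >= T else e - T)

def pee_extract(y, pred, T=4):
    """Return (bit, original pixel); bit is None for shifted pixels."""
    e2 = int(y) - int(pred)
    if abs(e2) < 2 * T:
        return e2 % 2, pred + (e2 >> 1)   # expanded: recover bit and e
    return None, pred + (e2 - T if e2 >= 2 * T else e2 + T)

Because expanded errors land in |e2| < 2T and shifted ones outside it, the decoder can tell the two cases apart without side information, which is exactly the reversibility property the paper refines.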
@InProceedings{7760435,
  author = {A. Tuama and F. Comby and M. Chaumont},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Camera model identification based machine learning approach with high order statistics features},
  year = {2016},
  pages = {1183-1187},
  abstract = {Source camera identification methods aim at identifying the camera used to capture an image. In this paper we develop a method for digital camera model identification by extracting three sets of features in a machine learning scheme. These features are the co-occurrence matrix, some features related to the CFA interpolation arrangement, and conditional probability statistics. These features provide high order statistics which supplement and enhance the identification rate. The method is implemented with 14 camera models from the Dresden database and a multi-class SVM classifier. A comparison is performed between our method and a camera fingerprint correlation-based method which only depends on PRNU extraction. The experiments prove the strength of our proposition since it achieves higher accuracy than the correlation-based method.},
  keywords = {cameras;feature extraction;higher order statistics;image capture;learning (artificial intelligence);optical filters;support vector machines;machine learning;high order statistics features;source camera identification methods;image capture;digital camera model identification;feature extraction;color filter array;conditional probability statistics;Dresden database;multiclass SVM classifier;Cameras;Feature extraction;Image color analysis;Correlation;Colored noise;Interpolation;Discrete cosine transforms;Camera identification;Co-occurrences;CFA interpolation;Conditional Probability;SVM},
  doi = {10.1109/EUSIPCO.2016.7760435},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256116.pdf},
}
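To make the feature side concrete: a co-occurrence histogram of a truncated high-pass residual is a typical high-order-statistics feature of this kind, and scikit-learn's SVC can serve as the multi-class SVM. A minimal sketch under those assumptions (a first-difference residual and horizontal pairs only; the paper's exact feature sets and CFA-related features are not reproduced):

import numpy as np
from sklearn.svm import SVC

def cooccurrence_features(img, T=2):
    """Normalized horizontal co-occurrence histogram of a truncated
    first-difference residual of a grayscale image."""
    res = np.diff(np.asarray(img, dtype=np.int64), axis=1)
    res = np.clip(res, -T, T) + T                   # map to 0 .. 2T
    pairs = res[:, :-1] * (2 * T + 1) + res[:, 1:]  # encode adjacent pairs
    hist = np.bincount(pairs.ravel(), minlength=(2 * T + 1) ** 2)
    return hist / max(hist.sum(), 1)

# With per-image features stacked into an (n_images, d) array and integer
# camera-model labels:
# clf = SVC(kernel='rbf', C=10.0).fit(features, labels)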
@InProceedings{7760436,
  author = {M. Fakhry and P. Svaizer and M. Omologo},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Estimation of the spatial information in Gaussian model based audio source separation using weighted spectral bases},
  year = {2016},
  pages = {1188-1192},
  abstract = {In Gaussian model based audio source separation, source spatial images are modeled by Gaussian distributions. The covariance matrices of the distributions are represented by source variances and spatial covariance matrices. Accordingly, the likelihood of observed mixtures of independent source signals is parametrized by the variances and the covariance matrices. The separation is performed by estimating the parameters and applying multichannel Wiener filtering. Assuming that spectral basis matrices trained on source power spectra are available, this work proposes a method to estimate the parameters by maximizing the likelihood using Expectation-Maximization. In terms of normalization, the variances are estimated applying singular value decomposition. Furthermore, by building weighted matrices from vectors of the trained matrices, semi-supervised nonnegative matrix factorization is applied to estimate the spatial covariance matrices. The experimental results prove the efficiency of the proposed algorithm in reverberant environments.},
  keywords = {audio signal processing;covariance matrices;expectation-maximisation algorithm;filtering theory;Gaussian distribution;matrix decomposition;source separation;spatial information estimation;Gaussian model based audio source separation;weighted spectral bases;source spatial images;Gaussian distributions;source variances;spatial covariance matrices;parameter estimation;multichannel Wiener filtering;spectral basis matrices;source power spectra;expectation-maximization;semisupervised nonnegative matrix factorization;Covariance matrices;Time-frequency analysis;Matrix decomposition;Source separation;Estimation;Signal processing algorithms;Minimization;Spectral bases;nonnegative matrix factorization;spatial covariance matrix;audio source separation},
  doi = {10.1109/EUSIPCO.2016.7760436},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250685.pdf},
}
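The separation step itself is the standard multichannel Wiener filter of the local Gaussian model. A minimal per-bin sketch, assuming the source-image covariance Rs (source variance times spatial covariance) and the covariance Rn of everything else have already been estimated:

import numpy as np

def multichannel_wiener(x, Rs, Rn):
    """One time-frequency bin: x is the (M,) mixture vector, Rs and Rn the
    (M, M) covariances of the target source image and of the remaining
    sources/noise; returns the MMSE estimate of the source image."""
    M = Rs.shape[0]
    # W = Rs (Rs + Rn)^{-1}, with a small diagonal load for stability.
    W = Rs @ np.linalg.inv(Rs + Rn + 1e-10 * np.eye(M))
    return W @ x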
@InProceedings{7760437,
  author = {N. Dionelis and M. Brookes},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Active speech level estimation in noisy signals with quadrature noise suppression},
  year = {2016},
  pages = {1193-1197},
  abstract = {We present a noise-robust algorithm for estimating the active level of speech, which is the average speech power during intervals of speech activity. The proposed algorithm uses the clean speech phase to remove the quadrature noise component from the short-time power spectrum of the noisy speech, as well as SNR-dependent techniques to improve the estimation. The pitch of voiced speech frames is determined using a noise-robust pitch tracker and the speech level is estimated from the energy of the pitch harmonics using the harmonic summation principle. At low noise levels, the resultant active speech level estimate is combined with that from the standardized ITU-T P.56 algorithm to give a final composite estimate. The algorithm has been evaluated using a range of noise signals and gives consistently lower errors than previous methods and than the ITU-T P.56 algorithm, which is accurate for SNR levels above 15 dB.},
  keywords = {speech synthesis;active speech level estimation;noisy signals;quadrature noise suppression;noise-robust algorithm;speech power;quadrature noise component;short-time power spectrum;SNR-dependent techniques;noise-robust pitch tracker;standardized ITU-T P.56 algorithm;composite estimate;Speech;Signal to noise ratio;Signal processing algorithms;Harmonic analysis;Noise measurement;Algorithm design and analysis;Power system harmonics;Speech analysis;active speech level;harmonic summation},
  doi = {10.1109/EUSIPCO.2016.7760437},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250881.pdf},
}
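The harmonic-summation idea can be stated in a few lines: given a pitch estimate f0 for a voiced frame, the speech power is read off the periodogram at the harmonic frequencies only, which makes the estimate robust to broadband noise between the harmonics. A minimal sketch (bin interpolation, the quadrature noise removal, and the P.56 combination are omitted; names are illustrative):

import numpy as np

def harmonic_power(frame, fs, f0, n_harm=10):
    """Sum periodogram power at the pitch harmonics k * f0 of one voiced
    frame: the harmonic-summation principle."""
    win = np.hanning(frame.size)
    spec = np.abs(np.fft.rfft(frame * win)) ** 2 / np.sum(win ** 2)
    freqs = np.fft.rfftfreq(frame.size, d=1.0 / fs)
    bins = [int(np.argmin(np.abs(freqs - k * f0)))
            for k in range(1, n_harm + 1) if k * f0 < fs / 2]
    return spec[bins].sum()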
@InProceedings{7760438,
  author = {C. A. Gordillo and J. R. B. {de Marca} and A. Alcaim},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Median filtering the temporal probability distribution in histogram mapping for robust continuous speech recognition},
  year = {2016},
  pages = {1198-1201},
  abstract = {The nonlinear distortion in the cepstral coefficient domain introduced by additive noise in the speech signal results in severe performance degradation in Automatic Speech Recognition (ASR) systems. For this reason, we propose a median filter which smooths the probability distribution functions of degraded features, thus reducing the mismatch between training and test data. The new proposal uses a histogram mapping to obtain the PDFs (probability distribution functions) of each feature vector and applies a nonlinear median filtering before mapping to the reference PDF. The algorithm's efficiency is analyzed and compared to a recently proposed linear mean filtering technique on the PDFs. From the experimental results it can be concluded that histogram smoothing through median nonlinear filtering reduces the mismatch between training and test data, improving system performance under adverse conditions.},
  keywords = {nonlinear distortion;smoothing methods;speech recognition;temporal probability distribution;histogram mapping;robust continuous speech recognition;nonlinear distortion;cepstral coefficient domain;additive noise;speech signal;automatic speech recognition;ASR;probability distribution function;PDF;feature vector;nonlinear median filtering;linear mean filtering technique;histogram smoothing;Probability density function;Histograms;Speech recognition;Hidden Markov models;Smoothing methods;Probability distribution;Speech;noise-robust speech recognition;continuous speech recognition},
  doi = {10.1109/EUSIPCO.2016.7760438},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250887.pdf},
}
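A minimal sketch of the general pipeline, assuming one scalar feature trajectory and reference (training) data for the same feature: estimate the feature's PDF with a histogram, median-filter the PDF, then perform the usual CDF-to-CDF (histogram equalization style) mapping onto the reference distribution. The bin count, kernel size and names are illustrative, not the paper's settings.

import numpy as np
from scipy.signal import medfilt

def median_smoothed_hist_map(feat, ref, n_bins=64, k=5):
    """Map a degraded feature trajectory 'feat' onto the distribution of
    the reference data 'ref' after median-filtering the estimated PDF."""
    hist, edges = np.histogram(feat, bins=n_bins, density=True)
    hist = medfilt(hist, kernel_size=k)        # nonlinear PDF smoothing
    cdf = np.cumsum(hist)
    cdf /= max(cdf[-1], 1e-12)
    centers = 0.5 * (edges[:-1] + edges[1:])
    u = np.interp(feat, centers, cdf)          # feature value -> probability
    return np.quantile(np.asarray(ref), np.clip(u, 0.0, 1.0))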
@InProceedings{7760439,
  author = {J. Stahl and P. Mowlaee and J. Kulmer},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Phase-processing for voice activity detection: A statistical approach},
  year = {2016},
  pages = {1202-1206},
  abstract = {Conventional voice activity detectors (VAD) mostly rely on the magnitude of the complex valued DFT spectral coefficients. In this paper, the circular variance of the Discrete Fourier transform (DFT) coefficients is investigated in terms of its ability to represent speech activity in noise. To this end we model the circular variance as a random variable with different underlying distributions for the speech and the noise class. Based on this, we derive a binary hypothesis test relying only on the circular variance estimated from the noisy speech. The experimental results show a reasonable VAD performance justifying that amplitude-independent information can characterize speech in a convenient way.},
  keywords = {discrete Fourier transforms;signal detection;speech enhancement;statistical analysis;voice activity detection;voice activity detector;phase processing;statistical approach;complex valued DFT spectral coefficient;discrete Fourier transform coefficient;speech activity;binary hypothesis test;circular variance estimation;noisy speech;VAD performance;amplitude-independent information;speech enhancement;Speech;Speech processing;Discrete Fourier transforms;Random variables;Noise measurement;Spectrogram;Voice activity detection;phase spectrum;circular variance;speech enhancement},
  doi = {10.1109/EUSIPCO.2016.7760439},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251085.pdf},
}
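The central statistic is easy to state: the circular variance of a set of angles is one minus the length of their mean resultant vector, close to 1 for uniform (noise-like) phase and approaching 0 for concentrated phase. A toy decision rule on top of it might look as follows; note that meaningful concentration only appears after the STFT phase has been demodulated (e.g. the hop-related linear phase removed), which this sketch assumes has already been done, and the threshold is illustrative rather than the paper's hypothesis test.

import numpy as np

def circular_variance(phases, axis=-1):
    """1 - |mean resultant length| of a set of angles: about 1 for
    uniformly distributed phase, toward 0 as the phase concentrates."""
    return 1.0 - np.abs(np.mean(np.exp(1j * np.asarray(phases)), axis=axis))

def vad_frames(demod_phase, threshold=0.85, context=4):
    """demod_phase: (n_bins, n_frames) demodulated STFT phases. A frame is
    flagged active when the band-averaged circular variance over a short
    time context drops below the threshold."""
    n_frames = demod_phase.shape[1]
    active = np.zeros(n_frames, dtype=bool)
    for t in range(n_frames):
        seg = demod_phase[:, max(0, t - context): t + context + 1]
        active[t] = circular_variance(seg, axis=1).mean() < threshold
    return active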
@InProceedings{7760440,
  author = {M. C. Özbay and A. Khodabakhsh and A. Mohammadi and C. Demiroğlu},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Spoofing attacks to i-vector based voice verification systems using statistical speech synthesis with additive noise and countermeasure},
  year = {2016},
  pages = {1207-1211},
  abstract = {Even though improvements in speaker verification (SV) technology with i-vectors have increased its real-life deployment, vulnerability to spoofing attacks is a major concern. Here, we investigated the effectiveness of spoofing attacks with statistical speech synthesis systems using a limited amount of adaptation data and additive noise. Experimental results show that effective spoofing is possible using limited adaptation data. Moreover, the attacks get substantially more effective when noise is intentionally added to the synthetic speech. Training the SV system with matched noise conditions does not alleviate the problem. We propose a synthetic speech detector (SSD) that uses session differences in i-vectors for counterspoofing. The proposed SSD had less than 0.5% total error rate in most cases for the matched noise conditions. For the mismatched noise conditions, the missed detection rate further decreased but the total error increased, which indicates that some calibration is needed for mismatched noise conditions.},
  keywords = {speaker recognition;speech synthesis;statistical analysis;spoofing attacks;i-vector based voice verification system;statistical speech synthesis;speaker verification technology;synthetic speech detector;SSD;Speech;Noise measurement;Detectors;Training;Signal to noise ratio;Adaptation models;White noise;spoofing attacks;speaker verification;statistical speech synthesis;speaker adaptation;synthetic speech detection},
  doi = {10.1109/EUSIPCO.2016.7760440},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251179.pdf},
}
@InProceedings{7760441,
  author = {A. Pikrakis and Y. Kopsinis and N. Kroher and J. Díaz-Báñez},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Unsupervised singing voice detection using dictionary learning},
  year = {2016},
  pages = {1212-1216},
  abstract = {This paper presents an unsupervised approach to vocal detection in music recordings based on dictionary learning. At a first stage, the recording to be segmented is treated as training data and the K-SVD algorithm is used to learn a dictionary which sparsely represents a short-term feature sequence that has been extracted from the recording. Subsequently, the vectors of the feature sequence are reconstructed based on the learned dictionary and the probability of appearance of the dictionary atoms is estimated. The obtained probability serves to compute the value of a weight function for each frame of the recording. The histogram of this function is then used to estimate a binarization threshold that segments the recording into vocal and non-vocal segments. The performance of the proposed unsupervised method, when evaluated on two datasets of accompanied singing, presents comparable performance to supervised techniques.},
  keywords = {acoustic signal detection;feature extraction;probability;signal reconstruction;singular value decomposition;unsupervised learning;unsupervised singing voice detection;dictionary learning;vocal detection;music recording;K-SVD algorithm;short-term feature sequence extraction;feature sequence reconstruction;dictionary atoms appearance probability;weight function;binarization threshold;Dictionaries;Histograms;Feature extraction;Image reconstruction;Training;Signal processing;Europe},
  doi = {10.1109/EUSIPCO.2016.7760441},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256517.pdf},
}
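The pipeline can be emulated with off-the-shelf sparse dictionary learning in place of K-SVD. A rough sketch using scikit-learn; the per-frame weighting rule below (mean rarity of the atoms a frame uses) is a stand-in for the paper's weight function, not a reproduction of it, and the binarization threshold would then be picked from the histogram of the returned scores:

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def vocal_frame_scores(feats, n_atoms=40, alpha=1.0):
    """feats: (n_frames, d) short-term features of the recording itself.
    Learn a sparse dictionary on them, estimate each atom's probability
    of appearance, and score frames by the mean rarity of their atoms."""
    dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                     random_state=0)
    codes = dl.fit(feats).transform(feats)   # sparse codes, (n_frames, n_atoms)
    used = np.abs(codes) > 1e-8
    p = used.mean(axis=0) + 1e-6              # atom appearance probabilities
    return (used * -np.log(p)).sum(axis=1) / np.maximum(used.sum(axis=1), 1)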
@InProceedings{7760442,
  author = {A. H. Moore and C. Evers and P. A. Naylor},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {2D direction of arrival estimation of multiple moving sources using a spherical microphone array},
  year = {2016},
  pages = {1217-1221},
  abstract = {Direction of arrival estimation using a spherical microphone array is an important and growing research area. One promising algorithm is the recently proposed Subspace Pseudo-Intensity Vector method. In this contribution the Subspace Pseudo-Intensity Vector method is combined with a state-of-the-art method for robustly estimating the centres of mass in a 2D histogram based on matching pursuits. The performance of the improved Subspace Pseudo-Intensity Vector method is evaluated in the context of localising multiple moving sources where it is shown to outperform competing methods in terms of clutter rate and the number of missed detections whilst remaining comparable in terms of localisation accuracy.},
  keywords = {acoustic signal processing;clutter;direction-of-arrival estimation;iterative methods;microphone arrays;source separation;vectors;2D direction of arrival estimation;spherical microphone array;robust centre of mass estimation;2D histogram;matching pursuits;improved subspace pseudointensity vector method;performance evaluation;multiple moving source localisation;clutter rate;Direction-of-arrival estimation;Microphone arrays;Harmonic analysis;Histograms;Signal processing algorithms;Clutter;direction of arrival estimation;localisation;tracking;spherical harmonic domain;subspace pseudo-intensity vectors;PIV},
  doi = {10.1109/EUSIPCO.2016.7760442},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256100.pdf},
}
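The basic pseudo-intensity-vector computation is compact: the active intensity per time-frequency bin is the real part of the product of the conjugated omnidirectional eigenbeam with the three dipole eigenbeams, and its direction points at the source. A minimal sketch of that step only (the subspace refinement and matching-pursuit histogram processing of the paper are not reproduced; names are illustrative):

import numpy as np

def piv_doa(p0, px, py, pz):
    """Pseudo-intensity-vector DOA per time-frequency bin from first-order
    eigenbeams (omnidirectional p0 and dipoles px, py, pz, each an
    (n_bins, n_frames) STFT array). Returns azimuth and elevation."""
    I = np.stack([(np.conj(p0) * d).real for d in (px, py, pz)])
    I /= np.maximum(np.linalg.norm(I, axis=0), 1e-12)
    az = np.arctan2(I[1], I[0])
    el = np.arcsin(np.clip(I[2], -1.0, 1.0))
    return az, el

# The per-bin estimates are then pooled into a 2D (azimuth, elevation)
# histogram whose centres of mass give the source directions.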
@InProceedings{7760443,
  author = {R. Kraemer and M. Methfessel and R. Kays and L. Underberg and A. C. Wolf},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {ParSec: A PSSS approach to industrial radio with very low and very flexible cycle timing},
  year = {2016},
  pages = {1222-1226},
  abstract = {Industry 4.0 is a subject of current relevance which targets detailed information acquisition from industrial production processes. Wireless communication to ease this acquisition process is highly interesting but suffers today from problems of low reliability, high latency and limited flexibility. ParSec addresses these subjects by investigating an innovative, CDMA-based approach with a very low latency of < 50 μs, flexible resource block scheduling and a BER of ≤ 10⁻⁹. While the latency figure is based on the assumption that a minimum of 3 symbol durations is needed, the BER target comes from the requirement specification. Two forms of FEC are used to achieve this requested figure. The radio will be used in the frequency range from 5.725 to 5.875 GHz and will work without listen-before-talk. A rapid prototype has been built to conduct channel measurements and prove the promised properties of the PSSS255 approach. The project is conducted in the framework of other “Industrial Radio” oriented projects supported by the German federal ministry of education and research (BMBF).},
  keywords = {code division multiple access;error statistics;telecommunication scheduling;very low cycle timing;very flexible cycle timing;industrial radio;ParSec;wireless communication;information acquisition;CDMA approach;flexible resource block scheduling;BER;FEC;channel measurement;PSSS255 approach;German federal ministry of education and research;Industry 4.0;frequency 5.725 GHz to 5.875 GHz;Wireless communication;Reliability;Security;Automation;Bandwidth;Signal processing;Delays;industrial radio;listen-before-talk;PSSS;low-latency;flexible resource-block allocation},
  doi = {10.1109/EUSIPCO.2016.7760443},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256036.pdf},
}
@InProceedings{7760444,
  author = {M. Sundin and S. Chatterjee and M. Jansson},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Bayesian Cramér-Rao bounds for factorized model based low rank matrix reconstruction},
  year = {2016},
  pages = {1227-1231},
  abstract = {Low-rank matrix reconstruction (LRMR) considers estimation (or reconstruction) of an underlying low-rank matrix from linear measurements. A low-rank matrix can be represented using a factorized model. In this article, we derive Bayesian Cramér-Rao bounds for LRMR where a factorized model is used. We first show a general informative bound, and then derive Bayesian Cramér-Rao bounds for different scenarios. We consider a low-rank random matrix model with hyper-parameters that are deterministic and known, deterministic and unknown, or random. Finally, we compare the bounds with existing estimation algorithms through numerical simulations.},
  keywords = {Bayes methods;matrix decomposition;signal reconstruction;signal representation;Bayesian Cramer-Rao bounds;low rank matrix reconstruction;LRMR;linear measurements;Bayes methods;Numerical models;Europe;Signal processing;Noise measurement;Sensors;Estimation;Low-rank matrix reconstruction;matrix completion;Bayesian estimation;Cramér-Rao bounds},
  doi = {10.1109/EUSIPCO.2016.7760444},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256119.pdf},
}
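For reference, bounds of this family have the standard Van Trees form; in the notation below (mine, not the paper's), θ collects the factor parameters, e.g. θ = [vec(L); vec(R)] for the factorized model X = LR^T observed through y = A vec(X) + n:

J_B = \mathbb{E}_{y,\theta}\!\left[\nabla_\theta \ln p(y,\theta)\,\nabla_\theta \ln p(y,\theta)^{T}\right],
\qquad
\mathbb{E}\!\left[(\hat{\theta}(y)-\theta)(\hat{\theta}(y)-\theta)^{T}\right] \succeq J_B^{-1},

i.e. the mean squared error of any estimator is lower-bounded (in the matrix sense) by the inverse of the Bayesian information matrix, which averages over both the measurements and the prior on θ.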
@InProceedings{7760445,
  author = {A. Nasser and A. Mansour and K. -. Yao and M. Chaitou and H. Charara},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Spatial and time diversities for canonical correlation significance test in spectrum sensing},
  year = {2016},
  pages = {1232-1236},
  abstract = {In this paper, we present a new detector for cognitive radio systems based on the Canonical Correlation Significance Test (CCST). Unlike existing CCST approaches, which can only be applied to a Multi-Antenna System (MAS), our algorithm can be extended to both a Single Antenna System (SAS) and a MAS. For a SAS, the proposed algorithm exploits the time diversity of cyclostationary signals in order to detect the Primary User (PU) signal. Our simulation results show that our algorithm outperforms a well-known cyclostationary algorithm [9]. For a MAS, our algorithm uses both spatial and time diversities to apply the CCST. Numerical results are given to illustrate the performance of our algorithm and verify its efficiency for special noise cases (spatially correlated and spatially colored). The simulation results show the superior performance of the proposed detector compared to the recently proposed CCST algorithm [1].},
  keywords = {antenna arrays;cognitive radio;diversity reception;radio spectrum management;signal detection;canonical correlation significance test;spectrum sensing;cognitive radio system;CCST approach;single antenna system;SAS;MAS;time diversity;spatial diversity;cyclostationary signal;primary user signal detection;PU signal detection;Correlation;Signal processing algorithms;Detectors;Receiving antennas;Synthetic aperture sonar;Canonical Correlation Significance Test;Single Antenna System;Multi-Antenna System;Spatial and Time diversities;Spectrum Sensing;Cognitive Radio},
  doi = {10.1109/EUSIPCO.2016.7760445},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252404.pdf},
}
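The statistic underneath a CCST is classical: estimate the canonical correlations between two data blocks via the SVD of the whitened cross-covariance, then apply Bartlett-style significance testing. A minimal sketch; how the blocks are built (time-shifted segments of one antenna's signal for the SAS case, different antennas for the MAS case) is left to the caller, and the simple -T scaling below omits the usual small-sample correction:

import numpy as np
from scipy.linalg import sqrtm

def ccst_statistic(X, Y):
    """Canonical-correlation test statistic between two zero-mean data
    blocks X (T, p) and Y (T, q). Under H0 (noise only) the canonical
    correlations k_i are small and -T * sum(log(1 - k_i^2)) is
    asymptotically chi-square distributed."""
    T = X.shape[0]
    Rxx = X.conj().T @ X / T
    Ryy = Y.conj().T @ Y / T
    Rxy = X.conj().T @ Y / T
    Wx = np.linalg.inv(sqrtm(Rxx))
    Wy = np.linalg.inv(sqrtm(Ryy))
    k = np.linalg.svd(Wx @ Rxy @ Wy, compute_uv=False)
    k = np.clip(np.real(k), 0.0, 1.0 - 1e-12)
    return -T * np.sum(np.log1p(-k ** 2))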
@InProceedings{7760446,
  author = {K. O'Hanlon and M. B. Sandler},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Compositional chroma estimation using powered Euclidean distance},
  year = {2016},
  pages = {1237-1241},
  abstract = {Chroma features are a popular tool in musical signal processing and information retrieval tasks, providing a compact representation of the tonal content of a piece of music. A variety of approaches to chroma estimation have been proposed, most of which rely on the summation of related frequency partials. However, frequency partials may be incorrectly assigned due to the log/linear relationship of frequency and pitch. Variations of chroma employing overtone suppression strategies are found in the literature. We propose a compositional model of chroma, which considers a coarse modelling of the effects of overtones in the expected chroma vectors of single notes. Synthetic chord recognition experiments indicate the usefulness of the proposed approach.},
  keywords = {acoustic signal processing;music;synthetic chord recognition experiment;coarse modelling;overtone suppression strategy;log-linear relationship;frequency partials;tonal content;information retrieval tasks;musical signal processing;powered Euclidean distance;compositional chroma estimation;Dictionaries;Europe;Estimation;Euclidean distance;Harmonic analysis;Spectrogram;Compositional model;chromagram;powered Euclidean distances;non-negative},
  doi = {10.1109/EUSIPCO.2016.7760446},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256497.pdf},
}
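For readers unfamiliar with chroma features, the sketch below computes a plain 12-bin chroma vector by folding spectral bins onto pitch classes (a textbook construction, not the paper's compositional model; the sampling rate and frequency range are assumptions). Note how the third partial of the A note lands on pitch class 7 (E), which is exactly the kind of overtone misassignment the compositional model is designed to account for.

```python
import numpy as np

def chroma_from_frame(frame, sr, fmin=55.0, fmax=2000.0):
    """Fold the magnitude spectrum of one frame into 12 pitch classes (A = class 0)."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    chroma = np.zeros(12)
    for f, a in zip(freqs, spec):
        if fmin <= f <= fmax:
            pitch_class = int(round(12 * np.log2(f / fmin))) % 12
            chroma[pitch_class] += a
    return chroma / (np.linalg.norm(chroma) + 1e-12)

sr = 8000
t = np.arange(2048) / sr
note = sum(np.sin(2 * np.pi * 220.0 * k * t) / k for k in (1, 2, 3))  # A3 + overtones
print(np.round(chroma_from_frame(note, sr), 2))
```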
@InProceedings{7760447,
  author = {F. Wadehn and L. Bruderer and J. Dauwels and V. Sahdeva and H. Yu and H. Loeliger},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Outlier-insensitive Kalman smoothing and marginal message passing},
  year = {2016},
  pages = {1242-1246},
  abstract = {We propose a new approach to outlier-insensitive Kalman smoothing based on normal priors with unknown variance (NUV). In contrast to prior work, the actual computations amount essentially to iterations of a standard Kalman smoother (with few extra computations). The proposed approach is easily extended to nonlinear estimation problems by combining the outlier detection with an extended Kalman smoother. For the Kalman smoothing, we consider both a Modified Bryson-Frasier smoother and the recently proposed Backward Information Filter Forward Marginal smoother, neither of which requires matrix inversions.},
  keywords = {Kalman filters;message passing;smoothing methods;outlier-insensitive Kalman smoothing;marginal message passing;unknown variance;standard Kalman smoother;extended Kalman smoother;modified Bryson-Frasier smoother;backward information filter;forward marginal smoother;Kalman filters;Smoothing methods;Estimation;Standards;Message passing;Europe},
  doi = {10.1109/EUSIPCO.2016.7760447},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251507.pdf},
}
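The NUV idea is easy to reproduce on a toy model: assign each measurement its own unknown noise variance and alternate a standard smoother with an EM variance update, so samples with persistently large residuals are automatically down-weighted. The sketch below uses a plain RTS smoother on a scalar random walk (an illustrative stand-in for the paper's MBF/BIFM message-passing smoothers; the process noise, iteration count, and outlier threshold are assumptions).

```python
import numpy as np

def nuv_robust_smoother(y, q=0.01, iters=20):
    """Outlier-robust smoothing of a scalar random walk x_k = x_{k-1} + N(0,q),
    y_k = x_k + N(0, r_k), with per-sample unknown variances r_k updated by EM."""
    n = len(y)
    r = np.ones(n)                        # NUV: one noise variance per sample
    for _ in range(iters):
        # --- forward Kalman filter ---
        m, v = np.zeros(n), np.zeros(n)
        mp, vp = y[0], 1.0                # crude prior on the initial state
        for k in range(n):
            g = vp / (vp + r[k])          # Kalman gain
            m[k] = mp + g * (y[k] - mp)
            v[k] = (1 - g) * vp
            mp, vp = m[k], v[k] + q       # predict the next state
        # --- backward RTS smoother ---
        ms, vs = m.copy(), v.copy()
        for k in range(n - 2, -1, -1):
            c = v[k] / (v[k] + q)
            ms[k] = m[k] + c * (ms[k + 1] - m[k])
            vs[k] = v[k] + c**2 * (vs[k + 1] - (v[k] + q))
        # --- EM update of the per-sample variances (large r_k flags an outlier) ---
        r = (y - ms) ** 2 + vs
    return ms, r

rng = np.random.default_rng(2)
x = np.cumsum(0.1 * rng.normal(size=200))
y = x + 0.1 * rng.normal(size=200)
y[50] += 5.0; y[120] -= 4.0              # inject gross outliers
ms, r = nuv_robust_smoother(y)
print("flagged outliers:", np.where(r > 1.0)[0])
```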
@InProceedings{7760448,
  author = {T. Obuchi and Y. Kabashima},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Sampling approach to sparse approximation problem: Determining degrees of freedom by simulated annealing},
  year = {2016},
  pages = {1247-1251},
  abstract = {The approximation of a high-dimensional vector by a small combination of column vectors selected from a fixed matrix has been actively debated in several different disciplines. In this paper, a sampling approach based on the Monte Carlo method is presented as an efficient solver for such problems. In particular, we focus on and test the use of simulated annealing (SA), a metaheuristic optimization algorithm, for determining the degrees of freedom (the number of used columns) by cross validation. Tests on a synthetic model indicate that our SA-based approach can find a nearly optimal solution for the approximation problem and, when combined with the CV framework, it can optimize the generalization ability. Its utility is also confirmed by application to a real-world supernova data set.},
  keywords = {compressed sensing;matrix algebra;Monte Carlo methods;signal sampling;simulated annealing;vectors;sampling approach;sparse approximation problem;degrees of freedom;simulated annealing;high-dimensional vector;column vectors;fixed matrix;Monte Carlo method;SA-based approach;metaheuristic optimization algorithm;cross validation;CV framework;generalization ability;real-world supernova data set;compressed sensing;Signal processing algorithms;Simulated annealing;Computational efficiency;Europe;Signal processing;Sparse matrices;Schedules},
  doi = {10.1109/EUSIPCO.2016.7760448},
  issn = {2076-1465},
  month = {Aug},
}
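A minimal version of the annealed support search is sketched below for a fixed number of active columns (the paper determines that number by sweeping it and picking the cross-validation minimizer; the temperature schedule, step count, and problem sizes here are assumptions).

```python
import numpy as np

def cv_error(A, y, support, k=5):
    """k-fold cross-validation error of least squares restricted to a support set."""
    n = len(y); idx = np.arange(n); err = 0.0
    for f in range(k):
        test = idx[f::k]; train = np.setdiff1d(idx, test)
        coef, *_ = np.linalg.lstsq(A[np.ix_(train, support)], y[train], rcond=None)
        err += np.sum((y[test] - A[np.ix_(test, support)] @ coef) ** 2)
    return err / n

def anneal_support(A, y, n_active, steps=2000, T0=1.0):
    """Metropolis search over column subsets of fixed size, cooled geometrically."""
    rng = np.random.default_rng(3)
    cols = A.shape[1]
    s = list(rng.choice(cols, size=n_active, replace=False))
    e = cv_error(A, y, s)
    for t in range(steps):
        T = T0 * 0.995 ** t
        cand = s.copy()
        cand[rng.integers(n_active)] = rng.integers(cols)   # swap one column
        if len(set(cand)) < n_active:
            continue
        e_cand = cv_error(A, y, cand)
        if e_cand < e or rng.random() < np.exp((e - e_cand) / max(T, 1e-9)):
            s, e = cand, e_cand
    return sorted(s), e

rng = np.random.default_rng(4)
A = rng.normal(size=(60, 40))
true_support = [3, 17, 29]
y = A[:, true_support] @ np.array([1.0, -2.0, 1.5]) + 0.05 * rng.normal(size=60)
print(anneal_support(A, y, n_active=3))
```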
@InProceedings{7760449,
  author = {D. Zachariah and N. Jaldén and P. Stoica},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Online prediction of spatial fields for radio-frequency communication},
  year = {2016},
  pages = {1252-1256},
  abstract = {In this paper we predict spatial wireless channel characteristics using a stochastic model that takes into account both distance-dependent pathloss and random spatial variation due to fading. This information is valuable for resource allocation, interference management, and design in wireless communication systems. The spatial field model is trained using a convex covariance-based learning method which can be implemented online. The resulting joint learning and prediction method is suitable for large-scale or streaming data. The online method is first demonstrated on a synthetic dataset which models pathloss and medium-scale fading. We compare the method with a state-of-the-art scalable batch method. It is subsequently tested on a real dataset to capture small-scale variations.},
  keywords = {covariance analysis;fading channels;radiocommunication;resource allocation;radio-frequency communication;online spatial field prediction;spatial wireless channel characteristic prediction;stochastic model;resource allocation;interference management;wireless communication system;spatial field model;convex covariance-based learning method;joint learning and prediction method;pathloss fading;medium-scale fading;small-scale variation capture;Fading channels;SPICE;Radio frequency;Predictive models;Wireless communication;Transmitters;Europe},
  doi = {10.1109/EUSIPCO.2016.7760449},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255914.pdf},
}
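A simple stand-in for this kind of spatial field prediction is kriging with a log-distance pathloss mean and an exponential shadow-fading covariance, as sketched below (a batch Gaussian-process baseline, not the paper's online convex covariance-fitting method; all propagation parameters are assumptions).

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic received-signal-strength field: log-distance pathloss mean plus
# spatially correlated shadow fading (assumed parameters, for illustration only)
tx = np.array([0.0, 0.0])
def pathloss(p):  # mean RSS in dB as a function of position
    return -10 * 2.2 * np.log10(np.linalg.norm(p - tx, axis=-1) + 1.0)

X = rng.uniform(-50, 50, size=(150, 2))            # training positions
d = np.linalg.norm(X[:, None] - X[None], axis=-1)
K = 16.0 * np.exp(-d / 20.0)                       # exponential covariance, dB^2
y = pathloss(X) + rng.multivariate_normal(np.zeros(len(X)), K) \
    + 1.0 * rng.normal(size=len(X))                # measurement noise

# Kriging / GP prediction of the field at a new position
x_star = np.array([10.0, 25.0])
k_star = 16.0 * np.exp(-np.linalg.norm(X - x_star, axis=1) / 20.0)
alpha = np.linalg.solve(K + 1.0 * np.eye(len(X)), y - pathloss(X))
y_hat = pathloss(x_star) + k_star @ alpha
print("predicted RSS (dB):", y_hat)
```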
@InProceedings{7760450,
  author = {A. Deleforge and F. Forbes},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Rectified binaural ratio: A complex T-distributed feature for robust sound localization},
  year = {2016},
  pages = {1257-1261},
  abstract = {Most existing methods in binaural sound source localization rely on some kind of aggregation of phase- and level-difference cues in the time-frequency plane. While different aggregation schemes exist, they are often heuristic and suffer in adverse noise conditions. In this paper, we introduce the rectified binaural ratio as a new feature for sound source localization. We show that for Gaussian-process point source signals corrupted by stationary Gaussian noise, this ratio follows a complex t-distribution with explicit parameters. This new formulation provides a principled and statistically sound way to aggregate binaural features in the presence of noise. We subsequently derive two simple and efficient methods for robust relative transfer function and time-delay estimation. Experiments on heavily corrupted simulated and speech signals demonstrate the robustness of the proposed scheme.},
  keywords = {Gaussian processes;speech processing;rectified binaural ratio;complex T-distributed feature;robust sound localization;binaural sound source localization;Gaussian-process point source signals;stationary Gaussian noise;speech signals;Transfer functions;Microphones;Signal to noise ratio;Robustness;Time-frequency analysis;Estimation;Complex Gaussian ratio;t-distribution;relative transfer function;binaural;sound localization},
  doi = {10.1109/EUSIPCO.2016.7760450},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256288.pdf},
}
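The raw feature underlying this work is the per-bin complex ratio of the two channels' STFTs. The sketch below computes that ratio and extracts an interaural time difference by robust phase aggregation (a generic baseline; the paper's rectification and complex t-distribution modelling are not reproduced, and the simulated delay, gain, and noise values are assumptions).

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(6)
sr, n = 16000, 16000
s = rng.normal(size=n)                 # source signal
delay, gain = 4, 0.8                   # interaural time/level difference (samples)
left = s + 0.3 * rng.normal(size=n)
right = gain * np.r_[np.zeros(delay), s[:-delay]] + 0.3 * rng.normal(size=n)

f, t, L = stft(left, fs=sr, nperseg=512)
_, _, R = stft(right, fs=sr, nperseg=512)

ratio = R / (L + 1e-12)                # complex binaural ratio per TF bin
# Robust aggregation: median phase per frequency bin, unwrapped over frequency,
# then a straight-line fit whose slope is minus the time delay
phase = np.unwrap(np.median(np.angle(ratio), axis=1))
slope = np.polyfit(2 * np.pi * f[1:100], phase[1:100], 1)[0]
print(f"estimated ITD: {-slope * 1e3:.3f} ms (true {delay / sr * 1e3:.3f} ms)")
```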
@InProceedings{7760451,
  author = {G. Zhou and W. Dai},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {An approximate message passing algorithm for robust face recognition},
  year = {2016},
  pages = {1262-1266},
  abstract = {This paper focuses on algorithmic approaches to the robust face recognition problem, where the test face image can be corrupted. The standard approach is to formulate the problem as a sparse recovery problem and solve it using ℓ1-minimization. As an alternative, the approximate message passing (AMP) algorithm had been tested but yielded pessimistic results. Our contribution is to successfully solve this problem using the AMP framework. A recently developed adaptive damping technique has been adopted to address the issue that AMP normally only works well with Gaussian matrices. Statistical models are designed to capture the nature of the signal more authentically. The expectation maximization (EM) method has been used to learn the unknown hyper-parameters of the statistical model in an online fashion. Simulations demonstrate that our method achieves better recognition performance than the impressive benchmark ℓ1-minimization, is robust to the initial values of hyper-parameters, and exhibits low computational cost.},
  keywords = {expectation-maximisation algorithm;face recognition;learning (artificial intelligence);matrix algebra;message passing;statistical analysis;approximate message passing algorithm;algorithmic approaches;robust face recognition problem;test face image;sparse recovery problem;ℓ1-minimization;AMP algorithm;adaptive damping technique;Gaussian matrices;statistical models;expectation maximization method;EM method;hyperparameter learning;Robustness;Signal processing algorithms;Face recognition;Damping;Training;Adaptation models;Sparse matrices;Approximate message passing (AMP);compressed sensing;robust face recognition;sparse signal processing},
  doi = {10.1109/EUSIPCO.2016.7760451},
  issn = {2076-1465},
  month = {Aug},
}
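For orientation, a textbook AMP iteration for the LASSO with simple damping looks as follows (this is not the paper's face-recognition model, its signal priors, or its EM hyper-parameter learning; the threshold schedule and damping factor are assumptions). The Onsager correction term is what distinguishes AMP from plain iterative soft thresholding.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp_lasso(A, y, lam=0.1, damp=0.7, iters=50):
    """Approximate message passing for the LASSO with simple damping.
    A is assumed to have roughly i.i.d. entries of variance 1/M."""
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                       # pseudo-data
        tau = np.sqrt(np.mean(z**2))          # effective noise level estimate
        x_new = soft(r, lam + tau)            # heuristic threshold schedule
        # Onsager correction keeps the effective noise approximately Gaussian
        onsager = (N / M) * z * np.mean(np.abs(x_new) > 0)
        z_new = y - A @ x_new + onsager
        x = damp * x_new + (1 - damp) * x     # damping stabilises non-ideal A
        z = damp * z_new + (1 - damp) * z
    return x

rng = np.random.default_rng(7)
M, N, K = 100, 256, 10
A = rng.normal(size=(M, N)) / np.sqrt(M)
x0 = np.zeros(N); x0[rng.choice(N, K, replace=False)] = rng.normal(size=K)
y = A @ x0 + 0.01 * rng.normal(size=M)
x_hat = amp_lasso(A, y)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```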
@InProceedings{7760452,
  author = {M. Ramezanali and P. P. Mitra and A. M. Sengupta},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Mean field analysis of sparse reconstruction with correlated variables},
  year = {2016},
  pages = {1267-1271},
  abstract = {Sparse reconstruction algorithms aim to retrieve high-dimensional sparse signals from a limited number of measurements. A common example is LASSO or Basis Pursuit, where sparsity is enforced using an ℓ1-penalty together with a cost function ||y - Hx||_2^2. For random design matrices H, a sharp phase transition boundary separates the `good' parameter region, where error-free recovery of a sufficiently sparse signal is possible, from a `bad' regime where the recovery fails. However, theoretical analysis of the phase transition boundary in the correlated variables case lags behind that of uncorrelated variables. Here we use the replica trick from statistical physics to show that when an N-dimensional signal x is K-sparse and H is M × N dimensional with the covariance E[H_ia H_jb] = (1/M) C_ij D_ab, with all D_aa = 1, perfect recovery occurs at M ~ ψ_K(D) K log(N/M) in the very sparse limit, where ψ_K(D) ≥ 1, indicating the need for more observations for the same degree of sparsity.},
  keywords = {matrix algebra;signal reconstruction;statistical analysis;mean field analysis;correlated variables;sparse reconstruction algorithm;high-dimensional sparse signals;LASSO;basis pursuit;ℓ1-penalty;cost function;random design matrices;sharp phase transition boundary;error-free recovery;sparse signal;theoretical analysis;phase transition boundary;statistical physics;N-dimensional signal;sparsity degree;Sparse matrices;Optimization;Signal processing;Covariance matrices;Europe;Correlation;Physics;Compressed sensing;structured matrices;replica method;Basis Pursuit},
  doi = {10.1109/EUSIPCO.2016.7760452},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256501.pdf},
}
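The predicted effect — column correlations raising the number of measurements needed for perfect recovery at fixed sparsity — can be probed empirically. The sketch below is an illustrative experiment, not the replica calculation; the AR(1) Toeplitz correlation C_ij = rho^|i-j| and all problem sizes are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def recovery_rate(rho, M=80, N=200, K=8, trials=20):
    """Fraction of trials where LASSO recovers the support, with the design
    columns correlated as C_ij = rho^|i-j| (Toeplitz AR(1) covariance)."""
    rng = np.random.default_rng(8)
    C = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    L = np.linalg.cholesky(C + 1e-10 * np.eye(N))
    hits = 0
    for _ in range(trials):
        H = rng.normal(size=(M, N)) @ L.T / np.sqrt(M)
        x0 = np.zeros(N); x0[rng.choice(N, K, replace=False)] = 1.0
        y = H @ x0                                   # noiseless measurements
        x_hat = Lasso(alpha=1e-3, max_iter=20000).fit(H, y).coef_
        hits += set(np.flatnonzero(np.abs(x_hat) > 0.1)) == set(np.flatnonzero(x0))
    return hits / trials

for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho}: recovery rate {recovery_rate(rho):.2f}")
```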
@InProceedings{7760453,
  author = {S. Sakji-Nsibi and A. Benazza-Benyahia},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Region-based image retrieval using a joint scalable Bayesian segmentation and feature extraction},
  year = {2016},
  pages = {1272-1276},
  abstract = {In this paper, a region-based system is designed for textured image retrieval. A scalable joint Bayesian segmentation and feature extraction in the wavelet transform domain is performed. The segmentation map and the extracted region features are refined by exploiting more decomposition levels. In order to account for spatial dependencies, a Markov Random Field (MRF) is employed to model the prior distribution of the segmentation map at each scale. Moreover, a coarse-to-fine resolution retrieval procedure is proposed. Experimental results carried out on remote sensing images corroborate the gain achieved by the proposed indexing method. Moreover, resorting to an adaptive smoothing parameter reflecting the image homogeneity improves the gain provided by the proposed approach.},
  keywords = {Bayes methods;feature extraction;image resolution;image retrieval;image segmentation;Markov processes;remote sensing;wavelet transforms;region-based image retrieval;joint scalable Bayesian segmentation and feature extraction;wavelet transform domain;spatial dependency;Markov random field;MRF;coarse to fine resolution retrieval procedure;remote sensing image;adaptive smoothing parameter;segmentation map prior distribution;Feature extraction;Image segmentation;Bayes methods;Image retrieval;Approximation algorithms;Maximum likelihood estimation},
  doi = {10.1109/EUSIPCO.2016.7760453},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251829.pdf},
}
@InProceedings{7760454,
  author = {K. Ueki and T. Kobayashi},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Image retrieval under very noisy annotations},
  year = {2016},
  pages = {1277-1282},
  abstract = {In recent years, a significant number of tagged images uploaded onto image sharing sites have enabled us to create high-performance image recognition models. However, there are many inaccurate image tags on the Internet, and it is very laborious to investigate the percentage of tags that are incorrect. In this paper, we propose a new method for creating an image recognition model that can be used even when the image data set includes many incorrect tags. Our method has two superior features. First, our method automatically measures the reliability of annotations and does not require any parameter adjustment for the percentage of error tags. This is a very important feature because we usually do not know how many errors are included in the database, especially in actual Internet environments. Second, our method iterates the error modification process. It begins with the modification of simple and obvious errors, gradually deals with much more difficult errors, and finally creates the high-performance recognition model with refined annotations. Using an object recognition image database with many annotation errors, our experiments showed that the proposed method successfully improved the image retrieval performance in approximately 90 percent of the image object categories.},
  keywords = {image recognition;image retrieval;very-noisy annotation;object recognition image database;error modification process;error tag percentage;annotation reliability;Internet;high-performance image recognition model;image sharing sites;tagged images;image retrieval;Image recognition;Reliability;Training;Data models;Image retrieval},
  doi = {10.1109/EUSIPCO.2016.7760454},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251887.pdf},
}
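In the spirit of the iterated error-modification loop described above, the sketch below trains a classifier on the noisy tags and relabels only high-confidence disagreements, lowering the confidence bar each round so that obvious errors are fixed first (note that the paper's reliability measure is parameter-free, whereas this stand-in uses an explicit confidence schedule; the 2-D Gaussian data and 20% flip rate are assumptions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def clean_labels(X, tags, rounds=5, start_conf=0.99):
    """Iteratively relabel training tags that a model contradicts with high
    confidence, lowering the confidence bar each round (obvious errors first)."""
    tags = tags.copy()
    for r in range(rounds):
        conf_bar = start_conf - 0.05 * r           # gradually accept harder cases
        model = LogisticRegression(max_iter=1000).fit(X, tags)
        pred = model.predict(X)
        conf = model.predict_proba(X).max(axis=1)
        wrong = (pred != tags) & (conf > conf_bar)
        tags[wrong] = pred[wrong]                  # modify suspected error tags
    return tags

rng = np.random.default_rng(9)
X = np.r_[rng.normal(-2, 1, size=(300, 2)), rng.normal(2, 1, size=(300, 2))]
y = np.r_[np.zeros(300, int), np.ones(300, int)]
noisy = y.copy()
flip = rng.choice(600, 120, replace=False)         # 20% incorrect tags
noisy[flip] = 1 - noisy[flip]
cleaned = clean_labels(X, noisy)
print("label accuracy before/after:", (noisy == y).mean(), (cleaned == y).mean())
```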
@InProceedings{7760455,
  author = {D. Liu and L. Cai and Y. Zhao and F. Hu},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Self-adaptive ground calibration in binocular surveillance system},
  year = {2016},
  pages = {1283-1287},
  abstract = {Object detection and tracking have always been crucial and challenging topics in computer vision. Compared with monocular vision systems, binocular vision systems (BVSs) have the advantage of dealing with illumination variation, shadow interference, and severe occlusion. Usually, the BVS constructs the world coordinate system by manually calibrating the ground plane. However, camera vibrations decrease the calibration precision and weaken the system performance. To automatically correct and update the parameters of the ground plane, we introduce the Linear Discriminant Analysis (LDA) method to analyze the results of object localization and include the feedback in the surveillance system. In this way, a closed-loop system that greatly improves the accuracy and stability of the surveillance system is constructed. Experimental results demonstrate that our approach works well in BVS for video surveillance.},
  keywords = {calibration;computer vision;feedback;object detection;object tracking;video surveillance;self-adaptive ground calibration;binocular surveillance system;object detection;object tracking;computer vision;binocular vision system;BVS;shadow interference;severe occlusion;illumination variation;linear discriminant analysis method;LDA method;object localization;closed-loop system;surveillance system stability;video surveillance;camera vibrations;Cameras;Calibration;Machine vision;Three-dimensional displays;Vibrations;Video surveillance;Binocular vision;Multi-object tracking;Self-adaptive ground calibration;Linear Discriminant Analysis},
  doi = {10.1109/EUSIPCO.2016.7760455},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252010.pdf},
}
@InProceedings{7760456,
  author = {K. Kikuchi and K. Ueki and T. Ogawa and T. Kobayashi},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Video semantic indexing using object detection-derived features},
  year = {2016},
  pages = {1288-1292},
  abstract = {A new feature extraction method based on object detection to achieve accurate and robust semantic indexing of videos is proposed. Local features (e.g., SIFT and HOG) and convolutional neural network (CNN)-derived features, which have been used in semantic indexing, in general are extracted from the entire image and do not explicitly represent the information of meaningful objects that contributes to the determination of semantic categories. In this case, the background region, which does not contain the meaningful objects, is unduly considered, exerting a harmful effect on the indexing performance. In the present study, an attempt was made to suppress the undesirable effects derived from the redundant background information by incorporating object detection technology into semantic indexing. In the proposed method, a combination of the meaningful objects detected in the video frame image is represented as a feature vector for verification of semantic categories. Experimental comparisons demonstrate that the proposed method facilitates the TRECVID semantic indexing task.},
  keywords = {feature extraction;neural nets;object detection;video retrieval;video signal processing;object detection-derived feature extraction;robust video semantic indexing;convolutional neural network-derived feature extraction;CNN-derived feature extraction;background region;harmful effect;background information redundancy;feature vector;TRECVID semantic indexing task;Semantics;Feature extraction;Indexing;Object detection;Bicycles;Automobiles},
  doi = {10.1109/EUSIPCO.2016.7760456},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252022.pdf},
}
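The core representation — a frame described by its detected objects rather than by whole-image features — reduces to a confidence-weighted histogram over an object vocabulary, as sketched below (the vocabulary and detections are hypothetical; the paper builds such vectors from a trained detector and feeds them to semantic-category classifiers).

```python
import numpy as np

# Hypothetical object vocabulary and per-frame detections (label, confidence)
VOCAB = ["person", "car", "bicycle", "dog", "chair"]

def detection_feature(detections, vocab=VOCAB):
    """Represent a video frame by confidence-weighted counts of detected objects,
    ignoring background pixels entirely."""
    idx = {name: i for i, name in enumerate(vocab)}
    v = np.zeros(len(vocab))
    for label, conf in detections:
        if label in idx:
            v[idx[label]] += conf
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

frame_detections = [("person", 0.9), ("person", 0.7), ("bicycle", 0.8)]
print(detection_feature(frame_detections))   # feature vector for a semantic classifier
```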
@InProceedings{7760457,
  author = {F. Boudjenouia and R. Jennane and K. Abed-Meraim and A. Chetouani},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Sequential stack decoder for multichannel image restoration},
  year = {2016},
  pages = {1293-1297},
  abstract = {In this paper, we propose a novel scheme for image restoration (IR) employing a sequential decoding technique based on a tree search, known as the Stack algorithm. The latter is a well-known method used for 1D signal decoding in wireless communication systems. The main idea is to extend the Stack algorithm to image restoration (2D) and to exploit the information diversity conveyed by the channels (multichannel) in order to restore the original image. To deal with the noisy case, a regularization term is introduced using the total variation and the wavelet transform. This method was tested on artificially degraded images (blurred and noisy). The obtained results confirm the relevance of the proposed approach.},
  keywords = {image coding;image restoration;sequential decoding;tree searching;wavelet transforms;wireless channels;multichannel image restoration;sequential decoding technique;tree search;sequential stack decoder;stack algorithm;1D signal decoding;wireless communication system;information diversity;total variation;wavelet transform;blurred image;noisy image;artificially degraded image;Decoding;Image restoration;Signal processing algorithms;Matrix decomposition;Noise measurement;TV;Image restoration;Multichannel;Sequential decoding;Stack algorithm;Regularization},
  doi = {10.1109/EUSIPCO.2016.7760457},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252082.pdf},
}
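The stack algorithm itself is a best-first tree search over partial symbol sequences, ordered by a path metric with a per-step bias so that deeper paths can compete with shallow ones. The 1-D toy decoder below conveys the mechanism (the paper's 2-D multichannel extension with TV/wavelet regularization is not reproduced; the noise level and bias are assumptions).

```python
import heapq
import numpy as np

def stack_decode(y, sigma=0.5, bias=0.3):
    """Stack (best-first) tree search over binary sequences: repeatedly extend
    the partial path with the best metric until a full-length path is popped.
    With a per-step bias the popped path is a good, not provably optimal, one."""
    n = len(y)
    heap = [(0.0, ())]                 # entries: (metric, partial path); lower = better
    while heap:
        metric, path = heapq.heappop(heap)
        if len(path) == n:
            return np.array(path)
        k = len(path)
        for bit in (0, 1):
            branch = (y[k] - bit) ** 2 / (2 * sigma**2) - bias   # Fano-like metric
            heapq.heappush(heap, (metric + branch, path + (bit,)))

rng = np.random.default_rng(10)
x = rng.integers(0, 2, size=12)
y = x + 0.4 * rng.normal(size=12)
print("sent:   ", x)
print("decoded:", stack_decode(y))
```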
@InProceedings{7760458,
  author = {R. Parois and W. Hamidouche and E. G. Mora and M. Raulet and O. Deforges},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Real-time UHD scalable multi-layer HEVC encoder architecture},
  year = {2016},
  pages = {1298-1302},
  abstract = {The High Efficiency Video Coding (HEVC) standard enables meeting new video quality demands such as Ultra High Definition (UHD). Its scalable extension (SHVC) allows simultaneously encoding different versions of a video, organised in layers. Thanks to inter-layer predictions, SHVC provides bit-rate savings over an equivalent HEVC simulcast encoding. Therefore, SHVC seems a promising solution for both broadcast and storage purposes. This paper proposes a multi-layer architecture built as a pipeline of software HEVC encoders to perform real-time UHD spatially-scalable SHVC encoding. Inter-layer predictions are furthermore implemented to provide bit-rate savings with a minimum impact on complexity. The proposed architecture provides a good trade-off between coding gains and coding speed, achieving real-time performance for 1080p60 and 1600p30 sequences in 2× spatial scalability. Moreover, experimental results show more than a 1000× speed-up compared to the SHVC reference software (SHM) and an introduced delay only reaching 14% of the equivalent HEVC coding speed.},
  keywords = {high definition video;video coding;software HEVC encoder;ultra-high definition;real-time UHD scalable multi-layer HEVC encoder architecture;Encoding;Pipelines;Real-time systems;Video coding;Streaming media;Standards;Software},
  doi = {10.1109/EUSIPCO.2016.7760458},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252094.pdf},
}
@InProceedings{7760459,
  author = {G. {Da Poian} and R. Bernardini and R. Rinaldo},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Robust reconstruction for CS-based fetal beats detection},
  year = {2016},
  pages = {1303-1307},
  abstract = {Due to its possible low-power implementation, Compressed Sensing (CS) is an attractive tool for physiological signal acquisition in emerging scenarios like Wireless Body Sensor Networks (WBSN) and telemonitoring applications. In this work we consider the continuous monitoring and analysis of the fetal ECG signal (fECG). We propose a modification of the low-complexity CS reconstruction SL0 algorithm, improving its robustness in the presence of noisy original signals and possibly ill-conditioned sensing/reconstruction procedures. We show that, while maintaining the same computational cost of the original algorithm, the proposed modification significantly improves the reconstruction quality, both for synthetic and real-world ECG signals. We also show that the proposed algorithm allows robust heart beat classification when sparse matrices, implementable with very low computational complexity, are used for compressed sensing of the ECG signal.},
  keywords = {body sensor networks;compressed sensing;computational complexity;electrocardiography;robust reconstruction;CS-based fetal beats detection;compressed sensing;wireless body sensor networks;WBSN;fetal ECG signal;low-complexity CS reconstruction SL0 algorithm;reconstruction quality;real-world ECG signals;heart beat classification;computational complexity;Signal processing algorithms;Sparse matrices;Sensors;Approximation algorithms;Dictionaries;Signal to noise ratio;Electrocardiography;Non-invasive Fetal ECG;Compressive sensing;Sparse representations},
  doi = {10.1109/EUSIPCO.2016.7760459},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251870.pdf},
}
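For reference, the original (non-robust) SL0 iteration that the paper modifies is short enough to show in full: gradient steps on a Gaussian-smoothed L0 measure, alternated with projection onto the measurement constraint, while the smoothing width σ is annealed (the decay rate, step size, and problem sizes below are assumptions).

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decay=0.6, mu=2.0, inner=5):
    """Smoothed-L0 (SL0) sparse recovery: gradient steps on a Gaussian-smoothed
    L0 measure with projection onto the feasible set {x : Ax = y}."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                         # minimum-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            delta = x * np.exp(-x**2 / (2 * sigma**2))
            x = x - mu * delta             # move towards a sparser x
            x = x - A_pinv @ (A @ x - y)   # project back onto the constraint
        sigma *= sigma_decay               # anneal the smoothing width
    return x

rng = np.random.default_rng(11)
M, N, K = 60, 200, 8
A = rng.normal(size=(M, N)) / np.sqrt(M)
x0 = np.zeros(N); x0[rng.choice(N, K, replace=False)] = rng.normal(size=K)
y = A @ x0
x_hat = sl0(A, y)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```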
@InProceedings{7760460,
  author = {M. Genicot and P. -. Absil and R. Lambiotte and S. Sami},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Coupled tensor decomposition: A step towards robust components},
  year = {2016},
  pages = {1308-1312},
  abstract = {Combining information present in multiple datasets is one of the key challenges to fully benefit from the increasing availability of data in a variety of fields. Coupled tensor factorization aims to address this challenge by performing a simultaneous decomposition of different tensors. However, tensor factorization tends to suffer from a lack of robustness, as the number of components affects the results to a large extent. In this work, a general framework for coupled tensor factorization is built to extract reliable components. Results from both individual and coupled decompositions are compared and divergence measures are used to adapt the number of components. It results in a joint decomposition method with (i) a variable number of components, (ii) shared and unshared components among tensors and (iii) robust components. Results on simulated data show a better modelling of the sources composing the datasets and an improved evaluation of the number of shared sources.},
  keywords = {matrix decomposition;tensors;coupled tensor decomposition;multiple datasets;coupled tensor factorization;joint decomposition method;variable number;Tensile stress;Matrix decomposition;Robustness;Convergence;Signal processing algorithms;Mathematical model;Europe},
  doi = {10.1109/EUSIPCO.2016.7760460},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252314.pdf},
}
@InProceedings{7760461,
  author = {J. G. R. {Almeida e Sousa} and C. M. Oliveira and L. A. {da Silva Cruz}},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Automatic detection of laser marks in retinal digital fundus images},
  year = {2016},
  pages = {1313-1317},
  abstract = {Diabetic retinopathy (DR) is the most frequent complication of diabetes mellitus that affects vision to the point of causing blindness. In advanced stages its progress can be delayed with laser photocoagulation, which leaves behind marks on the retina. Modern screening programs rely on automatic diagnostic algorithms to detect signs of DR in patients. These systems' performance may be impaired when the patient's retina presents marks from previous laser photocoagulation treatments. Since these patients are already being treated, it is desirable to detect and remove them from the screening program. An algorithm that automatically detects the presence of laser marks in retinal images using tree-based classifiers is proposed, and results on its performance are reported. Two new publicly accessible datasets containing retinal images with laser marks are provided in this paper.},
  keywords = {biomedical optical imaging;diseases;image classification;laser applications in medicine;medical image processing;radiation therapy;automatic laser mark detection;retinal digital fundus images;diabetic retinopathy;diabetes mellitus;blindness;laser photocoagulation treatments;tree-based classifiers;Lasers;Retina;Signal processing algorithms;Image segmentation;Feature extraction;Diabetes;Lesions;Diabetes;Biomedical image processing;Feature extraction;Classification algorithms},
  doi = {10.1109/EUSIPCO.2016.7760461},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252335.pdf},
}
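The classification stage of such a detector is conventional; the sketch below shows a tree-ensemble pipeline with cross-validated evaluation on stand-in features (the Gaussian features are placeholders for the colour/texture/shape descriptors a fundus-image pipeline would extract; class sizes and forest settings are assumptions).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in features: in the paper these would be descriptors extracted from
# fundus images (colour, texture, shape of candidate laser-mark regions)
rng = np.random.default_rng(12)
X_marks = rng.normal(loc=1.0, size=(80, 10))       # images with laser marks
X_clean = rng.normal(loc=0.0, size=(120, 10))      # images without
X = np.r_[X_marks, X_clean]
y = np.r_[np.ones(80, int), np.zeros(120, int)]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```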
@InProceedings{7760462,
  author = {F. A. Sid and F. Oulebsir-Boumghar and K. Abed-Meraim and R. Harba},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On the optimal reconstruction of dMRI images with multi-coil acquisition system},
  year = {2016},
  pages = {1318-1322},
  abstract = {In this paper, we consider a multi-coil diffusion MRI system and compare the achievable performance bounds for two image reconstruction methods using, respectively, the Matched Filtering (MF) and the Sum-of-Squares (SoS) techniques. This performance comparison is related to the parameter estimation accuracy of the multi-tensor diffusion model expressed in terms of Cramér-Rao Bounds (CRB). In particular, this analysis allows us to thoroughly quantify the large gain in favor of the MF approach and to illustrate the significant acquisition time reduction we can obtain if we replace the standard SoS technique by the MF-based one.},
  keywords = {biomedical MRI;image filtering;image reconstruction;matched filters;tensors;dMRI image optimal reconstruction;multicoil acquisition system;multicoil diffusion MRI system;matched filtering;MF;sum-of-square technique;SoS technique;multitensor diffusion parameter estimation accuracy;Cramer-Rao bound;CRB;acquisition time reduction;Coils;Image reconstruction;Signal to noise ratio;Magnetic resonance imaging;Tensile stress;Sensitivity;Parameter estimation;Cramér-Rao Bound;matched filter;sum-of-squares;Nc-Chi distribution;DT model},
  doi = {10.1109/EUSIPCO.2016.7760462},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252336.pdf},
}
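The two coil-combination rules being compared have one-line implementations, sketched below on synthetic per-coil data (a toy magnitude comparison, not the paper's CRB analysis of diffusion-tensor parameters; the coil count, sensitivities, and noise level are assumptions, and the SoS output is scale-normalised using the true sensitivities purely so the two RMSEs are comparable).

```python
import numpy as np

rng = np.random.default_rng(13)
n_coils, n_pix = 8, 1000
img = rng.rayleigh(size=n_pix)                       # true magnitude image
sens = rng.normal(size=(n_coils, n_pix)) \
     + 1j * rng.normal(size=(n_coils, n_pix))        # complex coil sensitivities
noise = 0.3 * (rng.normal(size=(n_coils, n_pix))
               + 1j * rng.normal(size=(n_coils, n_pix)))
data = sens * img + noise                            # per-coil measurements

# Sum-of-squares combination: needs no sensitivity knowledge but is biased
# upward by noise (here scale-normalised only for a fair RMSE comparison)
sos = np.sqrt(np.sum(np.abs(data) ** 2, axis=0)) \
    / np.sqrt(np.sum(np.abs(sens) ** 2, axis=0))

# Matched-filter combination: optimal weighting by the (known) sensitivities
mf = np.abs(np.sum(np.conj(sens) * data, axis=0)
            / np.sum(np.abs(sens) ** 2, axis=0))

for name, est in (("SoS", sos), ("MF", mf)):
    print(name, "RMSE:", np.sqrt(np.mean((est - img) ** 2)))
```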
@InProceedings{7760463,
  author = {A. Chaudhuri and A. Routray},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Source imaging of simple finger movements captured during auditory response tasks: Estimation of variation in neural information flow under varying vigilance levels},
  year = {2016},
  pages = {1323-1327},
  abstract = {Neural Information Flow during simple cognitive or motor actions is a very recent research topic. One of the simplest movements to be studied is a finger movement. In this paper, Electro-encephalograph (EEG) Source Imaging has been used to model the Neural Information Flow during performance of a simple Auditory Response Task (ART). The estimated sources are first divided into separate scouts according to the Destrieux Atlas. Then, the relative power carried in each scout at each instant has been used to create a neural information flow map. Another key feature of this paper lies in the fact that the database consisted of subjects performing a group of tasks repetitively until a significant drop in vigilance. This provides a unique point of view as to how a drop in vigilance levels affects the Neural Information Flow. The results show a correlation between trends reflected in EEG parameters and performance scores of neuropsychological tests.},
  keywords = {cognition;electroencephalography;hearing;neurophysiology;psychology;finger movements;auditory response tasks;neural information flow;cognitive actions;motor actions;electroencephalograph source imaging;EEG source imaging;ART;Destrieux Atlas;EEG parameters;neuropsychological tests;Electroencephalography;DC motors;Imaging;Brain modeling;Fingers;Subspace constraints;Signal processing;Source Imaging;Simple Finger Movements;EEG;Auditory Response;Neural Information Flow},
  doi = {10.1109/EUSIPCO.2016.7760463},
  issn = {2076-1465},
  month = {Aug},
}
@InProceedings{7760464,
  author = {F. Peñaranda and V. Naranjo and L. Kastl and B. Kemper and G. R. Lloyd and J. Nallala and N. Stone and J. Schnekenburger},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Multivariate classification of Fourier transform infrared hyperspectral images of skin cancer cells},
  year = {2016},
  pages = {1328-1332},
  abstract = {A multilevel framework for the multiclass classification of spectra extracted from Fourier transform infrared images is described. This learning structure was employed to discriminate the spectra extracted from hyperspectral images of two batches of four different cultured skin cell types (two normal and two tumor), where the cells of one batch had been stained with fluorescence live cell dyes. Different options were explored in each stage of the framework, specifically in the spectral pre-processing and the employed classification algorithm. Special care was taken to optimize the learning models and to objectively estimate the generalization performance by means of cross-validation. A very high discriminative performance was obtained for all the unstained skin cell types. However, the presence of the stains introduces spectral artifacts that worsen the class separation, as has been demonstrated in several classification experiments.},
  keywords = {biomedical optical imaging;cancer;cellular biophysics;Fourier transform infrared spectra;hyperspectral imaging;image classification;medical image processing;skin;skin cell type;spectral preprocessing;fluorescence live cell dye;skin cell culture;spectra classification;skin cancer cell;Fourier transform infrared hyperspectral image classification;Skin;Hyperspectral imaging;Cells (biology);Feature extraction;Substrates;Biomedical imaging;Cancer},
  doi = {10.1109/EUSIPCO.2016.7760464},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256192.pdf},
}
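As a schematic of the evaluation protocol this abstract describes (spectral pre-processing, a multiclass classifier, and cross-validated generalization estimates), the following scikit-learn sketch may help; the synthetic spectra, the four-class labels and the scaler-plus-RBF-SVM pipeline are placeholder choices, not the paper's exact pipeline.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 400))            # 200 spectra x 400 wavenumbers (synthetic stand-in)
y = rng.integers(0, 4, size=200)           # four cell types

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(pipe, X, y, cv=5)  # objective generalization estimate
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))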
@InProceedings{7760465,
  author = {K. Tanaka and T. Toda and G. Neubig and S. Nakamura},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Real-time vibration control of an electrolarynx based on statistical F0 contour prediction},
  year = {2016},
  pages = {1333-1337},
  abstract = {An electrolarynx is a speaking aid device to artificially generate excitation sounds to help laryngectomees produce electrolaryngeal (EL) speech. Although EL speech is quite intelligible, its naturalness significantly suffers from the unnatural fundamental frequency (F0) patterns of the mechanical excitation sounds. To make it possible to produce more natural-sounding EL speech, we have proposed a method to automatically control F0 patterns of the excitation sounds generated from the electrolarynx based on statistical F0 prediction, which predicts F0 patterns from the produced EL speech in real-time. In our previous work, we have developed a prototype system by implementing the proposed real-time prediction method in an actual, physical electrolarynx, and through the use of the prototype system, we have found that the improvements in the naturalness of EL speech yielded by the prototype system tend to be lower than those yielded by the batch-type prediction. In this paper, we examine the negative impacts caused by the latency of the real-time prediction on the F0 prediction accuracy, and to alleviate them, we also propose two methods: 1) modeling of segmented continuous F0 (CF0) patterns and 2) prediction of forthcoming F0 values. The experimental results demonstrate that 1) the conventional real-time prediction method needs a large delay to predict CF0 patterns and 2) the proposed methods have positive impacts on the real-time prediction.},
  keywords = {handicapped aids;prosthetics;speech;vibration control;real-time vibration control;statistical F0 contour prediction;speaking aid device;laryngectomees;electrolaryngeal speech;EL speech;unnatural fundamental frequency patterns;mechanical excitation sounds;actual electrolarynx;physical electrolarynx;batch-type prediction;F0 prediction accuracy;segmented continuous F0 patterns;forthcoming F0 values;conventional real-time prediction method;Speech;Real-time systems;Delays;Prototypes;Digital-analog conversion;Prediction methods;Control systems},
  doi = {10.1109/EUSIPCO.2016.7760465},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252097.pdf},
}
@InProceedings{7760466,
  author = {T. G. Csapó and G. Németh and M. Cernak and P. N. Garner},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Modeling unvoiced sounds in statistical parametric speech synthesis with a continuous vocoder},
  year = {2016},
  pages = {1338-1342},
  abstract = {In this paper, we introduce an improved excitation model for statistical parametric speech synthesis. Our earlier vocoder [1], which applies continuous F0 in combination with Maximum Voiced Frequency (MVF), is extended. The focus of this paper is on the modeling of unvoiced consonants, for which two alternative methods are proposed. The first method applies no postprocessing during MVF estimation to reduce the unwanted voiced component of unvoiced speech sounds. The second separates voiced and unvoiced excitation based on the phonetic labels of the text to be synthesized. In an objective experiment we found that the first method produces unvoiced sounds that are closer to natural speech in terms of Harmonics-to-Noise Ratio. A subjective listening test showed that both methods are more natural than our baseline system, and the second method is significantly preferred.},
  keywords = {harmonic analysis;source separation;speech synthesis;vocoders;unvoiced sound modelling;continuous vocoder;statistical parametric speech synthesis;maximum voiced frequency;MVF estimation;harmonics-to-noise ratio;Hidden Markov models;Vocoders;Speech;Estimation;Analytical models;Speech synthesis;Europe},
  doi = {10.1109/EUSIPCO.2016.7760466},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252130.pdf},
}
@InProceedings{7760467,
  author = {B. Vachhani and C. Bhat and S. Kopparapu},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Robust phonetic segmentation using multi-taper spectral estimation for noisy and clipped speech},
  year = {2016},
  pages = {1343-1347},
  abstract = {Robust phonetic segmentation is extremely important for several speech processing tasks such as phone level articulation analysis and error detection, speech synthesis, and annotation. In this paper, we present an unsupervised phonetic segmentation approach and its application to noisy and clipped speech such as mobile phone recordings. We propose a multi-taper-based Perceptual Linear Prediction (PLP) speech processing front-end, together with Spectral Transition Measure (STM) and a novel post-processing technique, to improve over the baseline STM technique. Performance of the proposed technique has been evaluated using precision, recall and F-score measures. Experimental results show an absolute improvement of 11% for TIMIT and 18% for Hindi speech data (clean) over the baseline approach. Significant improvement in phonetic segmentation was observed for noisy speech - simulated as well as mobile phone recordings.},
  keywords = {speech processing;speech synthesis;robust phonetic segmentation;Hindi speech data;TIMIT;F-score measure;baseline STM technique;STM;spectral transition measure;perceptual linear prediction;multitaper-based PLP speech processing front-end;mobile phone recordings;speech annotation;speech synthesis;error detection;phone level articulation analysis;clipped speech;noisy speech;multitaper spectral estimation;Speech;Speech processing;Noise measurement;Robustness;Estimation;Signal processing algorithms;Spectral Transition Measure;Multi-taper;Perceptual Linear Prediction;Clipping;Babble Noise},
  doi = {10.1109/EUSIPCO.2016.7760467},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252138.pdf},
}
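For readers unfamiliar with the multi-taper front-end mentioned above, this SciPy sketch shows the basic estimator: periodograms computed under orthogonal DPSS (Slepian) tapers are averaged to reduce spectral variance. The frame, NW and taper count are arbitrary choices of mine, not the paper's settings.

import numpy as np
from scipy.signal.windows import dpss

frame = np.random.default_rng(2).normal(size=256)   # one speech frame (stand-in)
tapers = dpss(256, NW=4, Kmax=6)                    # 6 orthogonal Slepian tapers
specs = np.abs(np.fft.rfft(tapers * frame, axis=1)) ** 2
mt_spectrum = specs.mean(axis=0)                    # taper-averaged, low-variance estimate
print("spectrum bins:", mt_spectrum.shape[0])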
@InProceedings{7760468,
  author = {B. P. Tóth and T. G. Csapó},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Continuous fundamental frequency prediction with deep neural networks},
  year = {2016},
  pages = {1348-1352},
  abstract = {Deep learning has been proven to outperform other machine learning methods in numerous research fields. However, previous approaches, such as multispace probability distribution hidden Markov models, still surpass deep learning methods in the prediction accuracy of the speech fundamental frequency (F0), inter alia due to its discontinuous behavior. The current research focuses on the application of feedforward deep neural networks (DNNs) for modeling continuous F0 extracted by a recent vocoding technique. In order to achieve lower validation error, hyperparameter optimization with manual grid search was carried out. The results of objective and subjective evaluations show that, using continuous F0 trajectories, DNNs can reach the modeling performance of previous state-of-the-art solutions. The complexity of DNN architectures could be reduced in the case of continuous F0 contours as well.},
  keywords = {feedforward neural nets;optimisation;search problems;speech coding;continuous fundamental frequency prediction;deep neural networks;machine learning;feedforward deep neural networks;continuous F0 modeling;vocoding technique;validation error;hyperparameter optimization;grid search;continuous F0 trajectories;DNN architectures;continuous F0 contours;Hidden Markov models;Vocoders;Speech;Neural networks;Training;Correlation;Optimization;feedforward deep neural networks;speech synthesis;fundamental frequency;F0},
  doi = {10.1109/EUSIPCO.2016.7760468},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252139.pdf},
}
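A minimal stand-in for the workflow above (a feedforward network regressing continuous F0, with hyperparameter grid search), using scikit-learn; the input features, targets and search grid are synthetic placeholders rather than the paper's setup.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 50))     # linguistic/acoustic input features (stand-in)
y = rng.normal(size=500)           # continuous log-F0 targets (stand-in)

grid = {"hidden_layer_sizes": [(64,), (64, 64)], "alpha": [1e-4, 1e-3]}
search = GridSearchCV(MLPRegressor(max_iter=500), grid, cv=3,
                      scoring="neg_mean_squared_error")
search.fit(X, y)                   # grid search with cross-validated error
print("best params:", search.best_params_)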
@InProceedings{7760469,
  author = {B. Elie and Y. Laprie and P. Vuissoz and F. Odille},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {High spatiotemporal cineMRI films using compressed sensing for acquiring articulatory data},
  year = {2016},
  pages = {1353-1357},
  abstract = {The paper presents a method to acquire articulatory data from a sequence of MRI images at a high framerate. The acquisition rate is enhanced by partially collecting data in the kt-space. The combination of a compressed sensing technique and homodyne reconstruction enables the missing data to be recovered. Good reconstruction is guaranteed by an appropriate design of the sampling pattern. It is based on a pseudo-random Cartesian scheme, where each line is partially acquired for use in the homodyne reconstruction, and where the lines are pseudo-randomly sampled: central lines are constantly acquired and the sampling density decreases as the lines get farther from the center. Experiments on real speech data show that the framework enables dynamic sequences of vocal tract images to be recovered at a framerate higher than 30 frames per second and with a spatial resolution of 1 mm. A method to extract articulatory data from contour identification is presented. It is ultimately intended to be used for the creation of a large database of articulatory data.},
  keywords = {biomedical MRI;compressed sensing;data acquisition;edge detection;image reconstruction;image sampling;image sequences;medical image processing;speech processing;speech production;articulatory data extraction;contour identification;dynamic vocal tract image sequences;sampling density;pseudo-random Cartesian scheme;sampling pattern;homodyne reconstruction;kt-space;MRI image sequence;articulatory data acquisition;compressed sensing technique;high spatiotemporal cineMRI films;Image reconstruction;Magnetic resonance imaging;Speech;Coils;Compressed sensing;Europe;Signal processing;Dynamic speech MRI;Compressed Sensing;Articulatory data},
  doi = {10.1109/EUSIPCO.2016.7760469},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252171.pdf},
}
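To visualize the sampling scheme described in this abstract, the NumPy sketch below builds such a mask: fully sampled central lines, pseudo-random line selection with density decaying away from the centre, and partial acquisition of each line to enable homodyne reconstruction. All proportions are illustrative guesses, not the paper's parameters.

import numpy as np

rng = np.random.default_rng(4)
n_lines, n_ro = 128, 128
centre, n_full = n_lines // 2, 16

dist = np.abs(np.arange(n_lines) - centre)
p = np.clip(1.0 - dist / (n_lines / 2), 0.05, 1.0) ** 2   # density decays from centre
p[dist < n_full // 2] = 1.0                               # central lines always acquired
line_on = rng.random(n_lines) < p                         # pseudo-random line selection

mask = np.zeros((n_lines, n_ro), dtype=bool)
mask[line_on, : int(0.625 * n_ro)] = True   # partial (asymmetric) readout for homodyne
print("overall sampling ratio: %.2f" % mask.mean())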
@InProceedings{7760470,
  author = {C. Sørensen and J. B. Boldt and F. Gran and M. G. Christensen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Semi-non-intrusive objective intelligibility measure using spatial filtering in hearing aids},
  year = {2016},
  pages = {1358-1362},
  abstract = {Reliable non-intrusive online assessment of speech intelligibility can play a key role for the functioning of hearing aids, e.g. as guidance for adjusting the hearing aid settings to the environment. While existing intrusive metrics can provide a precise and reliable measure, the current non-intrusive metrics have not been able to achieve acceptable intelligibility predictions. This paper presents a new semi-non-intrusive intelligibility measure based on an existing intrusive measure, STOI, where an estimate of the clean speech is extracted using spatial filtering in the hearing aid. The results indicate that the STOI score obtained with the proposed method using an estimate of the clean speech correlates well with the STOI score obtained when the original clean speech signal is available.},
  keywords = {hearing aids;medical signal processing;spatial filters;speech processing;seminonintrusive objective intelligibility measure;spatial filtering;reliable nonintrusive online assessment;speech intelligibility;hearing aid settings;nonintrusive metrics;intrusive measure;STOI score;original clean speech signal;Speech;Hearing aids;Microphones;Measurement;Speech enhancement;Reliability;Non-intrusive objective intelligibility measure;generalized sidelobe canceller;hearing aids},
  doi = {10.1109/EUSIPCO.2016.7760470},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252211.pdf},
}
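The clean-speech estimate in this approach comes from the hearing aid's spatial filter (a generalized sidelobe canceller in the paper). As a toy stand-in, the sketch below uses a plain delay-and-sum beamformer and reports the SNR gain; the signals, steering delays and noise levels are invented, and the STOI computation itself is omitted.

import numpy as np

rng = np.random.default_rng(10)
speech = rng.normal(size=16000)                   # stand-in target signal
delays = [0, 3, 5]                                # known steering delays (samples)
mics = np.stack([np.roll(speech, d) for d in delays])
mics = mics + 0.3 * rng.normal(size=mics.shape)   # independent noise per microphone

aligned = np.stack([np.roll(ch, -d) for ch, d in zip(mics, delays)])
estimate = aligned.mean(axis=0)                   # delay-and-sum: coherent target gain

in_snr = 10 * np.log10(speech.var() / 0.3 ** 2)
out_snr = 10 * np.log10(speech.var() / (estimate - speech).var())
print("SNR: %.1f dB -> %.1f dB" % (in_snr, out_snr))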
@InProceedings{7760471,
  author = {J. Fanjul and I. Santamaria},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On the spatial degrees of freedom benefits of reverse TDD in multicell MIMO networks},
  year = {2016},
  pages = {1363-1367},
  abstract = {In this paper we study the degrees of freedom (DoF) achieved by interference alignment (IA) for cellular networks in reverse time division duplex (R-TDD) mode, a new configuration associated with heterogeneous networks. We derive a necessary feasibility condition for interference alignment in the multi-cell R-TDD scenario, which is then specialized to the particular case of symmetric demands and antenna distribution. We show that, for those symmetric networks for which the properness condition holds with equality, R-TDD does not improve the DoF performance of conventional synchronous TDD systems. Nevertheless, our simulation results indicate that, in more asymmetric scenarios, significant DoF benefits can be achieved by applying the R-TDD approach.},
  keywords = {cellular radio;interference (signal);MIMO communication;time division multiplexing;spatial degrees of freedom;reverse TDD;multicell MIMO networks;degrees of freedom;interference alignment;cellular networks;reverse time division duplex;R-TDD;antenna distribution;DoF performance;synchronous TDD systems;Interference;Uplink;Antennas;Downlink;Cellular networks;Base stations;MIMO;Interference alignment;cellular networks;heterogeneous networks;reverse TDD;degrees of freedom},
  doi = {10.1109/EUSIPCO.2016.7760471},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570247524.pdf},
}
@InProceedings{7760472,
  author = {S. Chen and K. C. Ho},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Localization of a mobile rigid sensor network},
  year = {2016},
  pages = {1368-1372},
  abstract = {The relative positions of the sensors from one another in a rigid sensor network are known, and locating the network reduces to obtaining its position, orientation angle, and translational and angular velocities with respect to a global coordinate frame from the measurements with anchors. The previous solution is computationally demanding and may not be suitable in a resource-constrained environment. We propose a solution for this highly nonlinear estimation problem using a divide-and-conquer approach in the 2-D scenario. We first obtain from the measurements the sensor positions and velocities, pretending no prior knowledge among them, and then exploit their relative positions to estimate the unknown parameters. Methods are available for the first step. We focus on the second step and develop a closed-form solution through nuisance variables and nonlinear transformations. The proposed estimator is computationally attractive and attains the CRLB for Gaussian noise over the small error region.},
  keywords = {Gaussian noise;mobile radio;nonlinear estimation;sensor placement;wireless sensor networks;mobile rigid sensor network;orientation angle;translational velocities;angular velocities;nonlinear estimation problem;sensor positions;nonlinear transformations;CRLB performance;Gaussian noise;Noise measurement;Position measurement;Closed-form solutions;Europe;Signal processing;Angular velocity;Estimation;Closed-form solution;localization;mobile rigid network;time and Doppler measurements},
  doi = {10.1109/EUSIPCO.2016.7760472},
  issn = {2076-1465},
  month = {Aug},
}
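The second step of this abstract (fitting the position and orientation of a known rigid geometry to roughly estimated sensor positions) admits a classical SVD-based closed form (orthogonal Procrustes/Kabsch). The 2-D sketch below is that textbook construction, not the authors' estimator, and it ignores the velocity components.

import numpy as np

def fit_rigid_2d(local, estimated):
    """Closed-form R, t minimizing ||R @ local + t - estimated||_F."""
    mu_l = local.mean(axis=1, keepdims=True)
    mu_e = estimated.mean(axis=1, keepdims=True)
    H = (estimated - mu_e) @ (local - mu_l).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(U @ Vt)])   # guard against reflections
    R = U @ D @ Vt
    return R, mu_e - R @ mu_l

rng = np.random.default_rng(5)
local = rng.normal(size=(2, 6))                 # known relative sensor geometry
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
est = R_true @ local + np.array([[3.0], [-1.0]]) + 0.01 * rng.normal(size=(2, 6))
R, t = fit_rigid_2d(local, est)
print("recovered orientation angle:", np.arctan2(R[1, 0], R[0, 0]))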
@InProceedings{7760473,
  author = {T. Le and N. Ono},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Closed-form solution for TDOA-based joint source and sensor localization in two-dimensional space},
  year = {2016},
  pages = {1373-1377},
  abstract = {In this paper, we propose a closed-form solution for time-difference-of-arrival (TDOA) based joint source and sensor localization in two-dimensional space (2D). This closed-form solution is a combination of two closed-form solutions for time-of-arrival information recovery and time-of-arrival (TOA)-based joint source and sensor localization in 2D. In our previous works, we derived closed-form solutions for TOA-based joint source and sensor localization and near-closed-form solutions for TOA information recovery in three-dimensional space (3D). Since the localization in 2D is simpler than that in 3D, closed-form solutions for both problems in 2D are derived in this paper. The root-mean-square errors (RMSEs) achieved by the proposed closed-form solution are compared with the Cramér-Rao lower bound (CRLB) in synthetic experiments. The results show that the proposed solution works well in both low-noise and noisy cases and with both small and large numbers of sources and sensors.},
  keywords = {mean square error methods;sensor placement;time-of-arrival estimation;time difference of arrival;time of arrival information recovery;TOA;root mean square errors;RMSE;Cramér-Rao lower bound;CRLB;closed-form solution;time of arrival based joint source and sensor localization;TDOA based joint source and sensor localization;two-dimensional space;Two dimensional displays;Closed-form solutions;Manganese;Three-dimensional displays;Europe;Signal processing;Eigenvalues and eigenfunctions;Time Difference of Arrival;Time of Arrival;Joint Source and Sensor Localization},
  doi = {10.1109/EUSIPCO.2016.7760473},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251911.pdf},
}
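For context on what a closed-form TDOA solution looks like, here is the textbook 2-D linearization with known anchor positions: expanding the range differences yields an ordinary least-squares problem in the source position and the reference range. The paper's harder joint source-and-sensor problem builds on constructions of this kind; the anchor layout and noise here are invented.

import numpy as np

rng = np.random.default_rng(6)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, -3.0]])
x_true = np.array([3.0, 4.0])
r = np.linalg.norm(anchors - x_true, axis=1)
d = (r - r[0])[1:] + 1e-3 * rng.normal(size=len(anchors) - 1)   # TDOAs w.r.t. anchor 0

# From r_i^2 - r_0^2:  -2 (a_i - a_0)^T x - 2 d_i r_0 = d_i^2 - ||a_i||^2 + ||a_0||^2
A = np.hstack([-2 * (anchors[1:] - anchors[0]), -2 * d[:, None]])
b = d ** 2 - np.sum(anchors[1:] ** 2, axis=1) + np.sum(anchors[0] ** 2)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated source:", sol[:2])   # sol[2] is the reference range r_0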
@InProceedings{7760474,
  author = {J. Tranter and N. D. Sidiropoulos and X. Fu and A. Swami},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Fast unit-modulus least squares with applications in transmit beamforming},
  year = {2016},
  pages = {1378-1382},
  abstract = {This paper considers the Unit-modulus Least Squares (ULS) problem, which is commonly seen in signal processing applications, e.g., phase-only beamforming, phase retrieval and radar code design. ULS formulations are easily reformulated as Unit-modulus Quadratic Programs (UQPs), to which Semi-Definite Relaxation (SDR) can be applied, and is often the state-of-the-art approach. SDR has the drawback of squaring the number of variables, which lifts the problem to much higher dimension and renders SDR ill-suited for large-scale ULS/UQP. In this work, we propose first-order algorithms that meet or exceed SDR performance in terms of (approximately) solving ULS problems, and also exhibit much more favorable runtime performance relative to SDR. We specialize to phase-only beamformer design, which entails additional degrees of freedom that we point out and exploit in two custom algorithms that build upon the general first-order algorithm for ULS/UQP. Simulations are used to showcase the effectiveness of the proposed algorithms.},
  keywords = {array signal processing;least squares approximations;quadratic programming;unit-modulus least squares problem;ULS problem;phase retrieval;radar code design;unit-modulus quadratic programs;UQP;semidefinite relaxation;SDR;first-order algorithms;phase-only beamformer design;transmit beamforming;Signal processing algorithms;Array signal processing;Algorithm design and analysis;Cost function;Convergence},
  doi = {10.1109/EUSIPCO.2016.7760474},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251990.pdf},
}
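A minimal first-order method in the spirit of this abstract: projected gradient descent for min ||Az - b||^2 subject to |z_i| = 1, where each gradient step is followed by a projection back onto the unit circle. The problem sizes and step-size rule are my choices; the paper's custom beamforming algorithms add further structure.

import numpy as np

rng = np.random.default_rng(7)
m, n = 64, 16
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
z_true = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
b = A @ z_true

z = np.exp(1j * rng.uniform(0, 2 * np.pi, n))   # random unit-modulus start
mu = 1.0 / np.linalg.norm(A, 2) ** 2            # step size from the Lipschitz bound
for _ in range(500):
    z = z - mu * A.conj().T @ (A @ z - b)       # gradient step on the LS cost
    z = z / np.abs(z)                           # project back onto |z_i| = 1
print("residual:", np.linalg.norm(A @ z - b))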
@InProceedings{7760475,
  author = {R. Hayakawa and K. Hayashi and H. Sasahara and M. Nagahara},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Massive overloaded MIMO signal detection via convex optimization with proximal splitting},
  year = {2016},
  pages = {1383-1387},
  abstract = {In this paper, we propose signal detection schemes for massive overloaded multiple-input multiple-output (MIMO) systems, where the number of receive antennas is less than that of transmitted streams. Using the idea of the sum-of-absolute-value (SOAV) optimization, we formulate the signal detection as a convex optimization problem, which can be solved via a fast algorithm based on Douglas-Rachford splitting. To improve the performance, we also propose an iterative approach to solve the optimization problem with a weighting-parameter update in the cost function. Simulation results show that the proposed scheme can achieve much better bit error rate (BER) performance than conventional schemes, especially in large-scale overloaded MIMO systems.},
  keywords = {convex programming;error statistics;iterative methods;MIMO communication;signal detection;massive overloaded MIMO signal detection;convex optimization;proximal splitting;receive antennas;sum-of-absolute-value optimization;fast algorithm;Douglas-Rachford splitting;weighting parameters;bit error rate;MIMO;Optimization;Signal detection;Receiving antennas;Signal processing algorithms;Complexity theory;Iterative methods;massive MIMO;overloaded MIMO;proximal splitting methods;Douglas-Rachford algorithm;SOAV optimization},
  doi = {10.1109/EUSIPCO.2016.7760475},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252007.pdf},
}
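To illustrate the machinery named above, here is a hedged NumPy sketch of Douglas-Rachford splitting applied to a simplified SOAV-style objective for detecting +/-1 symbols in an overloaded system: the sum of distances to the symbol set plus a quadratic data fit. The regularized formulation, dimensions and parameters are mine, not the paper's exact algorithm.

import numpy as np

rng = np.random.default_rng(8)
n_rx, n_tx = 48, 64                                 # overloaded: fewer receive dims
A = rng.normal(size=(n_rx, n_tx)) / np.sqrt(n_rx)
x_true = rng.choice([-1.0, 1.0], size=n_tx)
y = A @ x_true + 0.01 * rng.normal(size=n_rx)

gamma = 1.0
M = np.linalg.inv(np.eye(n_tx) + gamma * A.T @ A)   # prox of the data-fit term

def prox_soav(v, g):
    # prox of g * (|t - 1| + |t + 1|), elementwise: pulls v toward {-1, +1}
    return np.where(np.abs(v) <= 1.0, v,
                    np.sign(v) * np.maximum(1.0, np.abs(v) - 2.0 * g))

z = np.zeros(n_tx)
for _ in range(200):                                # Douglas-Rachford iterations
    x = M @ (z + gamma * A.T @ y)                   # prox of 0.5 * ||y - Ax||^2
    w = prox_soav(2 * x - z, gamma)                 # prox of the SOAV penalty
    z = z + w - x
print("symbol errors:", np.sum(np.sign(x) != x_true))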
@InProceedings{7760476,
  author = {R. Elsabae and B. Basutli and K. Ghanem and Y. Gong and S. Lambotharan},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Coordinated multicell beamforming with local and global data rate constraints},
  year = {2016},
  pages = {1388-1392},
  abstract = {We propose optimization techniques for coordinated multi-cell beamforming in the presence of local users and a global user. The local users are served by only the corresponding basestation (BS) while the global user is served by multiple basestations. The global user, with the aid of multiple antennas, is able to decode multiple data streams transmitted by various transmitters through singular value decomposition of the channels at the receiver, using the left dominant singular vectors as the receiver beamforming. The coordinating basestations employ semidefinite programming based transmitter beamforming and agree to perform an optimum data rate split for the global user in order to minimise the transmission power.},
  keywords = {antenna arrays;antenna radiation patterns;array signal processing;cellular radio;mathematical programming;minimisation;mobile antennas;optimisation;radio receivers;radio transmitters;receiving antennas;singular value decomposition;telecommunication power management;transmitting antennas;coordinated multicell beamforming;local data rate constraint;global data rate constraint;optimization technique;global user;local user;multiple basestations;multiple antennas;channel singular value decomposition;left dominant singular vector;receiver beamforming;semidefinite programing-based transmitter beamforming;transmission power minimisation;Interference;Array signal processing;Signal to noise ratio;Antennas;Downlink;Europe;Beamforming;SVD;downlink;SINR target;multiantenna;quality of service},
  doi = {10.1109/EUSIPCO.2016.7760476},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252090.pdf},
}
@InProceedings{7760477,
  author = {F. Kottmann and S. Bolognani and F. Dörfler},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A separation principle for optimal IaaS cloud computing distribution},
  year = {2016},
  pages = {1393-1397},
  abstract = {Due to the rising importance of cloud computing and infrastructure as a service (IaaS), the first markets for the exchange of computational power over the internet are being implemented. As of today, bandwidth constraints are not explicitly embedded in these market mechanisms. In this paper, the problem of optimal allocation of the computing power and of the corresponding data flows, according to bandwidth and computing capacity constraints, is modeled as a bilevel optimization program. It is shown that this program, which is generally non-convex and hard to solve, has the same optimal solution as its convex relaxation. This allows us to state a fundamental separation result, showing that the congestion control protocols employed in the network do not affect the optimal allocation problem, and to compute the shadow prices of the available computational resources.},
  keywords = {cloud computing;concave programming;resource allocation;separation principle;infrastructure as a service;IaaS;cloud computing distribution;optimal allocation problem;bilevel optimization program;nonconvex program;Bandwidth;Resource management;Optimization;Cloud computing;Computational modeling;Protocols;Cloud computing;network control;congestion control;bilevel optimization;IaaS},
  doi = {10.1109/EUSIPCO.2016.7760477},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252241.pdf},
}
@InProceedings{7760478,
  author = {R. Chauvat and J. Delmas and P. Chevalier},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Two and three inputs widely linear FRESH receivers for cancellation of a quasi-rectilinear interference with frequency offset},
  year = {2016},
  pages = {1398-1402},
  abstract = {Widely linear (WL) receivers can fulfill single antenna interference cancellation (SAIC) of one rectilinear (R) or quasi-rectilinear (QR) co-channel interference (CCI). The SAIC technology for QR signals has been shown to be less powerful than SAIC for R signals. To overcome this limitation, a SAIC/MAIC enhancement using a three-input WL frequency-shift (FRESH) receiver has been introduced for QR signals. However, this receiver loses its efficiency for an interference having a residual frequency offset (FO) above a fraction of the baud rate. This may occur in airborne communications, and it is the case for the inter-carrier interference of filter-bank based multicarrier waveforms using OQAM constellations, which are candidates for 5G mobile networks. This paper extends the standard two-input SAIC/MAIC receiver and the three-input WL FRESH receiver to QR signals with FO. Analytical results and simulations are presented to study the impact of this FO on the performance of these receivers.},
  keywords = {5G mobile communication;cochannel interference;interference suppression;quasi-rectilinear interference cancellation;frequency offset;widely linear FRESH receivers;single antenna interference cancellation;cochannel interference;SAIC;QR;CCI;SAIC/MAIC enhancement;residual frequency offset;intercarrier interference;5G mobile networks;Receivers;Modulation;Interference;Analytical models;Standards;Channel estimation;Correlation},
  doi = {10.1109/EUSIPCO.2016.7760478},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252271.pdf},
}
@InProceedings{7760479,
  author = {E. Vlachos and K. Berberidis},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Adaptive completion of the correlation matrix in wireless sensor networks},
  year = {2016},
  pages = {1403-1407},
  abstract = {The correlation structure among the sensor observations is a significant characteristic of the wireless sensor network (WSN) which can be exploited to drastically enhance the overall network performance. This structure is usually expressed as a low-rank approximation of the correlation matrix, although in many cases the correlation of the captured data is full-rank. Thus, the computation of the full-rank correlation matrix by centralizing all the measurements into one node puts the privacy of the WSN at risk. To overcome this problem, we impose privacy-preserving restrictions in order to constrain the cooperation among the nodes and hence promote privacy. To this end, the decentralized estimation of the network-wide correlation matrix is obtained via a novel adaptive matrix completion technique, where at each step a rank-one completion problem is solved. Through simulation experiments it has been verified that the proposed algorithm converges to the full-rank correlation matrix. Moreover, the proposed algorithm exhibits significantly lower computational complexity than the conventional technique.},
  keywords = {approximation theory;computational complexity;data privacy;matrix algebra;wireless sensor networks;adaptive matrix completion technique;rank-one completion problem;computational complexity;network-wide correlation matrix decentralized estimation;privacy-preserving restriction;full-rank correlation matrix low-rank approximation;WSN;wireless sensor network;Correlation;Matrix decomposition;Wireless sensor networks;Symmetric matrices;Signal processing algorithms;Europe;Privacy},
  doi = {10.1109/EUSIPCO.2016.7760479},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252301.pdf},
}
@InProceedings{7760480,
  author = {A. Youssef and C. Delpha and D. Diallo},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Enhancement of incipient fault detection and estimation using the multivariate Kullback-Leibler Divergence},
  year = {2016},
  pages = {1408-1412},
  abstract = {Fault detection and diagnosis methods have to deal with the large variable data sets encountered in complex industrial systems. Solutions to this problem require multivariate statistics approaches, often focused on the reduction of the space dimension. In this paper we propose a fault detection and estimation approach using the Multivariate Kullback-Leibler Divergence (MKLD) to cope with the negative effects due to dimension reduction when using Principal Component Analysis (PCA). The obtained results show its superiority over the usual PCA-KLD based approach. An analytical model of the MKLD is proposed and validated for low-severity (incipient) fault detection and estimation in noisy operating conditions.},
  keywords = {fault diagnosis;principal component analysis;multivariate Kullback-Leibler divergence;incipient fault detection;fault estimation;fault diagnosis;multivariate statistics;principal component analysis;PCA-KLD;Principal component analysis;Fault detection;Estimation;Monitoring;Probability distribution;Signal to noise ratio;Europe;Incipient fault diagnosis;Detection;Estimation;Kullback-Leibler Divergence;Multivariate analysis},
  doi = {10.1109/EUSIPCO.2016.7760480},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251974.pdf},
}
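For reference, the Kullback-Leibler divergence between two multivariate Gaussian distributions has the well-known closed form implemented below; the detection and estimation scheme built around this quantity is the paper's contribution, and the healthy/faulty parameters here are illustrative numbers only.

import numpy as np

def kld_gauss(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) for d-dimensional Gaussians."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    dmu = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + dmu @ S1_inv @ dmu - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# healthy vs. faulty operating conditions (illustrative numbers)
mu_h, S_h = np.zeros(3), np.eye(3)
mu_f, S_f = np.array([0.1, 0.0, -0.05]), 1.2 * np.eye(3)
print("divergence:", kld_gauss(mu_h, S_h, mu_f, S_f))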
@InProceedings{7760481,
  author = {M. Witschi and J. Schild and B. Nyffenegger and C. Stoller and M. Berger and R. Vetter and G. Stirnimann and P. Schwab and F. Dellsperger},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Detection of modern communication signals using frequency domain morphological filtering},
  year = {2016},
  pages = {1413-1417},
  abstract = {Safeguarding sensitive areas and buildings from unauthorized use of wireless communication or drone intrusion requires reliable and robust detection of modern communication signals. We present an algorithm for detecting UMTS, LTE and current drone communication signals in adverse environments. Morphological filtering in the frequency domain is used to separate the desired signal from perturbations, thus allowing reliable detection. In a first step, the algorithm was validated on synthetic signals with narrowband single-carrier perturbations and promising results were obtained. Long-term field tests conducted in a prison in Germany revealed high performance in terms of low false alarm rates (0-0.8%) and high sensitivity (98.2-100%).},
  keywords = {filters;signal detection;modern communication signals;frequency domain morphological filtering;signal detection;adverse environments;narrowband single carrier perturbations;Bandwidth;Drones;3G mobile communication;Filtering;Long Term Evolution;Signal processing algorithms;Time-frequency analysis;Morphological Filtering;LTE;UMTS;Signal Detection},
  doi = {10.1109/EUSIPCO.2016.7760481},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252140.pdf},
}
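A hedged sketch of frequency-domain morphological filtering, assuming SciPy's grey-scale opening as the morphological operator: an opening whose structuring element is wider than any single carrier flattens narrowband perturbations while preserving wide UMTS/LTE-like bumps. Sizes and thresholds are placeholders, not the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import grey_opening
from scipy.signal import welch

def wideband_mask(x, fs, struct_bins=65, thresh_db=6.0):
    """Flag spectral regions occupied by wideband signals: open the
    PSD (in dB) to suppress narrow spikes, then threshold the opened
    spectrum a few dB above its median noise floor."""
    f, pxx = welch(x, fs=fs, nperseg=4096)
    psd_db = 10 * np.log10(pxx + 1e-20)
    opened = grey_opening(psd_db, size=struct_bins)   # removes narrow peaks
    return f, opened - np.median(opened) > thresh_db
```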
@InProceedings{7760482,
  author = {T. Vafeiadis and D. Ioannidis and S. Krinidis and C. Ziogou and S. Voutetakis and D. Tzovaras and S. Likothanassis},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Real-time incident detection: An approach for two interdependent time series},
  year = {2016},
  pages = {1418-1422},
  abstract = {A method is proposed to detect incidents that occur in two interdependent time series in real time, estimating the incident time point from the profiles of the linear trend test statistics computed on consecutive overlapping data windows. The method is based on the Slope Statistics Profile (SSP), utilizing adaptive data windowing and producing real-time classifications of the linear trend profiles according to two different linear trend scenarios, suitably adapted to the conditions of the problem. The method is applied to real datasets from a chemical process system situated at the premises of CERTH / CPERI, suggesting the occurrence of incidents during experiments.},
  keywords = {signal classification;signal detection;statistical analysis;time series;real-time incident detection;interdependent time series;linear trend test statistics;overlapping data window;slope statistics profile;SSP;adaptive data windowing;real-time classification;linear trend profiles;CERTH;CPERI;chemical process system;Market research;Time series analysis;Temperature measurement;Real-time systems;Estimation;Resistance heating;Time series;structural change;incident detection;linear trend},
  doi = {10.1109/EUSIPCO.2016.7760482},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252172.pdf},
}
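The ingredient this abstract rests on, a profile of linear-trend test statistics over consecutive overlapping windows, can be sketched generically as below; the paper's adaptive windowing and two-scenario classification are omitted, and the window sizes are assumptions.

```python
import numpy as np
from scipy import stats

def slope_stat_profile(x, win=50, step=5):
    """t-statistics of the fitted linear trend over consecutive
    overlapping windows -- the profile an SSP-style detector scans
    for a structural change."""
    idx = np.arange(win)
    centers, t_vals = [], []
    for start in range(0, len(x) - win + 1, step):
        res = stats.linregress(idx, x[start:start + win])
        t_vals.append(res.slope / res.stderr)   # slope t-statistic
        centers.append(start + win // 2)
    return np.array(centers), np.array(t_vals)
```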
@InProceedings{7760483,
  author = {A. Chiş and J. Rajasekharan and J. Lundén and V. Koivunen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Demand response for renewable energy integration and load balancing in smart grid communities},
  year = {2016},
  pages = {1423-1427},
  abstract = {This paper proposes a demand response strategy for energy management within a smart grid community of residential households. Some of the households own renewable energy systems and energy storage systems (ESS) and sell the excess renewable energy to the residences that need electrical energy. The proposed strategy comprises methods that provide benefits for the residential electricity users and for the load aggregator. Specifically, we propose an off-line algorithm that schedules renewable resource integration by trading energy between the renewable energy producers and buyers. Moreover, we propose a geometric-programming-based optimization method that uses the ESS for balancing the community's power grid load and for reducing the grid consumption cost. Simulations show that the proposed method may lead to a grid consumption cost reduction of 10.5% for the community. It may also achieve balanced load profiles with a peak-to-average ratio (PAR) close to unity, the average PAR reduction being 52%.},
  keywords = {cost reduction;demand side management;energy management systems;geometric programming;power consumption;power system economics;renewable energy sources;smart power grids;PAR reduction;balanced load profiles;peak to average ratio;grid consumption cost reduction;power grid load;ESS;optimization method;geometric programming;renewable resources;off-line algorithm;load aggregator;residential electricity users;electrical energy;energy storage systems;residential households;energy management;demand response strategy;smart grid communities;load balancing;renewable energy integration;Renewable energy sources;Load management;Signal processing algorithms;Smart grids;Peak to average power ratio;Optimization;smart grids;optimization;demand response;renewable energy;load balancing;geometric programming},
  doi = {10.1109/EUSIPCO.2016.7760483},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252213.pdf},
}
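As a toy stand-in for the geometric-programming step, the sketch below flattens a daily load profile with a rate- and capacity-limited ESS and reports the PAR. It is a greedy heuristic under assumed units (energy per time slot), not the paper's optimizer.

```python
import numpy as np

def par(load):
    """Peak-to-average ratio of a load profile (NumPy array)."""
    return load.max() / load.mean()

def flatten_with_ess(load, capacity, max_rate):
    """Charge the ESS below the mean load, discharge above it,
    subject to state-of-charge and rate limits."""
    target = load.mean()
    soc, out = 0.0, np.empty_like(load, dtype=float)
    for t, l in enumerate(load):
        want = np.clip(target - l, -max_rate, max_rate)  # + charge, - discharge
        want = np.clip(want, -soc, capacity - soc)       # respect stored energy
        soc += want
        out[t] = l + want
    return out

load = np.array([2.0, 2.5, 3.0, 6.0, 8.0, 4.0, 2.5, 2.0])
print(par(load), par(flatten_with_ess(load, capacity=6.0, max_rate=2.0)))
```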
@InProceedings{7760484,
  author = {M. Chafii and J. Palicot and R. Gribonval and A. G. Burr},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Power spectral density limitations of the wavelet-OFDM system},
  year = {2016},
  pages = {1428-1432},
  abstract = {Wavelet-OFDM, based on the discrete wavelet transform, is a multicarrier modulation technique of considerable interest, due to its good performance in several respects, such as the peak-to-average power ratio and interference cancellation, as investigated in the literature. More specifically, the Haar wavelet has been proposed by various researchers as the most attractive wavelet for data transmission. In this paper, we address the power spectral density limitations of Wavelet-OFDM, and we show analytically and experimentally that the bandwidth efficiency of Haar Wavelet-OFDM is significantly poorer than that of OFDM, with larger main and side lobes, which reduces the attractiveness of the scheme.},
  keywords = {discrete wavelet transforms;Haar transforms;OFDM modulation;radiocommunication;power spectral density limitation;Haar wavelet-OFDM system;multicarrier modulation technique;discrete wavelet transform;data transmission;OFDM;Modulation;Bandwidth;Frequency-domain analysis;Discrete wavelet transforms;Wavelet domain;Wavelet-OFDM;Orthogonal Frequency Division Multiplexing (OFDM);Haar wavelet;Discrete Wavelet Transform (DWT);Power Spectral Density (PSD)},
  doi = {10.1109/EUSIPCO.2016.7760484},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252216.pdf},
}
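The PSD comparison can be reproduced in spirit with PyWavelets: map the same symbols onto a subset of OFDM subcarriers and onto one Haar DWT subband, then compare out-of-band power. This is an illustrative setup under our own assumptions (block sizes, active carriers, no pulse shaping), not the paper's exact configuration.

```python
import numpy as np
import pywt
from scipy.signal import welch

rng = np.random.default_rng(0)
n_blk, n_car, n_act = 256, 128, 32          # blocks, carriers, active carriers
sym = np.zeros((n_blk, n_car))
sym[:, :n_act] = rng.choice([-1.0, 1.0], size=(n_blk, n_act))

# OFDM: one IFFT per block, data on the first 32 of 128 carriers
ofdm = np.fft.ifft(sym, axis=1).ravel()

# Haar Wavelet-OFDM: the same 32 symbols per block drive the level-2
# approximation subband; detail subbands left empty
wav = np.concatenate([
    pywt.waverec([row[:n_act], np.zeros(n_act), np.zeros(2 * n_act)], 'haar')
    for row in sym])

for name, sig in [('OFDM', np.real(ofdm)), ('Haar Wavelet-OFDM', wav)]:
    f, p = welch(sig, nperseg=1024)
    oob = p[f > n_act / n_car].sum() / p.sum()   # power past the band edge
    print(f'{name}: out-of-band power {100 * oob:.1f}%')
```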
@InProceedings{7760485,
  author = {A. Aranburu and J. Olaizola and M. Barrenechea and P. Mulroy and A. Hurtado and I. Gilbert},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Signal processing techniques for on-line partial discharge detection and classification},
  year = {2016},
  pages = {1433-1437},
  abstract = {Partial discharge (PD) detection plays a fundamental role in monitoring the health of medium voltage (MV) systems. This paper presents a method for PD detection and source recognition in MV substations based on a combination of signal processing techniques. First, PD detection and signal conditioning are carried out. Then, PDs of different sources are separated and finally classified by means of extension set theory. The obtained results show a classification effectiveness of 100% on single-source PDs and an effectiveness of 72.5% on multisource PDs, where PDs from many sources are captured in the same data set.},
  keywords = {condition monitoring;partial discharges;power engineering computing;set theory;signal classification;source separation;substations;online partial discharge detection;online partial discharge classification;signal processing technique;PD detection;medium voltage system health monitoring;MV system health monitoring;source recognition;MV substation;signal conditioning;source separation;extension set theory;Partial discharges;Energy states;Oscillators;Signal processing;Discrete wavelet transforms;Europe;partial discharge;PD;pattern recognition;classification;extension set theory},
  doi = {10.1109/EUSIPCO.2016.7760485},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252253.pdf},
}
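Extension-set classifiers typically score a feature against each class with the elementary dependent function over a classical and a neighborhood interval, assigning the class with the largest score. The sketch below is the textbook form of that function; the paper's PD feature set and class intervals are not given in the abstract, so everything here is generic.

```python
def ext_dist(x, a, b):
    """Extension distance of x from the interval <a, b>."""
    return abs(x - (a + b) / 2) - (b - a) / 2

def dependent_degree(x, classical, neighborhood):
    """Elementary dependent function K(x) of extension set theory:
    K(x) > 0 means x lies in the classical (class) domain; the class
    with the largest K wins."""
    a, b = classical        # class interval, contained in...
    c, d = neighborhood     # ...the neighborhood (joint) interval
    rho0 = ext_dist(x, a, b)
    rho1 = ext_dist(x, c, d)
    if rho1 == rho0:        # degenerate boundary case
        return -rho0
    return rho0 / (rho1 - rho0)
```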
@InProceedings{7760486,
  author = {J. M. M. Torres and A. Ghosh and E. A. Stepanov and G. Riccardi},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Heal-T: An efficient PPG-based heart-rate and IBI estimation method during physical exercise},
  year = {2016},
  pages = {1438-1442},
  abstract = {Photoplethysmography (PPG) is a simple, unobtrusive and low-cost technique for measuring the blood volume pulse (BVP) used in heart-rate (HR) estimation. However, PPG-based heart-rate monitoring devices are often affected by motion artifacts in on-the-go scenarios, and can yield a noisy BVP signal reporting erroneous HR values. Recent studies have proposed spectral decomposition techniques (e.g. M-FOCUSS, Joint-Sparse-Spectrum) to reduce motion artifacts and increase HR estimation accuracy, but at the cost of a high computational load. The singular-value decompositions and recursive calculations present in these approaches are not feasible for implementation in real-time continuous-monitoring scenarios. In this paper, we propose an efficient HR estimation method based on a combination of fast-ICA, RLS and BHW filter stages that avoids sparse signal reconstruction while maintaining high HR estimation accuracy. The proposed method outperforms state-of-the-art systems on the publicly available TROIKA data set both in terms of HR estimation accuracy (absolute error of 2.25 ± 1.93 bpm) and computational load.},
  keywords = {independent component analysis;least squares approximations;photoplethysmography;recursive filters;singular value decomposition;sparse signal reconstruction avoidance;BHW filter;RLS;fast-ICA;HR estimation method;real-time continuous-monitoring scenarios;recursive calculations;singular value decomposition;motion artifact reduction;spectral decomposition technique;on-the-go scenario;PPG based heart-rate monitoring device;BVP measurement;blood volume pulse measurement;photoplethysmography;physical exercise;PPG-based heart-rate estimation method;IBI estimation method;HEAL-T;Heart rate;Estimation;Accelerometers;Finite impulse response filters;Bandwidth},
  doi = {10.1109/EUSIPCO.2016.7760486},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252279.pdf},
}
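Two of the named stages are easy to sketch in isolation: an RLS canceller that predicts the motion artifact in the PPG from an accelerometer reference, and a spectral-peak HR read-out. Filter order, forgetting factor and the HR band are our assumptions, not the paper's settings.

```python
import numpy as np

def rls_cancel(ppg, acc, order=8, lam=0.999, delta=100.0):
    """RLS adaptive noise canceller: predict the motion artefact from
    the accelerometer channel and keep the prediction error (the
    artefact-reduced PPG)."""
    w = np.zeros(order)
    P = np.eye(order) / delta
    e = np.zeros(len(ppg))
    for n in range(order, len(ppg)):
        u = acc[n - order:n][::-1]
        k = P @ u / (lam + u @ P @ u)
        e[n] = ppg[n] - w @ u
        w = w + k * e[n]
        P = (P - np.outer(k, u @ P)) / lam
    return e

def hr_bpm(x, fs):
    """Heart rate from the spectral peak in the 0.7-3.5 Hz band."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    f = np.fft.rfftfreq(len(x), 1 / fs)
    band = (f > 0.7) & (f < 3.5)
    return 60 * f[band][np.argmax(spec[band])]
```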
@InProceedings{7760487,
  author = {Y. Saidane and S. {Ben Jebara}},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Features selection for analyzing the effect of preparation instruction on forearm muscles during pre-motor activity},
  year = {2016},
  pages = {1443-1447},
  abstract = {This paper studies the effect of a preparation instruction on pre-motor activity in EMG signals. Two kinds of trials are investigated: the first uses a warning signal for mental preparation of a contraction, while the second does not use any preparation warning. Time-domain analysis has been carried out in order to select Relative Power (RP) and Preparation Duration (PD) as relevant features that characterize the pre-motor stage. Two modes of preparation are defined, small and large: the small one is characterized by a short preparation with relatively low power, and large preparation is the opposite case. Statistical analyses for male and female subjects, using MANOVA tests, are performed. They show a diversity of behaviors and discrimination abilities according to muscle type, preparation type and gender.},
  keywords = {electromyography;feature selection;muscle;signal processing;statistical analysis;forearm muscle;feature selection;preparation instruction effect;pre-motor activity;EMG signal;warning signal;mental preparation;contraction;time domain analysis;statistical analysis;MANOVA test;Muscles;Electromyography;Signal processing;Europe;Time-domain analysis;Analysis of variance;Electrodes;EMG signal;preparation instruction;Preparation Duration (PD);Relative Power (RP);small/large preparation;trials discrimination},
  doi = {10.1109/EUSIPCO.2016.7760487},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252299.pdf},
}
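A MANOVA of the two selected features against the experimental factors can be run with statsmodels; the data frame below is a hypothetical toy, only the feature names (RP, PD) and the factors (preparation, gender) come from the abstract.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# hypothetical frame: one row per trial
df = pd.DataFrame({
    'RP':     [0.12, 0.45, 0.10, 0.51, 0.14, 0.48],
    'PD':     [180,  420,  160,  460,  170,  440],
    'prep':   ['no', 'yes', 'no', 'yes', 'no', 'yes'],
    'gender': ['m',  'm',  'f',  'f',  'm',  'f'],
})
fit = MANOVA.from_formula('RP + PD ~ prep + gender', data=df)
print(fit.mv_test())   # Wilks' lambda etc. per factor
```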
@InProceedings{7760488,
  author = {A. Onose and R. E. Carrillo and J. D. McEwen and Y. Wiaux},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A randomised primal-dual algorithm for distributed radio-interferometric imaging},
  year = {2016},
  pages = {1448-1452},
  abstract = {Next generation radio telescopes, like the Square Kilometre Array, will acquire an unprecedented amount of data for radio astronomy. The development of fast, parallelisable or distributed algorithms for handling such large-scale data sets is of prime importance. Motivated by this, we investigate herein a convex optimisation algorithmic structure, based on primal-dual forward-backward iterations, for solving the radio interferometric imaging problem. It can encompass any convex prior of interest. It allows for the distributed processing of the measured data and introduces further flexibility by employing a probabilistic approach for the selection of the data blocks used at a given iteration. We study the reconstruction performance with respect to the data distribution and we propose the use of nonuniform probabilities for the randomised updates. Our simulations show the feasibility of the randomisation given a limited computing infrastructure as well as important computational advantages when compared to state-of-the-art algorithmic structures.},
  keywords = {image processing;iterative methods;optimisation;distributed radio-interferometric imaging;randomised primal-dual algorithm;next generation radio telescopes;square kilometre array;radio astronomy;convex optimisation algorithmic structure;primal-dual forward-backward iterations;distributed processing;probabilistic approach;reconstruction performance;nonuniform probability;limited computing infrastructure;Signal processing algorithms;Image reconstruction;Imaging;Minimization;Distributed databases;Signal processing;Optimization;primal-dual algorithm;image processing;radio interferometry},
  doi = {10.1109/EUSIPCO.2016.7760488},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252337.pdf},
}
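A single-machine sketch of a randomised primal-dual forward-backward iteration: a blockwise least-squares data term (one block per compute node) with an l1 prior, where each block's dual variable is updated only with probability p. The l1 prior, step sizes and block layout are our simplifications of the paper's general convex-prior setting.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: proximity operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def randomised_pd(y_blocks, Phi_blocks, lam, p=0.5, n_iter=500, sigma=1.0):
    """min_x lam*||x||_1 + sum_b 0.5*||Phi_b x - y_b||^2 via a
    primal-dual forward-backward scheme with randomised dual updates."""
    n = Phi_blocks[0].shape[1]
    L = sum(np.linalg.norm(Pb, 2) ** 2 for Pb in Phi_blocks)
    tau = 0.9 / (sigma * L)                           # step-size condition
    x, x_bar = np.zeros(n), np.zeros(n)
    v = [np.zeros(len(yb)) for yb in y_blocks]
    rng = np.random.default_rng(1)
    for _ in range(n_iter):
        for b, (yb, Pb) in enumerate(zip(y_blocks, Phi_blocks)):
            if rng.random() < p:                      # this block is active
                t = v[b] + sigma * (Pb @ x_bar)
                v[b] = (t - sigma * yb) / (1 + sigma)  # prox of the conjugate
        grad = sum(Pb.T @ vb for Pb, vb in zip(Phi_blocks, v))
        x_new = soft(x - tau * grad, tau * lam)
        x_bar = 2 * x_new - x
        x = x_new
    return x
```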
@InProceedings{7760489,
  author = {N. Faraji and S. M. Ahadi and H. Sheikhzadeh},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Reverberation time estimation based on a model for the power spectral density of reverberant speech},
  year = {2016},
  pages = {1453-1457},
  abstract = {In this paper, it is shown that the power spectral density (PSD) of late reverberant speech can be described by a first-order infinite impulse response (IIR) model whose pole is related to the reverberation time. Utilizing this first-order IIR model, an online method for reverberation time estimation (RTE) from a recorded reverberant signal is proposed. The proposed method takes advantage of processing in the subband domain in order to reliably estimate the reverberation time in noisy environments. Compared with a well-known maximum likelihood approach to RTE, the new approach is shown to track the RT faster and with higher accuracy.},
  keywords = {maximum likelihood estimation;reverberation;speech processing;transient response;reverberation time estimation;power spectral density;reverberant speech;first-order infinite impulse response model;RTE;maximum likelihood approach;Europe;Signal processing;Conferences},
  doi = {10.1109/EUSIPCO.2016.7760489},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256574.pdf},
}
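The pole-to-RT60 mapping implied by a first-order IIR power model can be sketched directly: if the frame-to-frame power envelope obeys P[k+1] = a P[k] with a = exp(-2*delta*T), then RT60 = 3 ln(10) / delta. This single-band, least-squares sketch omits the paper's subband processing and online tracking.

```python
import numpy as np

def rt60_from_pole(power_env, hop_s):
    """Fit the AR(1) pole of a frame power envelope on decaying
    frames and map it to RT60 (hop_s: frame hop in seconds)."""
    p0, p1 = power_env[:-1], power_env[1:]
    decay = p1 < p0                         # use free-decay frames only
    a = np.sum(p0[decay] * p1[decay]) / np.sum(p0[decay] ** 2)
    delta = -np.log(a) / (2 * hop_s)        # a = exp(-2*delta*T)
    return 3 * np.log(10) / delta           # 60 dB decay time
```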
@InProceedings{7760490,
  author = {Y. {El Baba} and A. Walther and E. A. P. Habets},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Reflector localization based on multiple reflection points},
  year = {2016},
  pages = {1458-1462},
  abstract = {Reflector localization has been the subject of growing research interest in recent years. This paper outlines an approach that performs reflector localization based on loudspeaker and microphone positions and their images. The positions of the latter are computed using pre-grouped sets of times of arrival (TOAs) estimated from room impulse responses. First, the TOA sets are used to estimate the microphone positions. Second, these are used with knowledge of the array geometry to determine the locations of reflection points on the available reflectors. Finally, the reflection points are used to obtain the reflector locations. It is shown that the proposed approach facilitates solving the reflector localization problem in ill-conditioned setups.},
  keywords = {acoustic wave reflection;array signal processing;loudspeakers;microphone arrays;time-of-arrival estimation;multiple reflection points;loudspeaker position;microphone position;times of arrival estimation;TOA estimation;room impulse response;reflector localization problem;Loudspeakers;Microphone arrays;Geometry;Two dimensional displays;Europe;Signal processing;Image model;reflection point localization;reflector localization;room geometry inference},
  doi = {10.1109/EUSIPCO.2016.7760490},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255745.pdf},
}
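The geometric core of image-model reflector localization fits in a few lines of NumPy: the reflector is the perpendicular bisector plane of a source and its image, and the reflection point is where the image-to-microphone segment crosses that plane. Function names are ours; this is the geometry only, not the paper's TOA grouping pipeline.

```python
import numpy as np

def reflector_plane(src, img_src):
    """Perpendicular bisector plane {x : n.x = d} of a source and its
    first-order image -- the reflector in the image model."""
    n = img_src - src
    n = n / np.linalg.norm(n)
    mid = 0.5 * (src + img_src)
    return n, float(n @ mid)

def reflection_point(src, img_src, mic):
    """Point where the path src -> reflector -> mic hits the plane:
    the intersection of the segment img_src -> mic with it."""
    n, d = reflector_plane(src, img_src)
    t = (d - n @ img_src) / (n @ (mic - img_src))
    return img_src + t * (mic - img_src)
```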
@InProceedings{7760491,
  author = {C. S. Reddy and R. Agarwal and L. Aggarwal and R. M. Hegde},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Binaural source localization using a HRTF data model with enhanced frequency diversity},
  year = {2016},
  pages = {1463-1467},
  abstract = {A novel method for binaural source localization using an HRTF data model with enhanced frequency diversity is proposed in this work. The method is developed in the frequency domain and enhances the directional information in the sound signal for efficient broadband sound source localization even under conditions of low SNR. The directional information is enhanced by cancelling the spectrum of the direct sound source. Subsequently, the sound source is localized using a subspace-based method. Binaural source localization experiments are performed using HRIRs from the CIPIC database and HRTFs recorded from a customized mannequin developed using 3D printing technology. Experimental results on sound source localization indicate a reasonable improvement in terms of RMSE when compared to state-of-the-art methods.},
  keywords = {direction-of-arrival estimation;signal processing;binaural source localization;HRTF data model;enhanced frequency diversity;sound signal;directional information;direct sound source;3D printing technology;sound source localization;Mathematical model;Data models;Frequency diversity;Sensors;Databases;Ear;Bandwidth},
  doi = {10.1109/EUSIPCO.2016.7760491},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256234.pdf},
}
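A subspace-style scan over a measured HRTF grid can be sketched as a MUSIC-like pseudo-spectrum with the left/right HRTF pair as the steering vector per direction. This is our generic stand-in for "subspace-based method"; the paper's direct-path spectrum cancellation is not reproduced.

```python
import numpy as np

def hrtf_pseudospectrum(X, hrtf_grid):
    """MUSIC-style scan for one frequency bin.
    X: (2, n_frames) binaural STFT snapshots;
    hrtf_grid: (n_dirs, 2) complex left/right HRTFs for that bin."""
    R = X @ X.conj().T / X.shape[1]          # 2x2 binaural covariance
    w, V = np.linalg.eigh(R)
    noise = V[:, :-1]                        # noise subspace (single source)
    P = noise @ noise.conj().T
    num = np.einsum('dc,cx,dx->d', hrtf_grid.conj(), P, hrtf_grid)
    return 1.0 / np.real(num)                # peaks at the source direction
```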
@InProceedings{7760492,
  author = {A. Alexandridis and A. Mouchtaris},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Improving narrowband DOA estimation of sound sources using the complex Watson distribution},
  year = {2016},
  pages = {1468-1472},
  abstract = {Narrowband direction-of-arrival (DOA) estimates for each time-frequency (TF) point offer a parametric spatial modeling of the acoustic environment which is very commonly used in many applications, such as source separation, dereverberation, and spatial audio. However, irrespective of the narrowband DOA estimation method used, many TF-points suffer from erroneous estimates due to noise and reverberation. We propose a novel technique to yield more accurate DOA estimates in the TF-domain, through statistical modeling of each TF-point with a complex Watson distribution. Then, instead of using the microphone array signals at a given TF-point to estimate the DOA, the maximum likelihood estimate of the mode vector of the distribution is used as input to the DOA estimation method. This approach results in more accurate DOA estimates and thus more accurate modeling of the acoustic environment, while it can be used with any narrowband DOA estimation method and microphone array geometry.},
  keywords = {acoustic radiators;acoustic signal processing;array signal processing;direction-of-arrival estimation;maximum likelihood estimation;microphone arrays;source separation;statistical distributions;time-frequency analysis;sound source estimation;complex Watson distribution;direction-of-arrival estimates;time-frequency point;parametric spatial modeling;acoustic environment;narrowband DOA estimation method;TF-points;statistical modeling;microphone array signals;maximum likelihood estimation;mode vector;microphone array geometry;Direction-of-arrival estimation;Narrowband;Maximum likelihood estimation;Time-frequency analysis;Microphone arrays},
  doi = {10.1109/EUSIPCO.2016.7760492},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256262.pdf},
}
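The ML estimate of the mode of a complex Watson distribution is the principal eigenvector of the sample scatter matrix, which is the vector the abstract says is fed to the DOA estimator in place of raw snapshots. A direct sketch (the neighborhood over which snapshots are pooled is an assumption):

```python
import numpy as np

def watson_mode(Z):
    """ML mode of a complex Watson distribution fitted to normalized
    microphone snapshots Z of shape (n_mics, n_obs): the eigenvector
    of the scatter matrix with the largest eigenvalue."""
    Z = Z / np.linalg.norm(Z, axis=0, keepdims=True)
    S = Z @ Z.conj().T
    w, V = np.linalg.eigh(S)
    return V[:, -1]          # principal eigenvector
```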
@InProceedings{7760493,
  author = {S. Delikaris-Manias and D. Pavlidi and V. Pulkki and A. Mouchtaris},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {3D localization of multiple audio sources utilizing 2D DOA histograms},
  year = {2016},
  pages = {1473-1477},
  abstract = {Steered response power (SRP) techniques have been well appreciated for their robustness and accuracy in estimating the direction of arrival (DOA) when a single source is active. However, by increasing the number of sources, the complexity of the resulting power map increases, making it challenging to localize the separate sources. In this work, we propose an efficient 2D histogram processing approach which is applied on the local DOA estimates, provided by SRP, and reveals the DOA of multiple audio sources in an iterative fashion. Driven by the results, we also apply the same methodology to local DOA estimates of a known subspace method and improve its accuracy. The performance of the presented algorithms is validated with numerical simulations and real measurements with a rigid spherical microphone array in different acoustical conditions: for multiple audio sources with different angular separations, various reverberation and signal-to-noise ratio (SNR) values.},
  keywords = {direction-of-arrival estimation;iterative methods;microphone arrays;source separation;steered response power techniques;SRP techniques;direction of arrival;power map;2D histogram processing approach;local DOA estimates;multiple audio sources;known subspace method;rigid spherical microphone array;acoustical conditions;angular separations;signal-to-noise ratio values;SNR values;2D DOA histograms;Direction-of-arrival estimation;Histograms;Two dimensional displays;Multiple signal classification;Harmonic analysis;Microphone arrays},
  doi = {10.1109/EUSIPCO.2016.7760493},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256313.pdf},
}
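Iterative peak extraction from a 2D (azimuth, elevation) histogram of local DOA estimates can be sketched as below. Bin counts and the blanking radius are assumptions, and azimuth wrap-around is ignored for brevity.

```python
import numpy as np

def doa_from_histogram(az, el, n_src, bins=(72, 36), clear=3):
    """Pick the strongest histogram cell, report its center, blank a
    neighborhood around it, and repeat once per source."""
    H, az_e, el_e = np.histogram2d(az, el, bins=bins,
                                   range=[[-180, 180], [-90, 90]])
    doas = []
    for _ in range(n_src):
        i, j = np.unravel_index(np.argmax(H), H.shape)
        doas.append((0.5 * (az_e[i] + az_e[i + 1]),
                     0.5 * (el_e[j] + el_e[j + 1])))
        H[max(i - clear, 0):i + clear + 1,
          max(j - clear, 0):j + clear + 1] = 0   # suppress this peak
    return doas
```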
@InProceedings{7760494,
  author = {S. Leglaive and R. Badeau and G. Richard},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Autoregressive moving average modeling of late reverberation in the frequency domain},
  year = {2016},
  pages = {1478-1482},
  abstract = {In this paper, the late part of a room response is modeled in the frequency domain as a complex Gaussian random process. The autocovariance function (ACVF) and power spectral density (PSD) are theoretically defined from the exponential decay of the late reverberation power. Furthermore, we show that the ACVF and PSD are accurately parametrized by an autoregressive moving average (ARMA) model. This leads to a new generative model of late reverberation in the frequency domain. The ARMA parameters are easily estimated from the theoretical ACVF. The statistical characterization is consistent with empirical results on simulated and real data. This model could be used to incorporate priors in audio source separation and dereverberation.},
  keywords = {Gaussian processes;reverberation;autoregressive moving average modeling;late reverberation;frequency domain;room response;complex Gaussian random process;autocovariance function;ACVF;power spectral density;PSD;exponential decay;reverberation power;ARMA;audio source separation;Reverberation;Autoregressive processes;Mathematical model;Microphones;Frequency-domain analysis;Random processes;Statistical room acoustics;late reverberation;Gaussian random process;autoregressive moving average model},
  doi = {10.1109/EUSIPCO.2016.7760494},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255085.pdf},
}
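The link between exponential energy decay and the across-frequency ACVF is easy to check numerically: for a late-reverb model h(t) = g(t)exp(-delta*t) with g white, the normalized ACVF across frequency is the Lorentzian 2*delta / (2*delta + j*dw). The script below is a worked check of that model under our synthetic setup, not the paper's ARMA fit.

```python
import numpy as np

fs, rt60, n_lags, n_avg = 16000, 0.5, 48, 500
delta = 3 * np.log(10) / rt60                    # energy decay rate
t = np.arange(int(fs * rt60)) / fs
rng = np.random.default_rng(0)

acvf = np.zeros(n_lags, complex)
for _ in range(n_avg):                            # average over realizations
    h = rng.standard_normal(len(t)) * np.exp(-delta * t)
    H = np.fft.rfft(h)
    acvf += np.array([np.vdot(H[:len(H) - k], H[k:]) for k in range(n_lags)])
acvf /= acvf[0].real

dw = 2 * np.pi * fs / len(h) * np.arange(n_lags)  # frequency-lag spacing, rad/s
theory = 2 * delta / (2 * delta + 1j * dw)        # Lorentzian ACVF
print(np.abs(acvf[:6]).round(3))
print(np.abs(theory[:6]).round(3))
```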
@InProceedings{7760495,
  author = {J. Deguignet and A. Ferrari and D. Mary and C. Ferrari},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Distributed multi-frequency image reconstruction for radio-interferometry},
  year = {2016},
  pages = {1483-1487},
  abstract = {The advent of enhanced technologies in radio interferometry and the perspective of the SKA telescope bring new challenges in image reconstruction. One of these challenges is the spatio-spectral reconstruction of large (terabytes) data cubes with high fidelity. This contribution proposes an alternative implementation of one such 3D prototype algorithm, MUFFIN (MUlti-Frequency image reconstruction For radio INterferometry), which combines spatial and spectral analysis priors. Using a recently proposed primal-dual algorithm, this new version of MUFFIN allows a parallel implementation where computationally intensive steps are split by spectral channels. This parallelization allows the implementation of computationally demanding translation-invariant wavelet transforms (IUWT), as opposed to the union of bases used previously. This alternative implementation is important as it opens the possibility of comparing these efficient dictionaries, and others, in spatio-spectral reconstruction. Numerical results show that the IUWT-based version can be successfully implemented at large scale with performance comparable to the union of bases.},
  keywords = {astronomical image processing;image reconstruction;radiotelescopes;radiowave interferometry;spectral analysis;wavelet transforms;SKA telescope;spatio-spectral reconstruction;MUFFIN;3D prototype algorithm;distributed multifrequency image reconstruction for radio INterferometry;spectral analysis;spatial analysis;primal dual algorithm;invariant wavelet transform;Image reconstruction;Signal processing algorithms;Inverse problems;Optimization;Transforms;Signal processing;Algorithm design and analysis},
  doi = {10.1109/EUSIPCO.2016.7760495},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252317.pdf},
}
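The translation-invariant transform in question can be sketched as the classic a-trous starlet (IUWT) with the B3-spline kernel; a minimal 2D decomposition, assuming SciPy for the separable convolutions:

```python
import numpy as np
from scipy.ndimage import convolve1d

def iuwt(image, n_scales=4):
    """Isotropic undecimated wavelet transform (starlet): a-trous
    convolutions with the B3-spline kernel; detail planes are the
    differences of successive smooths. Returns [d1, ..., dJ, smooth];
    summing all planes reconstructs the image exactly."""
    kernel = np.array([1, 4, 6, 4, 1]) / 16.0
    planes, smooth = [], image.astype(float)
    for j in range(n_scales):
        k = np.zeros(4 * 2 ** j + 1)
        k[::2 ** j] = kernel                 # kernel upsampled with holes
        nxt = convolve1d(convolve1d(smooth, k, axis=0, mode='reflect'),
                         k, axis=1, mode='reflect')
        planes.append(smooth - nxt)
        smooth = nxt
    return planes + [smooth]
```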
@InProceedings{7760496,
  author = {I. Utlu and S. S. Kozat},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Distributed adaptive filtering with reduced communication load},
  year = {2016},
  pages = {1488-1492},
  abstract = {We propose novel algorithms for distributed processing in applications constrained by available communication resources, using diffusion strategies that achieve up to three orders-of-magnitude reduction in communication load on the network, while delivering equal performance with respect to the state of the art. After computation of local estimates, the information is diffused among processing elements (or nodes) non-uniformly in time by conditioning the information transfer on level-crossings of the diffused parameter, resulting in a greatly reduced communication requirement. We provide the mean stability analysis of our algorithms, and illustrate the gain in communication efficiency compared to other reduced-communication distributed estimation schemes.},
  keywords = {adaptive filters;distributed processing;estimation theory;distributed adaptive filtering;reduced communication load;distributed processing;three orders-of-magnitude reduction;mean stability analysis;reduced-communication distributed estimation schemes;Signal processing algorithms;Stability analysis;Algorithm design and analysis;Estimation;Quantization (signal);Europe},
  doi = {10.1109/EUSIPCO.2016.7760496},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252085.pdf},
}
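A sketch of level-crossing-gated diffusion LMS in the spirit of this abstract: nodes re-broadcast their estimate only when some coefficient has moved by more than a threshold since the last transmission, and neighbours combine the cached values. The threshold, step size and combination rule below are illustrative, not the paper's exact scheme.

```python
import numpy as np

def diffusion_lms_lc(X, d, A, mu=0.01, eps=0.05, n_taps=4):
    """Adapt-then-combine diffusion LMS over N nodes with
    level-crossing-triggered transmissions. X, d: (N, T) regressor and
    desired signals; A: (N, N) combination matrix with rows summing
    to 1. Returns final estimates and the transmission count."""
    N = A.shape[0]
    w = np.zeros((N, n_taps))            # local estimates
    shared = np.zeros_like(w)            # last broadcast values
    tx = 0
    for n in range(n_taps, X.shape[1]):
        for k in range(N):               # adapt
            u = X[k, n - n_taps:n][::-1]
            e = d[k, n] - w[k] @ u
            w[k] = w[k] + mu * e * u
        for k in range(N):               # transmit only on level crossing
            if np.max(np.abs(w[k] - shared[k])) > eps:
                shared[k] = w[k].copy()
                tx += 1
        combined = A @ shared            # combine cached neighbour values,
        for k in range(N):               # keeping the node's own fresh w[k]
            w[k] = A[k, k] * w[k] + combined[k] - A[k, k] * shared[k]
    return w, tx
```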
@InProceedings{7760497,
  author = {M. Rabbat and M. Coates and S. Blouin},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Graph Laplacian distributed particle filtering},
  year = {2016},
  pages = {1493-1497},
  abstract = {We address the problem of designing a distributed particle filter for tracking one or more targets using a sensor network. We propose a novel approach for reducing the communication overhead involved in the data fusion step. The approach uses graph-based signal processing to construct a transform of the joint log likelihood values of the particles. This transform is adaptive to particle locations and in many cases leads to a parsimonious representation, so that the joint likelihood values of all particles can be accurately approximated using only a few transform coefficients. The proposed particle filter uses gossip to perform distributed, approximate computation of the transform coefficients. Numerical experiments highlight the potential of the proposed approach to provide accurate tracks with reduced communication overhead.},
  keywords = {graph theory;Laplace transforms;particle filtering (numerical methods);sensor fusion;target tracking;graph Laplacian distributed particle filtering;target tracking;sensor network;communication overhead;data fusion;graph-based signal processing;joint log likelihood values;particle locations;parsimonious representation;transform coefficients;Transforms;Target tracking;Laplace equations;Symmetric matrices;Signal processing;Atmospheric measurements;Particle measurements},
  doi = {10.1109/EUSIPCO.2016.7760497},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255849.pdf},
}
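The transform itself can be sketched centrally: build a kNN graph over particle positions, take the Laplacian eigenbasis, and keep a few coefficients of the log-likelihood signal. The gossip-based distributed computation is not reproduced, and the graph construction (kNN with a Gaussian kernel) is our assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def laplacian_compress(particles, log_lik, k=8, n_coeffs=10):
    """Graph-Fourier compression of per-particle log-likelihoods:
    smooth signals on the particle graph concentrate on the first
    Laplacian eigenvectors, so a few coefficients suffice."""
    n = len(particles)
    tree = cKDTree(particles)
    dist, idx = tree.query(particles, k + 1)      # includes self at col 0
    W = np.zeros((n, n))
    sig2 = np.median(dist[:, 1:]) ** 2
    for i in range(n):
        for d_ij, j in zip(dist[i, 1:], idx[i, 1:]):
            W[i, j] = W[j, i] = np.exp(-d_ij ** 2 / sig2)
    L = np.diag(W.sum(1)) - W                     # combinatorial Laplacian
    _, U = np.linalg.eigh(L)                      # eigenbasis, ascending
    coeffs = U.T @ log_lik
    approx = U[:, :n_coeffs] @ coeffs[:n_coeffs]  # parsimonious rebuild
    return coeffs[:n_coeffs], approx
```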
@InProceedings{7760498,
  author = {S. Scardapane and M. Scarpiniti and D. Comminiello and A. Uncini},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Diffusion spline adaptive filtering},
  year = {2016},
  pages = {1498-1502},
  abstract = {Diffusion adaptation (DA) algorithms allow a network of agents to collectively estimate a parameter vector by jointly minimizing the sum of their local cost functions. This is achieved by interleaving local update steps with `diffusion' steps, in which each agent combines information with that of its neighbors. In this paper, we propose a novel class of nonlinear diffusion filters, based on the recently proposed spline adaptive filter (SAF). A SAF learns nonlinear models by local interpolating polynomials, with a small overhead with respect to linear filters. This arises from the fact that only a small subset of the parameters of the nonlinear component is adapted at every time instant. By applying ideas from the DA framework, in this paper we derive a diffused version of the SAF, denoted as D-SAF. Experimental evaluations show that the D-SAF is able to robustly learn the underlying nonlinear model, with a significant gain compared to a non-cooperative solution.},
  keywords = {adaptive filters;diffusion;interpolation;minimisation;nonlinear filters;splines (mathematics);diffusion spline adaptive filtering;diffusion adaptation;parameter vector estimation;local cost function sum minimization;interleaving local update step;nonlinear diffusion filter;local interpolating polynomial;DA framework;D-SAF;diffused version of the SAF;Splines (mathematics);Signal processing algorithms;Standards;Adaptation models;Interpolation;Signal processing;Europe},
  doi = {10.1109/EUSIPCO.2016.7760498},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256101.pdf},
}
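A minimal Wiener-type SAF node, the building block a diffusion scheme would combine across neighbours, can be sketched with a Catmull-Rom spline nonlinearity after a linear FIR stage. Initialization, step sizes and knot spacing are assumptions; the D-SAF diffusion step (averaging w and the control points q over neighbours) is not shown.

```python
import numpy as np

CR = 0.5 * np.array([[-1, 3, -3, 1],
                     [2, -5, 4, -1],
                     [-1, 0, 1, 0],
                     [0, 2, 0, 0]])          # Catmull-Rom basis matrix

class SAF:
    """Wiener-type spline adaptive filter: y = spline(w @ x), with
    both the FIR taps w and the local spline control points adapted
    by stochastic gradient on the instantaneous error."""
    def __init__(self, n_taps=8, n_knots=23, dx=0.2, mu_w=0.01, mu_q=0.01):
        self.w = np.zeros(n_taps); self.w[0] = 1.0
        self.q = (np.arange(n_knots) - n_knots // 2) * dx   # identity init
        self.dx, self.mu_w, self.mu_q = dx, mu_w, mu_q

    def step(self, x, d):
        s = self.w @ x
        z = s / self.dx + len(self.q) // 2       # knot coordinate
        span = int(np.clip(np.floor(z), 1, len(self.q) - 3))
        u = z - span                              # local abscissa in [0, 1)
        uv = np.array([u**3, u**2, u, 1.0])
        duv = np.array([3*u**2, 2*u, 1.0, 0.0]) / self.dx
        qi = self.q[span - 1: span + 3]           # 4 active control points
        e = d - uv @ CR @ qi
        self.w += self.mu_w * e * (duv @ CR @ qi) * x   # chain rule via slope
        self.q[span - 1: span + 3] += self.mu_q * e * (CR.T @ uv)
        return e
```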
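To make the adapt-and-combine structure concrete, here is a minimal Python/NumPy sketch of a spline-type adaptive filter whose nonlinearity is a piecewise-linear lookup table, a deliberate simplification of the cubic spline interpolation used by the actual SAF/D-SAF; the knot grid, step sizes, and combination matrix are illustrative assumptions.

import numpy as np

def saf_step(w, q, x, d, x0=-2.0, dx=0.25, mu_w=0.01, mu_q=0.02):
    """One LMS step of a simplified spline adaptive filter:
    linear filter w followed by a piecewise-linear nonlinearity with knots q."""
    u = w @ x
    i = int(np.clip(np.floor((u - x0) / dx), 0, len(q) - 2))
    t = (u - x0) / dx - i                      # local abscissa in the active span
    y = (1 - t) * q[i] + t * q[i + 1]          # interpolated output
    e = d - y
    w = w + mu_w * e * (q[i + 1] - q[i]) / dx * x   # chain rule through the spline
    q[i] += mu_q * e * (1 - t)                 # only the two active knots adapt
    q[i + 1] += mu_q * e * t
    return w, q, e

def combine(params, A):
    """Diffusion step: each node averages its stacked parameters (rows) with
    its neighbors' using combination weights A (rows summing to 1)."""
    return A @ params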
Detection over diffusion networks: Asymptotic tools for performance prediction and simulation. Matta, V.; Braca, P.; Marano, S.; and Sayed, A. H. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1503-1507, Aug 2016.
@InProceedings{7760499,\n  author = {V. Matta and P. Braca and S. Marano and A. H. Sayed},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Detection over diffusion networks: Asymptotic tools for performance prediction and simulation},\n  year = {2016},\n  pages = {1503-1507},\n  abstract = {Exploiting recent progress [1]-[4] in the characterization of the detection performance of diffusion strategies over adaptive multi-agent networks, i) we present two theoretical approximations, one based on asymptotic normality and the other based on the theory of exact asymptotics; and ii) we develop an efficient simulation method by tailoring the importance sampling technique to diffusion adaptation. We show that these theoretical and experimental tools complement each other well, with their combination offering a substantial advance for a reliable quantitative detection-performance assessment. The analysis provides insight into the interplay between the network topology, the combination weights, and the inference performance, revealing the universal behavior of diffusion-based detectors over adaptive networks.},\n  keywords = {approximation theory;telecommunication network topology;adaptive multi-agent networks;diffusion strategies;asymptotic normality;sampling technique;network topology;Error probability;Steady-state;Adaptive systems;Monte Carlo methods;Limiting;Random variables;Europe;Distributed detection;adaptive network;diffusion;large deviations;exact asymptotics;importance sampling},\n  doi = {10.1109/EUSIPCO.2016.7760499},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256137.pdf},\n}\n\n
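The simulation tool the paper tailors is importance sampling; the toy sketch below illustrates only the generic mechanism on the simplest rare-event problem, estimating P(mean of n i.i.d. N(0,1) samples > gamma) by exponentially tilting the sampling distribution to N(gamma, 1) and reweighting. The parameters are illustrative, and this is not the paper's diffusion-specific estimator.

import numpy as np

def is_estimate(n=50, gamma=0.8, n_mc=10_000, seed=0):
    """Importance-sampling estimate of P(mean of n iid N(0,1) > gamma)."""
    rng = np.random.default_rng(seed)
    theta = gamma                                 # exponential tilt centred on the event
    x = rng.normal(theta, 1.0, size=(n_mc, n))    # draw from the tilted law N(theta, 1)
    logw = -theta * x.sum(1) + n * theta**2 / 2   # per-trial likelihood ratio p/q
    hit = x.mean(1) > gamma
    return np.mean(hit * np.exp(logw))

print(is_estimate())   # crude Monte Carlo would need roughly 1/p trials; IS needs far fewer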
Bayesian estimation of unknown parameters over networks. Djurić, P. M.; and Dedecius, K. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1508-1512, Aug 2016.
@InProceedings{7760500,\n  author = {P. M. Djurić and K. Dedecius},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian estimation of unknown parameters over networks},\n  year = {2016},\n  pages = {1508-1512},\n  abstract = {We address the problem of sequential parameter estimation over networks using the Bayesian methodology. Each node sequentially acquires independent observations, where all the observations in the network contain signal(s) with unknown parameters. The nodes aim at obtaining accurate estimates of the unknown parameters and to that end, they collaborate with their neighbors. They communicate to the neighbors their latest posterior distributions of the unknown parameters. The nodes fuse the received information by using mixtures with weights proportional to the predictive distributions obtained from the respective node posteriors. Then they update the fused posterior using the next acquired observation, and the process repeats. We demonstrate the performance of the proposed approach with computer simulations and confirm its validity.},\n  keywords = {Bayes methods;mixture models;network theory (graphs);parameter estimation;signal processing;unknown parameter over network Bayesian estimation;sequential parameter estimation over network problem;posterior distribution;mixture model;computer simulation;Nickel;Bayes methods;Europe;Estimation;Parameter estimation;Fuses;parameter estimation over networks;Bayes theory;mixture models;model averaging},\n  doi = {10.1109/EUSIPCO.2016.7760500},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256243.pdf},\n}\n\n
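For a concrete picture of the fusion rule, the sketch below works through the scalar Gaussian case: a node weights its neighbors' posteriors over an unknown mean by the predictive density they assign to the newest observation, moment-matches the resulting mixture to a single Gaussian, and then performs a standard Bayes update. The Gaussian family, known observation noise, and moment matching are simplifying assumptions made here for illustration.

import numpy as np
from scipy.stats import norm

def fuse_and_update(mus, vars_, y, obs_var=1.0):
    """mus, vars_: neighbors' posterior means/variances for the unknown mean theta.
    y: the node's newest observation, y ~ N(theta, obs_var)."""
    # weights proportional to each neighbor posterior's predictive density of y
    w = norm.pdf(y, loc=mus, scale=np.sqrt(vars_ + obs_var))
    w = w / w.sum()
    # moment-match the weighted mixture of posteriors to a single Gaussian
    m = np.sum(w * mus)
    v = np.sum(w * (vars_ + (mus - m) ** 2))
    # conjugate Bayes update of N(m, v) with the new observation
    v_post = 1.0 / (1.0 / v + 1.0 / obs_var)
    m_post = v_post * (m / v + y / obs_var)
    return m_post, v_post

print(fuse_and_update(np.array([0.9, 1.1, 2.0]), np.array([0.5, 0.3, 1.0]), y=1.2))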
Toward privacy-preserving diffusion strategies for adaptation and learning over networks. El Khalil Harrane, I.; Flamary, R.; and Richard, C. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1513-1517, Aug 2016.
@InProceedings{7760501,\n  author = {I. {El Khalil Harrane} and R. Flamary and C. Richard},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Toward privacy-preserving diffusion strategies for adaptation and learning over networks},\n  year = {2016},\n  pages = {1513-1517},\n  abstract = {Distributed optimization makes it possible to address inference problems in a decentralized manner over networks, where agents can exchange information with their neighbors to improve their local estimates. Privacy preservation has become an important issue in many data mining applications. It aims at protecting the privacy of individual data in order to prevent the disclosure of sensitive information during the learning process. In this paper, we derive a diffusion strategy of the LMS type to solve distributed inference problems in the case where agents are also interested in preserving the privacy of the local measurements. We carry out a detailed mean and mean-square error analysis of the algorithm. Simulations are provided to check the theoretical findings.},\n  keywords = {data mining;data privacy;inference mechanisms;learning systems;mean square error methods;privacy-preserving diffusion;distributed optimization;information exchange;data mining;data privacy;sensitive information;learning process;LMS type diffusion;distributed inference problems;mean square error analysis;Data privacy;Signal processing algorithms;Algorithm design and analysis;Privacy;Distributed databases;Inference algorithms},\n  doi = {10.1109/EUSIPCO.2016.7760501},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256275.pdf},\n}\n\n
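A minimal adapt-then-combine (ATC) diffusion LMS is sketched below in Python/NumPy; to evoke the privacy angle, each node perturbs the intermediate estimate it shares with its neighbors. The perturbation mechanism, step size, and combination weights are illustrative assumptions, not the scheme analyzed in the paper.

import numpy as np

def atc_diffusion_lms(X, d, A, mu=0.05, noise_std=0.0, n_iter=200, seed=0):
    """X[k]: (T, M) regressors at node k; d[k]: (T,) observations (T >= n_iter);
    A: (K, K) left-stochastic combination matrix (columns sum to 1)."""
    rng = np.random.default_rng(seed)
    K, M = len(X), X[0].shape[1]
    w = np.zeros((K, M))
    for t in range(n_iter):
        # adapt: local LMS step at every node
        psi = np.array([w[k] + mu * (d[k][t] - X[k][t] @ w[k]) * X[k][t]
                        for k in range(K)])
        # privacy: perturb the intermediate estimates before sharing them
        shared = psi + noise_std * rng.standard_normal(psi.shape)
        # combine: convex combination of neighbors' (perturbed) estimates
        w = np.array([sum(A[l, k] * shared[l] for l in range(K)) for k in range(K)])
    return w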
Bayesian estimation for the local assessment of the multifractality parameter of multivariate time series. Combrexelle, S.; Wendt, H.; Altmann, Y.; Tourneret, J.-Y.; McLaughlin, S.; and Abry, P. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1518-1522, Aug 2016.
@InProceedings{7760502,\n  author = {S. Combrexelle and H. Wendt and Y. Altmann and J.-Y. Tourneret and S. McLaughlin and P. Abry},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian estimation for the local assessment of the multifractality parameter of multivariate time series},\n  year = {2016},\n  pages = {1518-1522},\n  abstract = {Multifractal analysis (MF) is a widely used signal processing tool that enables the study of scale invariance models. Classical MF assumes homogeneous MF properties, which cannot always be guaranteed in practice. Yet, the local estimation of MF parameters has barely been considered due to the challenging statistical nature of MF processes (non-Gaussian, intricate dependence), requiring large sample sizes. The present work addresses this limitation and proposes a Bayesian estimator for local MF parameters of multivariate time series. The proposed Bayesian model builds on a recently introduced statistical model for leaders (i.e., specific multiresolution quantities designed for MF analysis purposes) that enabled the Bayesian estimation of MF parameters and extends it to multivariate non-overlapping time windows. It is formulated using spatially smoothing gamma Markov random field priors that counteract the large statistical variability of estimates for short time windows. Numerical simulations demonstrate that the proposed algorithm significantly outperforms current state-of-the-art estimators.},\n  keywords = {Bayes methods;Markov processes;numerical analysis;signal processing;time series;Bayesian estimation;local assessment;multifractality parameter;multivariate time series;multifractal analysis;signal processing tool;scale invariance models;homogeneous MF properties;statistical model;specific multiresolution quantity;multivariate nonoverlapping time windows;spatially smoothing gamma Markov random field;statistical variability;short time windows;numerical simulations;Time series analysis;Bayes methods;Estimation;Fractals;Analytical models;Biological system modeling;Markov processes;Multifractal analysis;Bayesian estimation;Multivariate time series;Whittle likelihood;GMRF},\n  doi = {10.1109/EUSIPCO.2016.7760502},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570244520.pdf},\n}\n\n
A simple counting estimator of network agents' behaviors: Asymptotics. Marano, S.; and Willett, P. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1523-1527, Aug 2016.
@InProceedings{7760503,\n  author = {S. Marano and P. Willett},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {A simple counting estimator of network agents' behaviors: Asymptotics},\n  year = {2016},\n  pages = {1523-1527},\n  abstract = {Recent works address the problem of estimating agents' behaviors in complex networks, of which social networks are a prominent example. Many of the proposed techniques work, but at the cost of a substantial computational complexity that cannot be afforded in big data real-time analysis. This raises the question of whether a very simple nonparametric counting estimator works in practical problems. We propose such an estimator and investigate its asymptotic properties for a large number of agents N and/or a large network observation time T. The asymptotic optimality of the estimator is proven and computer experiments are provided to assess its performance for finite values of N and T.},\n  keywords = {Big Data;complex networks;computational complexity;data analysis;network theory (graphs);social networking (online);network agent behaviors;agent behavior estimation;complex networks;social networks;substantial computational complexity;Big Data real-time analysis;asymptotic optimality;Computers;Random variables;Europe;Limiting;Electronic mail;Complex networks},\n  doi = {10.1109/EUSIPCO.2016.7760503},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251534.pdf},\n}\n\n
Determining the number of signals correlated across multiple data sets for small sample support. Song, Y.; Hasija, T.; Schreier, P. J.; and Ramírez, D. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1528-1532, Aug 2016.
@InProceedings{7760504,\n  author = {Y. Song and T. Hasija and P. J. Schreier and D. Ramírez},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Determining the number of signals correlated across multiple data sets for small sample support},\n  year = {2016},\n  pages = {1528-1532},\n  abstract = {This paper presents a detection scheme for determining the number of signals that are correlated across multiple data sets when the sample size is small compared to the dimensions of the data sets. To accommodate the sample-poor regime, we decouple the problem into several independent two-channel order-estimation problems that may be solved separately by a combination of principal component analysis (PCA) and canonical correlation analysis (CCA). Since the signals that are correlated across all data sets must be a subset of the signals that are correlated between any pair of data sets, we keep only the correlated signals for each pair of data sets. Then, a criterion inspired by a traditional information-theoretic criterion is applied to estimate the number of signals correlated across all data sets. The performance of the proposed scheme is verified by simulations.},\n  keywords = {correlation theory;principal component analysis;signal processing;small sample support;detection scheme;independent two-channel order-estimation problems;principal component analysis;canonical correlation analysis;CCA;PCA;information-theoretic criterion;Correlation;Covariance matrices;Principal component analysis;Signal processing;Data models;Detectors;Europe;Canonical correlation analysis;model-order selection;multiple data fields;principal component analysis;small sample support},\n  doi = {10.1109/EUSIPCO.2016.7760504},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255707.pdf},\n}\n\n
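The pairwise PCA-plus-CCA step can be sketched directly in NumPy: reduce each data set with PCA, estimate canonical correlations from the whitened cross-covariance, and count the correlations above a threshold, taking the minimum over pairs since signals correlated across all sets are a subset of each pairwise set. The detector in the paper replaces this hard threshold with an information-theoretic criterion; the dimensions and threshold below are illustrative.

import numpy as np

def pca_reduce(X, r):
    """Rows of X are variables, columns are samples; keep r principal components."""
    Xc = X - X.mean(1, keepdims=True)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :r].T @ Xc

def canonical_correlations(X, Y):
    """Sample canonical correlations between two (reduced) data sets."""
    T = X.shape[1]
    Cxx = X @ X.T / T; Cyy = Y @ Y.T / T; Cxy = X @ Y.T / T
    Wx = np.linalg.cholesky(np.linalg.inv(Cxx))   # whitening factors
    Wy = np.linalg.cholesky(np.linalg.inv(Cyy))
    return np.linalg.svd(Wx.T @ Cxy @ Wy, compute_uv=False)

def count_correlated(Xs, r=5, thresh=0.5):
    """Simple estimate of the number of signals correlated across all data sets."""
    reduced = [pca_reduce(X, r) for X in Xs]
    counts = []
    for i in range(len(reduced)):
        for j in range(i + 1, len(reduced)):
            k = canonical_correlations(reduced[i], reduced[j])
            counts.append(int(np.sum(k > thresh)))
    return min(counts)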
Improved resolution of chromatographic peak analysis using multi-snapshot imaging. Hopgood, J. R. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1533-1537, Aug 2016.
@InProceedings{7760505,\n  author = {J. R. Hopgood},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Improved resolution of chromatographic peak analysis using multi-snapshot imaging},\n  year = {2016},\n  pages = {1533-1537},\n  abstract = {Snapshot imaging has a number of advantages in automated gel electrophoresis compared with the finish-line method in capillary electrophoresis, although at the expense of resolution. This paper presents a novel signal processing algorithm enabling a multi-capture imaging modality which improves resolution. The approach takes multiple snapshots as macromolecules are electrophoresed. Peaks from later snapshots have higher resolution but poor signal-to-noise ratio (SNR), while peaks from earlier snapshots have lower resolution but better SNR. Signals at different capture-times are related by a scale-in-separation, amplitude scaling, and an arbitrary shift. The multiple captures are realigned and fused together using least-squares estimation and a physically inspired signal model. Since partial waveforms are observed as the chromatographic peaks exit the sensor's field-of-view, this is accounted for in the realignment algorithm. The proposed technique yields improved resolution, improved fragment concentration and size estimates, and allows the removal of static background noise.},\n  keywords = {chromatography;image resolution;least squares approximations;improved resolution;chromatographic peak analysis;multisnapshot imaging;signal processing algorithm;multicapture imaging modality;least-squares estimation;Imaging;Image resolution;Signal resolution;Standards;Mathematical model;Signal processing algorithms;Chromatography;snapshot imaging;finish-line method;least-squares estimation;signal modelling},\n  doi = {10.1109/EUSIPCO.2016.7760505},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256232.pdf},\n}\n\n
Inexact alternating optimization for phase retrieval with outliers. Qian, C.; Fu, X.; Sidiropoulos, N. D.; and Huang, L. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1538-1542, Aug 2016.
@InProceedings{7760506,\n  author = {C. Qian and X. Fu and N. D. Sidiropoulos and L. Huang},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Inexact alternating optimization for phase retrieval with outliers},\n  year = {2016},\n  pages = {1538-1542},\n  abstract = {Most of the available phase retrieval algorithms were explicitly or implicitly developed under a Gaussian noise model, using least squares (LS) formulations. However, in some applications of phase retrieval, an unknown subset of the measurements can be seriously corrupted by outliers, where LS is not robust and will degrade the estimation performance severely. This paper presents an Alternating Iterative Reweighted Least Squares (AIRLS) method for phase retrieval in the presence of such outliers. The AIRLS employs two-block alternating optimization to retrieve the signal through solving an ℓp-norm minimization problem, where 0 < p < 2. The Cramér-Rao bound (CRB) for Laplacian as well as Gaussian noise is derived for the measurement model considered, and simulations show that the proposed approach outperforms state-of-the-art algorithms in heavy-tailed noise.},\n  keywords = {Gaussian noise;iterative methods;least squares approximations;signal reconstruction;inexact alternating optimization;phase retrieval algorithms;Gaussian noise model;alternating iterative reweighted least squares method;ℓp-norm minimization problem;Cramer-Rao bound;Laplacian noise;Signal processing algorithms;Signal to noise ratio;Gaussian noise;Laplace equations;Atmospheric modeling;Robustness;Europe;Phase retrieval;iterative reweighted least squares;Cramér-Rao bound (CRB)},\n  doi = {10.1109/EUSIPCO.2016.7760506},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256343.pdf},\n}\n\n
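A generic two-block sketch of ℓp phase retrieval by iteratively reweighted least squares is given below: one block re-estimates the missing phases, the other solves a weighted LS problem whose weights implement the ℓp cost. It follows the general AIRLS recipe described in the abstract but is not the authors' exact algorithm; the smoothing constant eps and the random initialization are assumptions.

import numpy as np

def airls_sketch(A, b, p=1.0, n_iter=100, eps=1e-6, seed=0):
    """Rough lp phase retrieval: measurements b ~ |A x|, possibly with outliers."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    for _ in range(n_iter):
        # block 1: optimal phases given the current signal estimate
        z = b * np.exp(1j * np.angle(A @ x))
        # block 2: IRLS weights turn the lp cost into a weighted LS problem
        r = A @ x - z
        w = (np.abs(r) ** 2 + eps) ** (p / 2 - 1)
        Aw = A * w[:, None]
        x = np.linalg.solve(A.conj().T @ Aw, A.conj().T @ (w * z))
    return x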
Bayesian estimation of polynomial moving average models with unknown degree of nonlinearity. Karakuş, O.; Kuruoğlu, E. E.; and Altınkaya, M. A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1543-1547, Aug 2016.
@InProceedings{7760507,\n  author = {O. Karakuş and E. E. Kuruoğlu and M. A. Altınkaya},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian estimation of polynomial moving average models with unknown degree of nonlinearity},\n  year = {2016},\n  pages = {1543-1547},\n  abstract = {Various real world phenomena such as optical communication channels, power amplifiers and movement of sea vessels exhibit nonlinear characteristics. The nonlinearity degree of such systems is generally assumed to be known. In this paper, we contribute to the literature with a Bayesian estimation method based on reversible jump Markov chain Monte Carlo (RJMCMC) for polynomial moving average (PMA) models. Our use of RJMCMC is novel and unique in the way of estimating both model memory and the nonlinearity degree. This offers greater flexibility to characterize the models which reflect different nonlinear characters of the measured data. In this study, we aim to demonstrate the potential of RJMCMC in the identification of PMA models, owing to its ability to explore nonlinear spaces of different degrees by sampling.},\n  keywords = {Bayes methods;estimation theory;Markov processes;Monte Carlo methods;moving average processes;polynomials;signal sampling;nonlinear characteristics;reversible jump Markov chain Monte Carlo;RJMCMC;polynomial moving average;PMA model;nonlinear space sampling;sea vessels;power amplifiers;optical communication channels;unknown degree of nonlinearity;polynomial moving average model;Bayesian estimation;Estimation;Bayes methods;Data models;Mathematical model;Europe;Signal processing;Markov processes;Polynomial MA;Nonlinearity degree estimation;Reversible Jump MCMC},\n  doi = {10.1109/EUSIPCO.2016.7760507},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252327.pdf},\n}\n\n
Multi-scale image denoising based on goodness of fit (GOF) tests. ur Rehman, N.; Naveed, K.; Ehsan, S.; and McDonald-Maier, K. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1548-1552, Aug 2016.
@InProceedings{7760508,\n  author = {N. {ur Rehman} and K. Naveed and S. Ehsan and K. McDonald-Maier},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-scale image denoising based on goodness of fit (GOF) tests},\n  year = {2016},\n  pages = {1548-1552},\n  abstract = {A novel image denoising method based on discrete wavelet transform (DWT) and goodness of fit (GOF) statistical tests employing empirical distribution function (EDF) statistics is proposed. We formulate the denoising problem as a hypothesis testing problem with a null hypothesis corresponding to the presence of noise, and an alternative hypothesis representing the presence of only the desired signal in the image samples being tested. The decision process involves GOF tests, employing statistics based on EDF, being applied directly on multiple image scales obtained from DWT. We evaluate the performance of the proposed method against the state of the art in wavelet image denoising through extensive experiments performed on standard images.},\n  keywords = {discrete wavelet transforms;image denoising;image representation;image sampling;multiscale image denoising;goodness of fit tests;GOF statistical tests;discrete wavelet transform;DWT;empirical distribution function statistics;EDF statistics;hypothesis testing problem;image samples;Optical fibers;Discrete wavelet transforms;Noise reduction;Image denoising;Two dimensional displays;Noise measurement;image denoising;wavelet transform;goodness of fit;empirical distribution function},\n  doi = {10.1109/EUSIPCO.2016.7760508},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252111.pdf},\n}\n\n
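A minimal sketch of the idea, assuming the PyWavelets (pywt) package and using the Kolmogorov-Smirnov statistic as a stand-in for the paper's EDF-based GOF test: wavelet detail coefficients are tested block-wise against the pure-noise null hypothesis, and blocks where the null is not rejected are discarded. The wavelet, window size, noise estimator, and significance level are illustrative assumptions.

import numpy as np
import pywt
from scipy.stats import kstest

def gof_denoise(img, wavelet="db4", level=2, alpha=0.05, win=8):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # robust noise estimate from the finest diagonal subband
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for bands in coeffs[1:]:
        new_bands = []
        for band in bands:
            band = band.copy()
            for i in range(0, band.shape[0], win):
                for j in range(0, band.shape[1], win):
                    blk = band[i:i + win, j:j + win]
                    # H0: the block contains only N(0, sigma) noise
                    if kstest(blk.ravel(), "norm", args=(0, sigma)).pvalue > alpha:
                        band[i:i + win, j:j + win] = 0.0   # keep only signal blocks
            new_bands.append(band)
        out.append(tuple(new_bands))
    return pywt.waverec2(out, wavelet)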
Least squares image estimation in the presence of drift and pixel noise. Piazzo, L.; Raguso, M. C.; Carpio, J. G.; and Altieri, B. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1553-1557, Aug 2016.
@InProceedings{7760509,\n  author = {L. Piazzo and M. C. Raguso and J. G. Carpio and B. Altieri},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Least squares image estimation in the presence of drift and pixel noise},\n  year = {2016},\n  pages = {1553-1557},\n  abstract = {We discuss Least Squares (LS) image estimation for large data in the presence of electronic noise and drift. We introduce a data model where, in addition to the electronic noise and drift, a further type of noise, termed pixel noise, is considered. This noise arises when the sampling does not take place on a regular grid and may bias the estimate if not accounted for. Based on the model, we present an efficient Alternating Least Squares (ALS) algorithm, producing the LS image estimate. Finally, we apply the ALS to the data of the Photodetector Array Camera and Spectrometer (PACS), which is an infrared photometer onboard the European Space Agency (ESA) Herschel space telescope. In this context, we discuss the ALS implementation and complexity and present an example of the results.},\n  keywords = {cameras;image processing;least squares approximations;photodetectors;least squares image estimation;drift noise;pixel noise;LS image estimation;electronic noise;data model;regular grid;alternating least squares algorithm;ALS algorithm;photodetector array camera and spectrometer data;PACS data;onboard infrared photometer;European Space Agency Herschel space telescope;ESA Herschel space telescope;Mathematical model;Data models;Instruments;Estimation;Covariance matrices;Sensors;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760509},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252119.pdf},\n}\n\n
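The alternating least squares idea can be sketched for a simplified scan model: each time sample hits one map pixel and is offset by a slowly varying drift expanded on a small basis, and the algorithm alternates a per-pixel average (the image step) with a basis LS fit (the drift step). The polynomial drift basis and this reduced data model are illustrative simplifications of the PACS setting.

import numpy as np

def als_map_maker(y, pix, B, npix, n_iter=20):
    """y: (T,) samples; pix: (T,) index of the map pixel hit at each time;
    B: (T, q) drift basis (e.g. low-order polynomials in time)."""
    c = np.zeros(B.shape[1])
    for _ in range(n_iter):
        # image step: the LS image given the drift is the per-pixel mean of residuals
        r = y - B @ c
        hits = np.maximum(np.bincount(pix, minlength=npix), 1)
        x = np.bincount(pix, weights=r, minlength=npix) / hits
        # drift step: LS fit of the drift coefficients given the image
        c, *_ = np.linalg.lstsq(B, y - x[pix], rcond=None)
    return x, c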
Data compression for snapshot mosaic hyperspectral image sensors. Tzagkarakis, G.; Charle, W.; and Tsakalides, P. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1558-1562, Aug 2016.
@InProceedings{7760510,\n  author = {G. Tzagkarakis and W. Charle and P. Tsakalides},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Data compression for snapshot mosaic hyperspectral image sensors},\n  year = {2016},\n  pages = {1558-1562},\n  abstract = {Recent achievements in hyperspectral imaging (HSI) successfully demonstrated a novel snapshot mosaic sensor architecture, enabling spectral imaging in a truly compact way. Integration of this new technology in handheld devices necessitates efficient compression of HSI data. However, due to the specific mosaic structure of the acquired images, traditional compression methods tailored to full-resolution HSI data cubes fail to exploit the special spatio-spectral interrelations among the pixels. This paper introduces an efficient and computationally tractable compression technique for mosaic HSI images. Specifically, an appropriate decorrelator is constructed for exploiting the spatio-spectral redundancies among the pixels, by modeling the filter arrangement on the mosaic HSI sensor as a multiple-input multiple-output antenna array. In this way, the decorrelator depends only on the sensor and not on the data to be compressed. Comparison with state-of-the-art compression methods designed for HSI data cubes reveals that our approach achieves better reconstruction quality at lower bits-per-pixel rates.},\n  keywords = {data compression;decorrelation;hyperspectral imaging;image coding;image reconstruction;image sensors;snapshot mosaic hyperspectral image sensor;full-resolution HSI data compression;spectral imaging;handheld device;spatio-spectral interrelation;computationally tractable compression technique;spatio-spectral redundancy;multiple-input multiple-output antenna array;reconstruction quality;Image coding;Correlation;Antenna arrays;Hyperspectral imaging;Decorrelation;Sensor arrays;MIMO;Hyperspectral data;snapshot mosaic hyperspectral sensor;image compression;spatio-spectral decorrelation},\n  doi = {10.1109/EUSIPCO.2016.7760510},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252144.pdf},\n}\n\n
Compressed sensing super resolution of color images. Saafin, W.; Vega, M.; Molina, R.; and Katsaggelos, A. K. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1563-1567, Aug 2016.
@InProceedings{7760511,\n  author = {W. Saafin and M. Vega and R. Molina and A. K. Katsaggelos},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Compressed sensing super resolution of color images},\n  year = {2016},\n  pages = {1563-1567},\n  abstract = {In this work we estimate Super Resolution (SR) images from a sequence of true color Compressed Sensing (CS) observations. The red, green, blue (RGB) channels are sensed separately using a measurement matrix that can be synthesized practically. The joint optimization problem of estimating the registration parameters and the High Resolution (HR) image is transformed into a sequence of unconstrained optimization sub-problems using the Alternate Direction Method of Multipliers (ADMM). A new, simple, and accurate image registration procedure is proposed. The performed experiments show that the proposed method compares favorably to existing color CS reconstruction methods at unity zooming factor (P), obtaining very good performance when varying P and the compression factor simultaneously. The algorithm is tested on real and synthetic images.},\n  keywords = {compressed sensing;data compression;image coding;image colour analysis;image registration;image resolution;matrix algebra;optimisation;color image compressed sensing super resolution;SR image;CS observation;red,green,blue channel;RGB channel;measurement matrix;joint optimization problem;registration parameters, estimate;high resolution image;HR image;unconstrained optimization subproblem;alternate direction method of multiplier;ADMM;image registration procedure;unity zooming factor;compression factor;Image color analysis;Optimization;Image reconstruction;Europe;Compressed sensing;Image resolution;Color;Super resolution;compressed sensing;color images;image reconstruction;image enhancement},\n  doi = {10.1109/EUSIPCO.2016.7760511},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252184.pdf},\n}\n\n
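ADMM splits a CS recovery into easy sub-problems; the sketch below solves the generic problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 by alternating a ridge-like solve, a soft threshold, and a dual update. It shows only the ADMM machinery the paper builds on, not the full color super-resolution model with registration; rho and lam are illustrative.

import numpy as np

def soft(v, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Scaled-form ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (split x = z)."""
    m, n = A.shape
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # factor cached once
    Atb = A.T @ b
    z = np.zeros(n); u = np.zeros(n)
    for _ in range(n_iter):
        x = Q @ (Atb + rho * (z - u))   # quadratic sub-problem
        z = soft(x + u, lam / rho)      # l1 proximal step
        u = u + x - z                   # scaled dual ascent
    return z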
Classification of hyperspectral images using mixture of probabilistic PCA models. Kutluk, S.; Kayabol, K.; and Akan, A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1568-1572, Aug 2016.
@InProceedings{7760512,\n  author = {S. Kutluk and K. Kayabol and A. Akan},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Classification of hyperspectral images using mixture of probabilistic PCA models},\n  year = {2016},\n  pages = {1568-1572},\n  abstract = {We propose a supervised classification and dimensionality reduction method for hyperspectral images. The proposed method contains a mixture of probabilistic principal component analysis (PPCA) models. Using the PPCA in the mixture model inherently provides a dimensionality reduction. Defining the mixture model to be spatially varying, we are also able to include spatial information into the classification process. In this way, the proposed mixture model allows dimensionality reduction and spectral-spatial classification of hyperspectral image at the same time. The experimental results obtained on real hyperspectral data show that the proposed method yields better classification performance compared to state-of-the-art methods.},\n  keywords = {hyperspectral imaging;image classification;mixture models;principal component analysis;hyperspectral image classification;probabilistic PCA mixture model;supervised classification;dimensionality reduction method;probabilistic principal component analysis model;PPCA mixture model;spectral-spatial classification;Hyperspectral imaging;Principal component analysis;Mixture models;Probabilistic logic;Mathematical model;Training;hyperspectral image;probabilistic principal component analysis;dimensionality reduction;mixture models},\n  doi = {10.1109/EUSIPCO.2016.7760512},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252186.pdf},\n}\n\n
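The probabilistic PCA building block has a closed-form maximum-likelihood solution (Tipping & Bishop), which makes a per-class classifier easy to sketch: fit a PPCA Gaussian N(mu, W W^T + sigma^2 I) to each class and assign pixels by log-likelihood. This sketch deliberately drops the spatially varying mixture that is the paper's actual contribution; the latent dimension q and the use of SciPy's Gaussian logpdf are assumptions.

import numpy as np
from scipy.stats import multivariate_normal

def fit_ppca(X, q):
    """Closed-form ML PPCA fit (Tipping & Bishop) for samples in the rows of X."""
    mu = X.mean(0)
    _, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    lam = s ** 2 / len(X)                     # eigenvalues of the sample covariance
    sigma2 = lam[q:].mean()                   # noise floor: mean discarded eigenvalue
    W = Vt[:q].T * np.sqrt(np.maximum(lam[:q] - sigma2, 0.0))
    return mu, W @ W.T + sigma2 * np.eye(X.shape[1])

def classify(pixels, class_data, q=5):
    """pixels: (n, d) spectra; class_data: list of (n_c, d) training arrays."""
    models = [fit_ppca(X, q) for X in class_data]
    ll = np.stack([multivariate_normal(mu, C).logpdf(pixels) for mu, C in models])
    return ll.argmax(0)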
A DCT-based multiscale binary descriptor robust to complex brightness changes. Aslan, S.; Yamaç, M.; and Sankur, B. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1573-1577, Aug 2016.
@InProceedings{7760513,\n  author = {S. Aslan and M. Yamaç and B. Sankur},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {A DCT-based multiscale binary descriptor robust to complex brightness changes},\n  year = {2016},\n  pages = {1573-1577},\n  abstract = {Binary descriptors have been very popular in recent years. One reason is that the algorithms that use them become computationally and memory-wise efficient. Furthermore, they tend to have some inherent robustness against some geometrical variations and against various brightness changes. These changes might result from both internal factors and external factors such as the location of the light source, the viewing angle, and scene properties. In this paper, we describe a binary descriptor which proves to be robust to complex brightness changes such as gamma correction, noise and photometric distortions. The experimental results demonstrate the performance of the descriptor in object recognition and local image analysis tasks.},\n  keywords = {brightness;discrete cosine transforms;object recognition;DCT-based multiscale binary descriptor;complex brightness;geometrical variations;brightness changes;photometric distortions;gamma correction;object recognition;image analysis;Discrete cosine transforms;Brightness;Robustness;Nonlinear distortion;Lighting;Image coding;binary descriptor;robustness to photometric distortion and brightness change},\n  doi = {10.1109/EUSIPCO.2016.7760513},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252339.pdf},\n}\n\n
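The core intuition, that the signs of low-frequency AC DCT coefficients survive monotone brightness changes, can be sketched in a few lines: decimate the patch at several scales, take a 2-D DCT, drop the DC term (offset invariance), and binarize the remaining coefficients by sign (invariance to positive gain). This is the generic sign-of-DCT idea only, not the paper's exact descriptor; the scales, block size, and naive decimation are assumptions.

import numpy as np
from scipy.fft import dctn

def dct_binary_descriptor(patch, scales=(1, 2, 4), k=8):
    """Multiscale binary descriptor from signs of low-frequency DCT coefficients."""
    bits = []
    for s in scales:
        p = patch[::s, ::s].astype(float)          # crude multi-scale by decimation
        c = dctn(p, norm="ortho")[:k, :k].ravel()
        bits.append(c[1:] > 0)                     # drop DC: robust to brightness offset
    return np.concatenate(bits)

def hamming(d1, d2):
    """Descriptors are matched by Hamming distance between bit strings."""
    return np.count_nonzero(d1 != d2)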
A better metric in kernel adaptive filtering. Takeuchi, A.; Yukawa, M.; and Müller, K. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1578-1582, Aug 2016.
@InProceedings{7760514,\n  author = {A. Takeuchi and M. Yukawa and K. Müller},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {A better metric in kernel adaptive filtering},\n  year = {2016},\n  pages = {1578-1582},\n  abstract = {The metric in the reproducing kernel Hilbert space (RKHS) is known to be given by the Gram matrix (which is also called the kernel matrix). It has been reported that the metric leads to a decorrelation of the kernelized input vector because its autocorrelation matrix can be approximated by the (down scaled) squared Gram matrix subject to some condition. In this paper, we derive a better metric (a best one under the condition) based on the approximation, and present an adaptive algorithm using the metric. Although the algorithm has quadratic complexity, we present its linear-complexity version based on a selective updating strategy. Numerical examples validate the approximation in a practical scenario, and show that the proposed metric yields fast convergence and tracking performance.},\n  keywords = {adaptive filters;decorrelation;Hilbert spaces;matrix algebra;vectors;kernel adaptive filtering;reproducing kernel Hilbert space;RKHS;kernel matrix;decorrelation metric;kernelized input vector;autocorrelation matrix;squared gram matrix;adaptive algorithm;quadratic complexity;linear-complexity version;selective updating strategy;fast convergence performance;fast tracking performance;Kernel;Dictionaries;Signal processing algorithms;Hilbert space;Correlation;Eigenvalues and eigenfunctions},\n  doi = {10.1109/EUSIPCO.2016.7760514},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252039.pdf},\n}\n\n
Nonlinear blind source separation for sparse sources. Ehsandoust, B.; Rivet, B.; Jutten, C.; and Babaie-Zadeh, M. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1583-1587, Aug 2016.
@InProceedings{7760515,\n  author = {B. Ehsandoust and B. Rivet and C. Jutten and M. Babaie-Zadeh},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Nonlinear blind source separation for sparse sources},\n  year = {2016},\n  pages = {1583-1587},\n  abstract = {Blind Source Separation (BSS) is the problem of separating signals which are mixed through an unknown function from a number of observations, without any information about the mixing model. Although it has been mathematically proven that the separation can be done when the mixture is linear, there is no proof of the separability of nonlinearly mixed signals. Our contribution in this paper is performing nonlinear BSS for sparse sources. It is shown that, in this case, sources are separable even if the problem is under-determined (the number of observations is less than the number of source signals). However, in the most general case (when the nonlinear mixing model can be of any kind and there is no side-information about it), an unknown nonlinear transformation of each source is reconstructed. It is shown why the problem of reconstructing the exact sources is severely ill-posed and impossible to solve without any other information.},\n  keywords = {blind source separation;nonlinear blind source separation;sparse sources;nonlinearly mixed signals;nonlinear mixing model;unknown nonlinear transformation;Manifolds;Signal processing algorithms;Europe;Blind source separation;Indexes;Blind Source Separation;Independent Component Analysis;Sparse Signals;Manifold Learning},\n  doi = {10.1109/EUSIPCO.2016.7760515},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252167.pdf},\n}\n\n
Distributed kernel least squares for nonlinear regression applied to sensor networks. Shin, B.; Paul, H.; and Dekorsy, A. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1588-1592, Aug 2016.
@InProceedings{7760516,\n  author = {B. Shin and H. Paul and A. Dekorsy},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Distributed kernel least squares for nonlinear regression applied to sensor networks},\n  year = {2016},\n  pages = {1588-1592},\n  abstract = {In this paper, we address the task of distributed nonlinear regression. For this, we exploit kernel methods which can cope with nonlinear regression tasks and a consensus-based approach to derive a distributed scheme. Both techniques are combined and a distributed kernel-based least squares algorithm for nonlinear function regression is proposed. We apply our algorithm to sensor networks and the distributed estimation of diffusion fields which are known to be highly nonlinear. Performance evaluations regarding static and time-varying fields with multiple sources and arbitrary network topologies are provided showing a successful reconstruction. For the tracking of time-varying fields our proposed algorithm outperforms the state of the art.},\n  keywords = {distributed sensors;least mean squares methods;nonlinear functions;regression analysis;distributed kernel least squares;sensor networks;distributed nonlinear regression;nonlinear function regression;distributed estimation;time-varying fields;Kernel;Dictionaries;Signal processing algorithms;Estimation;Approximation algorithms;Temperature measurement;Position measurement},\n  doi = {10.1109/EUSIPCO.2016.7760516},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252196.pdf},\n}\n\n
@InProceedings{7760517,
  author    = {D. Kari and I. Marivani and I. Delibalta and S. S. Kozat},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Boosted LMS-based piecewise linear adaptive filters},
  year      = {2016},
  pages     = {1593-1597},
  abstract  = {We introduce the boosting notion, extensively used in different machine learning applications, to the adaptive signal processing literature and implement several different adaptive filtering algorithms. In this framework, we have several adaptive constituent filters that run in parallel. For each newly received input vector and observation pair, each filter adapts itself based on the performance of the other adaptive filters in the mixture on this current data pair. These relative updates provide the boosting effect, such that the filters in the mixture learn different attributes of the data, providing diversity. The outputs of these constituent filters are then combined using adaptive mixture approaches. We provide the computational complexity bounds for the boosted adaptive filters. The introduced methods demonstrate improvements in the performance of conventional adaptive filtering algorithms due to the boosting effect.},
  keywords  = {adaptive filters;adaptive signal processing;computational complexity;learning (artificial intelligence);least mean squares methods;piecewise linear techniques;boosted LMS-based piecewise linear adaptive filter;machine learning application;adaptive signal processing;adaptive mixture approach;computational complexity;Boosting;Signal processing algorithms;Adaptive filters;Machine learning algorithms;Europe},
  doi       = {10.1109/EUSIPCO.2016.7760517},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252238.pdf},
}
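A hedged toy of the parallel-filters-plus-adaptive-mixture structure described above. The paper's boosting-style cross-updates between filters are not reproduced; the sigmoid-parameterised convex combination below is a standard mixture-of-filters device, and all step sizes are illustrative.

import numpy as np

rng = np.random.default_rng(1)
N, M = 5000, 8
w_true = rng.standard_normal(M)
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

w1, w2 = np.zeros(M), np.zeros(M)        # two LMS filters with different step sizes
a = 0.0                                  # mixing parameter, lam = sigmoid(a)
mu1, mu2, mu_a = 0.05, 0.005, 10.0
for n in range(M - 1, N):
    u = x[n - M + 1:n + 1][::-1]         # regression vector [x[n], ..., x[n-M+1]]
    y1, y2 = w1 @ u, w2 @ u
    lam = 1.0 / (1.0 + np.exp(-a))
    e = d[n] - (lam * y1 + (1 - lam) * y2)
    w1 += mu1 * (d[n] - y1) * u          # each constituent filter adapts on its own error
    w2 += mu2 * (d[n] - y2) * u
    # gradient step on the mixture; a is clipped to keep the sigmoid from saturating
    a = np.clip(a + mu_a * e * (y1 - y2) * lam * (1 - lam), -4.0, 4.0)
print(np.linalg.norm(w1 - w_true), np.linalg.norm(w2 - w_true))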
@InProceedings{7760518,
  author    = {O. Geman and I. Chiuchisan and M. Covasa and K. Eftaxias and S. Sanei and J. G. F. Madeira and R. A. M. Boloy},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Joint EEG-EMG signal processing for identification of the mental tasks in patients with neurological diseases},
  year      = {2016},
  pages     = {1598-1602},
  abstract  = {Correlation size, together with Lyapunov exponents estimated from both electroencephalography (EEG) and electromyography (EMG) signals, are the crucial variables in the classification of mental tasks using an artificial neural network (ANN) classifier for patients suffering from neurological disorders/diseases. The above parameters vary according to the status of the patient, for example, depending on how stressed or relaxed the patient is and which mental task is executed. The signals were acquired from patients with Parkinson's disease while they performed four different mental tasks. The performed mental states, detected with high specificity and accuracy, can help a completely paralyzed (locked-in) person communicate with the environment through brain waves, leading to an increase in their quality of life.},
  keywords  = {biomedical communication;diseases;electroencephalography;electromyography;medical computing;medical signal processing;neural nets;patient diagnosis;signal classification;joint EEG-EMG signal processing;mental task identification;neurological disease;Lyapunov exponent estimation;electromyography signal;electroencephalography signal;mental task classification;artificial neural network classifier;ANN classifier;neurological disorder;Parkinson disease;patient mental state;completely paralyzed person communication;brain wave;life quality enhancement;Electroencephalography;Electromyography;Diseases;Artificial neural networks;Correlation;Signal processing;Europe;Artificial Neural Network;Biomedical Signal Processing;EEG;EMG;Mental Tasks;Neurological Diseases},
  doi       = {10.1109/EUSIPCO.2016.7760518},
  issn      = {2076-1465},
  month     = {Aug},
}
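Only the classification stage can be sketched generically. The features below are synthetic stand-ins for the correlation and Lyapunov-exponent features the paper extracts from EEG/EMG; the network size and all data parameters are illustrative assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
n_per_task, n_feat, n_tasks = 50, 4, 4
# synthetic stand-ins for correlation / Lyapunov-exponent features per trial
X = np.vstack([rng.normal(loc=k, scale=0.8, size=(n_per_task, n_feat))
               for k in range(n_tasks)])
y = np.repeat(np.arange(n_tasks), n_per_task)      # four mental-task labels
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))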
@InProceedings{7760519,
  author    = {F. C. Pinheiro and C. G. Lopes},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Newton-like nonlinear adaptive filters via simple multilinear functionals},
  year      = {2016},
  pages     = {1603-1607},
  abstract  = {In the context of nonlinear system identification, new Affine Projection Algorithm (APA) and NLMS adaptive filters (AFs) are developed over the Simple Multilinear model (SML). Such a model comprises a product of linear filters and allows for an exponential decrease in complexity when compared to the complete Volterra model. The MSE surface is developed in terms of data statistical moments and its gradient vector is presented, with the corresponding Hessian matrix computed in the sequel. The AFs are generated via stochastic approximations for the data moments and a series of non-trivial derivations, resulting in an APA implementation structurally similar to the standard APA recursion. The NLMS algorithm is derived as a particular case. Simulations show good convergence properties when identifying unknown SML and Volterra plants.},
  keywords  = {adaptive filters;Hessian matrices;Volterra equations;Volterra plants;NLMS algorithm;stochastic approximations;MSE;Volterra model;linear filters;simple multilinear model;NLMS adaptive filters;affine projection algorithm;nonlinear systems identification;multilinear functionals;Newton-like nonlinear adaptive filters;Signal processing algorithms;Adaptation models;Matrices;Approximation algorithms;Indexes;Computational modeling},
  doi       = {10.1109/EUSIPCO.2016.7760519},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256524.pdf},
}
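A hedged toy of adapting a product-of-linear-filters model by stochastic gradient, assuming a two-factor bilinear plant y = (aᵀx)(bᵀx); the paper's APA/NLMS derivations and Newton-like corrections over the SML model are not reproduced.

import numpy as np

rng = np.random.default_rng(12)
M, iters, mu = 5, 20000, 0.002               # mu kept small: bilinear LMS can diverge
a_t, b_t = rng.standard_normal(M), rng.standard_normal(M)   # plant factors
a, b = np.ones(M), np.ones(M)                               # adaptive factors
for _ in range(iters):
    x = rng.standard_normal(M)
    ya, yb = a @ x, b @ x
    e = (a_t @ x) * (b_t @ x) - ya * yb      # bilinear plant minus model output
    a, b = a + mu * e * yb * x, b + mu * e * ya * x   # gradient (LMS-type) steps
# the factorisation is only unique up to scaling, so compare model outputs
xt = rng.standard_normal((2000, M))
num = np.mean(((xt @ a_t) * (xt @ b_t) - (xt @ a) * (xt @ b)) ** 2)
den = np.mean(((xt @ a_t) * (xt @ b_t)) ** 2)
print("relative test MSE:", num / den)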
@InProceedings{7760520,
  author    = {L. Rugini and P. Banelli and G. Leus},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Spectrum sensing using energy detectors with performance computation capabilities},
  year      = {2016},
  pages     = {1608-1612},
  abstract  = {We focus on the performance of the energy detector for cognitive radio applications. Our aim is to incorporate, into the energy detector, low-complexity algorithms that compute the performance of the detector itself. The main parameters of interest are the probability of detection and the required number of samples. Since the exact performance analysis involves complicated functions of two variables, such as the regularized lower incomplete Gamma function, we introduce new low-complexity approximations based on algebraic transformations of the one-dimensional Gaussian Q-function. The numerical comparison of the proposed approximations with the exact analysis highlights the good accuracy of the low-complexity computation approach.},
  keywords  = {cognitive radio;computational complexity;Gaussian distribution;radio spectrum management;signal detection;cognitive radio;spectrum sensing;energy detectors;low-complexity computation;one-dimensional Gaussian Q-function;algebraic transformations;low-complexity approximations;gamma function;complicated functions;performance analysis;detection probability;low-complexity algorithms;performance computation capability;Random variables;Gaussian approximation;Signal to noise ratio;Detectors;Performance evaluation;Europe},
  doi       = {10.1109/EUSIPCO.2016.7760520},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256113.pdf},
}
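The exact incomplete-Gamma performance expression and a one-dimensional Gaussian Q-function approximation can be checked numerically. This sketch uses the textbook chi-square and Gaussian forms with unit noise power; the paper's specific low-complexity approximations are not reproduced.

import numpy as np
from scipy.special import gammaincc               # regularized upper incomplete Gamma
from scipy.stats import chi2, norm

N, snr, pfa = 100, 0.5, 1e-2                      # samples, linear SNR, target Pfa
eta = chi2.isf(pfa, df=N)                         # threshold: H0 statistic ~ chi^2_N
# under H1 the statistic divided by (1+snr) is chi^2_N, hence the exact Pd:
pd_exact = gammaincc(N / 2, eta / (2 * (1 + snr)))
# Gaussian approximation: mean N(1+snr), variance 2N(1+snr)^2, Q = norm.sf
pd_gauss = norm.sf((eta - N * (1 + snr)) / (np.sqrt(2 * N) * (1 + snr)))
print("exact Pd:", pd_exact, " Gaussian-approx Pd:", pd_gauss)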
@InProceedings{7760521,
  author    = {A. Hassanien and M. G. Amin and Y. D. Zhang and B. Himed},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {A dual-function MIMO radar-communications system using PSK modulation},
  year      = {2016},
  pages     = {1613-1617},
  abstract  = {In this paper, we develop a new technique for information embedding into the emission of multiple-input multiple-output (MIMO) radar using dual-functionality platforms. A set of orthogonal waveforms occupying the same band is used to implement the primary MIMO radar operation. The secondary communication function is implemented by embedding one phase-shift keying (PSK) communication symbol in each orthogonal waveform, i.e., the number of embedded communication symbols during each radar pulse equals the number of transmit antennas. We show that the communication operation is transparent to the MIMO radar operation. The communication receiver detects the embedded PSK symbols using standard ratio testing. The achievable data rate is proportional to the pulse repetition frequency, the number of transmit elements, and the size of the PSK constellation. The performance of the proposed technique is investigated in terms of the symbol error rate. Simulation examples demonstrate that data rates in the range of several Mbps can be embedded and reliably detected.},
  keywords  = {error statistics;MIMO radar;phase shift keying;radar receivers;transmitting antennas;dual-function MIMO radar-communications system;multiple-input multiple-output radar;embedded PSK modulation;information embedding;dual-functionality platforms;orthogonal waveforms;secondary communication function;phase shift keying;embedded communication symbols;radar pulse;transmit antennas;communication receiver;standard ratio testing;pulse repetition frequency;PSK constellation;symbol error rate;Receivers;MIMO radar;Phase shift keying;Radar antennas;MIMO;Communication symbols},
  doi       = {10.1109/EUSIPCO.2016.7760521},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256228.pdf},
}
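A minimal sketch of the embedding idea, assuming discrete-frequency orthogonal waveforms and a simple matched-filter receiver (the paper's ratio-test detector and waveform design are not reproduced): one QPSK symbol rotates the phase of each transmit waveform, and orthogonality lets the receiver read the phases back.

import numpy as np

rng = np.random.default_rng(3)
M, L = 4, 64                                   # transmit antennas, samples per pulse
# rows of F: orthogonal discrete-frequency waveforms, F @ F.conj().T = I
F = np.exp(2j * np.pi * np.outer(np.arange(M), np.arange(L)) / L) / np.sqrt(L)
syms = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, M) + 1j * np.pi / 4)  # QPSK symbols
tx = syms[:, None] * F                         # one symbol embedded per waveform
rx = tx.sum(axis=0) + 0.05 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
est = F.conj() @ rx                            # matched-filter bank at the receiver
print(np.round(np.angle(syms), 2), np.round(np.angle(est), 2))  # phases should match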
@InProceedings{7760522,
  author    = {D. Korpi and T. Riihonen and M. Valkama},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Self-backhauling full-duplex access node with massive antenna arrays: Power allocation and achievable sum-rate},
  year      = {2016},
  pages     = {1618-1622},
  abstract  = {This paper analyzes a self-backhauling inband full-duplex access node that has massive antenna arrays for transmission and reception. In particular, the optimal transmit powers for such a system are solved in a closed form, taking into account the self-interference as well as backhaul capacity requirements and incorporating the role of the downlink-uplink traffic ratio in sum-rate maximization. Numerical results are also provided, where the obtained analytical expressions are evaluated with realistic system parameter values. All in all, the presented theory and the numerical results provide insights into the proposed system, indicating that a self-backhauling access node could greatly benefit from being capable of inband full-duplex communication.},
  keywords  = {antenna arrays;optimisation;radio reception;radiofrequency interference;telecommunication traffic;downlink-uplink traffic ratio role;backhaul capacity requirement;optimal transmit power;self-backhauling inband full-duplex access node;achievable sum-rate maximization;power allocation;massive antenna array;Downlink;Uplink;Antenna arrays;Interference cancellation;Linear programming;Array signal processing},
  doi       = {10.1109/EUSIPCO.2016.7760522},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255094.pdf},
}
@InProceedings{7760523,
  author    = {V. Tomashevich and I. Polian},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Memory error resilient detection for massive MIMO systems},
  year      = {2016},
  pages     = {1623-1627},
  abstract  = {Massive MIMO systems employing hundreds of antennas at the base station (BS) are considered a breakthrough technology to provide users with high data rates. However, the large number of antennas demands large memories, which are prone to faults due to current aggressive technology downscaling. This paper introduces a novel nonlinear minimum mean square error (NMMSE) based detection algorithm that takes memory errors into account. The proposed detection method is able to handle multiple memory errors with low computational overhead. Simulations show that the proposed solution significantly reduces the impact of multiple memory errors on the bit error rate (BER).},
  keywords  = {error statistics;least mean squares methods;MIMO communication;massive MIMO systems;memory error resilient detection;base station antennas;aggressive technology downscaling;nonlinear minimum mean square error;NMMSE based detection;multiple memory errors;computational overhead;bit error rate;BER;MIMO;Receivers;Antennas;Covariance matrices;Uplink;Probability density function;Europe},
  doi       = {10.1109/EUSIPCO.2016.7760523},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252098.pdf},
}
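For reference, a plain linear MMSE uplink detection baseline under an assumed flat Rayleigh channel and QPSK signalling; the paper's memory-error-aware nonlinear MMSE variant is not reproduced here.

import numpy as np

rng = np.random.default_rng(4)
Nr, K, sigma2 = 64, 8, 0.1                     # BS antennas, users, noise power
H = (rng.standard_normal((Nr, K)) + 1j * rng.standard_normal((Nr, K))) / np.sqrt(2)
s = (np.sign(rng.standard_normal(K)) + 1j * np.sign(rng.standard_normal(K))) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
y = H @ s + n
# linear MMSE estimate: s_hat = (H^H H + sigma2 I)^{-1} H^H y
s_hat = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(K), H.conj().T @ y)
ok = (np.sign(s_hat.real) == np.sign(s.real)) & (np.sign(s_hat.imag) == np.sign(s.imag))
print("fraction of correctly detected QPSK symbols:", ok.mean())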
@InProceedings{7760524,
  author    = {Y. Batany and D. Donno and L. T. Duarte and H. Chauris and Y. Deville and J. M. T. Romano},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {A necessary and sufficient condition for the blind extraction of the sparsest source in convolutive mixtures},
  year      = {2016},
  pages     = {1628-1632},
  abstract  = {This paper addresses sparse component analysis, a powerful framework for blind source separation and extraction that is built upon the assumption that the sources of interest are sparse in a known domain. We propose and discuss a necessary and sufficient condition under which the ℓ0 pseudo-norm can be used as a contrast function in the blind source extraction problem in both instantaneous and convolutive mixing models, when the number of observations is at least equal to the number of sources. The obtained conditions allow us to relax the sparsity constraint of the sources to its maximum limit, with possibly overlapping sources. In particular, the W-disjoint orthogonality assumption of the sources can be discarded. Moreover, no assumption is made on the mixing system except invertibility. A differential evolution algorithm based on a smooth approximation of the ℓ0 pseudo-norm is used to illustrate the benefits brought by our contribution.},
  keywords  = {blind source separation;convolution;evolutionary computation;blind extraction;sparsest source;convolutive mixtures;sparse component analysis;blind source separation;sparsity constraint;differential evolution algorithm;smooth approximation;Blind source separation;Europe;Signal processing algorithms;MIMO;Indexes;Electronic mail;Blind Source Separation;Blind Source Extraction;Convolutive Mixture;Sparse Component Analysis;ℓ0 pseudo-norm;MIMO},
  doi       = {10.1109/EUSIPCO.2016.7760524},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256307.pdf},
}
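A hedged sketch of a smooth ℓ0 surrogate driving a differential-evolution search, on an instantaneous two-channel toy rather than the paper's convolutive setting. The surrogate sum(1 - exp(-y²/2s²)) and the parametrisation of the extractor by a single angle are illustrative choices.

import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(5)
n = 2000
sparse = rng.standard_normal(n) * (rng.random(n) < 0.05)   # mostly-zero source
dense = rng.standard_normal(n)                             # non-sparse source
A = np.array([[1.0, 0.7], [0.4, 1.0]])                     # invertible mixing matrix
X = A @ np.vstack([sparse, dense])                         # two observations

def smooth_l0(theta, s=0.1):
    # smooth approximation of the ell-0 pseudo-norm of the extractor output
    w = np.array([np.cos(theta[0]), np.sin(theta[0])])     # unit-norm extraction vector
    y = w @ X
    return np.sum(1.0 - np.exp(-y ** 2 / (2 * s ** 2)))

res = differential_evolution(smooth_l0, bounds=[(0, np.pi)], seed=0)
w = np.array([np.cos(res.x[0]), np.sin(res.x[0])])
# the extracted signal should be the sparse source (up to sign and scale)
print("|corr| with sparse source:", abs(np.corrcoef(w @ X, sparse)[0, 1]))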
@InProceedings{7760525,
  author    = {F. K. Coutts and J. Corr and K. Thompson and S. Weiss and I. K. Proudler and J. G. McWhirter},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Memory and complexity reduction in parahermitian matrix manipulations of PEVD algorithms},
  year      = {2016},
  pages     = {1633-1637},
  abstract  = {A number of algorithms for the iterative calculation of a polynomial matrix eigenvalue decomposition (PEVD) have been introduced. The PEVD is a generalisation of the ordinary EVD and will diagonalise a parahermitian matrix via paraunitary operations. This paper addresses savings - both computationally and in terms of memory use - that exploit the parahermitian structure of the matrix being decomposed, and also suggests an implicit trimming approach to efficiently curb the polynomial order growth usually observed during iterations of the PEVD algorithms. We demonstrate that with the proposed techniques, both storage and computations can be significantly reduced, impacting on a number of broadband multichannel problems.},
  keywords  = {channel coding;eigenvalues and eigenfunctions;Hermitian matrices;MIMO communication;polynomial matrices;paraHermitian matrix manipulations;iterative calculation;polynomial matrix eigenvalue decomposition;PEVD algorithms;implicit trimming approach;polynomial order growth;broadband multichannel problems;Signal processing algorithms;Approximation algorithms;Covariance matrices;Matrix decomposition;Broadband communication;Memory management;Europe},
  doi       = {10.1109/EUSIPCO.2016.7760525},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256312.pdf},
}
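The trimming idea can be sketched on a polynomial (parahermitian-style) matrix stored as an array of lag slices; the energy-fraction thresholding rule below is illustrative, not the paper's exact scheme.

import numpy as np

def trim_lags(R, keep=0.999):
    # R: (2T+1, M, M) array of lag slices, lag 0 at the centre index T;
    # peel symmetric outer lags while the discarded Frobenius energy
    # stays below a (1 - keep) fraction of the total.
    energy = np.sum(np.abs(R) ** 2, axis=(1, 2))
    total, T = energy.sum(), (R.shape[0] - 1) // 2
    lo, hi = 0, R.shape[0] - 1
    while lo < T and hi > T:
        discarded = energy[:lo].sum() + energy[hi + 1:].sum()
        if discarded + energy[lo] + energy[hi] > (1 - keep) * total:
            break
        lo, hi = lo + 1, hi - 1
    return R[lo:hi + 1]

rng = np.random.default_rng(6)
T, M = 10, 3
decay = np.exp(-np.abs(np.arange(-T, T + 1)))[:, None, None]   # energy decays with |lag|
R = rng.standard_normal((2 * T + 1, M, M)) * decay
print(R.shape, "->", trim_lags(R).shape)        # fewer lags, almost all energy kept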
@InProceedings{7760526,
  author    = {N. Hahn and S. Spors},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Comparison of continuous measurement techniques for spatial room impulse responses},
  year      = {2016},
  pages     = {1638-1642},
  abstract  = {A large number of spatial room impulse responses can be measured efficiently by using a moving microphone in combination with a time-varying system identification method. The microphone moves on a predefined trajectory and captures the response of the acoustic system, which is periodically excited. The instantaneous impulse responses are computed from the captured signal by taking the time-variance explicitly into account. In this paper, three different continuous measurement techniques are investigated and compared in a unified framework. It is shown that impulse response estimation constitutes a spatial interpolation process, where each method corresponds to a specific interpolation filter. In numerical simulations, the performance of these approaches is evaluated in terms of system distance and spatial bandwidth.},
  keywords  = {acoustic signal processing;acoustic variables measurement;architectural acoustics;filtering theory;interpolation;microphones;continuous measurement techniques;spatial room impulse responses;moving microphone;time varying system identification method;impulse response estimation;spatial interpolation process;spatial bandwidth;Interpolation;Microphones;Measurement techniques;Bandwidth;Europe;Signal processing;Acoustic measurements},
  doi       = {10.1109/EUSIPCO.2016.7760526},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252215.pdf},
}
@InProceedings{7760527,
  author    = {E. Hadad and S. Doclo and S. Gannot},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {A generalized binaural MVDR beamformer with interferer relative transfer function preservation},
  year      = {2016},
  pages     = {1643-1647},
  abstract  = {In addition to interference and noise reduction, an important objective of binaural speech enhancement algorithms is the preservation of the binaural cues of both the target and the undesired sound sources. For directional sources, this can be achieved by preserving the relative transfer function (RTF). The recently proposed binaural minimum variance distortionless response (BMVDR) beamformer preserves the RTF of the target, but typically distorts the RTF of the interfering sources. Recently, two extensions of the BMVDR beamformer were proposed preserving the RTFs of both the target and the interferer, namely, the binaural linearly constrained minimum variance (BLCMV) and the BMVDR-RTF beamformers. In this paper, we generalize the BMVDR-RTF to trade off interference reduction and noise reduction. Three special cases of the proposed beamformer are examined, either maximizing the signal-to-interference-and-noise ratio (SINR), the signal-to-noise ratio (SNR), or the signal-to-interference ratio (SIR). Experimental validations in an office environment validate our theoretical results.},
  keywords  = {array signal processing;speech enhancement;transfer functions;generalized binaural MVDR beamformer;interferer relative transfer function preservation;interference reduction;binaural speech enhancement algorithm;binaural cue preservation;binaural minimum variance distortionless response beamformer;BMVDR-RTF beamformer;BLCMV beamformer;binaural linearly constrained minimum variance beamformer;noise reduction;signal-to-interference-and-noise ratio;signal-to-noise ratio;signal-to-interference ratio;SIR;SNR;SINR;Interference;Signal to noise ratio;Noise measurement;Acoustic distortion;Microphones;Cost function},
  doi       = {10.1109/EUSIPCO.2016.7760527},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252245.pdf},
}
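For context, the single-constraint MVDR solution w = R⁻¹d/(dᴴR⁻¹d) preserves the target steering/transfer vector d by construction; the paper's binaural, interferer-RTF-preserving extensions add constraints not shown in this hedged per-frequency sketch.

import numpy as np

rng = np.random.default_rng(7)
M = 6                                          # microphones
d = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # target steering/RTF vector
v = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # interferer vector
R = 2.0 * np.outer(v, v.conj()) + 0.1 * np.eye(M)          # interference + noise covariance
Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d.conj() @ Rinv_d)               # MVDR (distortionless) weights
print(abs(w.conj() @ d), abs(w.conj() @ v))    # ~1 for the target, small for the interferer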
@InProceedings{7760528,
  author    = {N. Murata and H. Kameoka and K. Kinoshita and S. Araki and T. Nakatani and S. Koyama and H. Saruwatari},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Reverberation-robust underdetermined source separation with non-negative tensor double deconvolution},
  year      = {2016},
  pages     = {1648-1652},
  abstract  = {Source separation using an ad hoc microphone array can be useful for enhancing speech in such applications as teleconference systems without the need to prepare special devices. However, the positions of the sources (and the microphones when using an ad hoc microphone array) can change during recording, thus violating the commonly made assumption in many source separation algorithms that the mixing system is time-invariant. This paper proposes an extension of the multichannel nonnegative matrix factorization (NMF) approach to deal with the problem of underdetermined source separation in time-variant reverberant environments. The proposed method models the mixing system as a non-negative convolutive mixture based on the concept of a “semi-time-variant system” to handle the reverberation in a room as well as to allow for relatively small changes in the source/microphone positions. It also models the power spectrogram of each sound source using the convolutive NMF model to consider the local dynamics of speech.},
  keywords  = {deconvolution;matrix decomposition;microphone arrays;reverberation;source separation;speech enhancement;tensors;reverberation-robust underdetermined source separation;nonnegative tensor double deconvolution;ad hoc microphone array;speech enhancement;mixing system;multichannel nonnegative matrix factorization approach;time-variant reverberant environment;nonnegative convolutive mixture;semitime-variant system concept;power spectrogram;Spectrogram;Microphone arrays;Source separation;Time-frequency analysis;Speech;Europe},
  doi       = {10.1109/EUSIPCO.2016.7760528},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252361.pdf},
}
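The NMF building block underlying this model can be sketched with the standard Euclidean multiplicative updates on a non-negative matrix; the paper's non-negative convolutive double-deconvolution extension is not reproduced here.

import numpy as np

rng = np.random.default_rng(13)
F, T, K = 64, 100, 4
V = rng.random((F, K)) @ rng.random((K, T))      # synthetic non-negative spectrogram
W, H = rng.random((F, K)) + 0.1, rng.random((K, T)) + 0.1
for _ in range(200):                             # Lee-Seung multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ (H @ H.T) + 1e-12)
print("relative fit error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))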
@InProceedings{7760529,
  author    = {J. K. Nielsen and T. L. Jensen and J. R. Jensen and M. G. Christensen and S. H. Jensen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Grid size selection for nonlinear least-squares optimisation in spectral estimation and array processing},
  year      = {2016},
  pages     = {1653-1657},
  abstract  = {In many spectral estimation and array processing problems, the process of finding estimates of model parameters often involves the optimisation of a cost function containing multiple peaks and dips. Such non-convex problems are hard to solve using traditional optimisation algorithms developed for convex problems, and computationally intensive grid searches are therefore often used instead. In this paper, we establish an analytical connection between the grid size and the parametrisation of the cost function so that the grid size can be selected as coarsely as possible to lower the computation time. Additionally, we show via three common examples how the grid size depends on parameters such as the number of data points or the number of sensors in DOA estimation. We also demonstrate that the computation time can potentially be lowered by several orders of magnitude by combining a coarse grid search with a local refinement step.},
  keywords  = {array signal processing;direction-of-arrival estimation;least squares approximations;optimisation;grid size selection;nonlinear least-squares optimisation;spectral estimation;array processing;model parameters;cost function;DOA estimation;sensors;coarse grid search;Cost function;Frequency estimation;Maximum likelihood estimation;Computational modeling;Direction-of-arrival estimation;Optimisation;DOA estimation;fundamental frequency estimation;periodogram},
  doi       = {10.1109/EUSIPCO.2016.7760529},
  issn      = {2076-1465},
  month     = {Aug},
}
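A hedged sketch of the coarse-grid-plus-refinement strategy for single-frequency NLS estimation: evaluate the periodogram on a coarse grid, then polish the best grid point with a bounded scalar optimiser. The grid spacing 1/(2N) below is an ad hoc choice, not the paper's derived bound.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
N, f0 = 256, 0.1237
t = np.arange(N)
x = np.cos(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(N)

# negative periodogram as the NLS cost for a single sinusoid
cost = lambda f: -np.abs(np.exp(-2j * np.pi * f * t) @ x) ** 2 / N
grid = np.arange(0.0, 0.5, 1.0 / (2 * N))            # coarse grid, spacing 1/(2N)
k = min(range(len(grid)), key=lambda i: cost(grid[i]))
res = minimize_scalar(cost, bounds=(grid[k] - 1 / (2 * N), grid[k] + 1 / (2 * N)),
                      method='bounded')              # local refinement around best bin
print(f0, grid[k], res.x)                            # refined estimate close to f0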
@InProceedings{7760530,
  author    = {Y. Biderman and B. Rafaely and S. Gannot and S. Doclo},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Efficient relative transfer function estimation framework in the spherical harmonics domain},
  year      = {2016},
  pages     = {1658-1662},
  abstract  = {In acoustic conditions with reverberation and coherent sources, various spatial filtering techniques, such as the linearly constrained minimum variance (LCMV) beamformer, require accurate estimates of the relative transfer functions (RTFs) between the sensors with respect to the desired speech source. However, the time-domain support of these RTFs may affect the estimation accuracy in several ways. First, short RTFs justify the multiplicative transfer function (MTF) assumption when the length of the signal time frames is limited. Second, they require fewer parameters to be estimated, hence reducing the effect of noise and model errors. In this paper, a spherical microphone array based framework for RTF estimation is presented, where the signals are transformed to the spherical harmonics (SH)-domain. The RTF time-domain supports are studied under different acoustic conditions, showing that SH-domain RTFs are shorter compared to conventional space-domain RTFs.},
  keywords  = {array signal processing;harmonics;microphone arrays;reverberation;spatial filters;time-domain analysis;transfer functions;model error reduction;spherical microphone array-based framework;noise effect reduction;MTF;multiplicative transfer function;RTF time-domain;relative transfer function estimation framework;spatial filtering technique;coherent source;reverberation;spherical harmonics domain;Time-domain analysis;Acoustics;Transfer functions;Estimation;Microphone arrays;Europe},
  doi       = {10.1109/EUSIPCO.2016.7760530},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256082.pdf},
}
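A hedged space-domain sketch of RTF estimation between two channels via Welch (cross-)spectra, RTF(f) = S_x1x2(f)/S_x1x1(f); the paper's point, that performing this after a spherical-harmonics transform yields shorter time-domain supports, is not reproduced here. The toy impulse responses h1, h2 are illustrative.

import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(9)
s = rng.standard_normal(16384)                    # source signal
h1 = np.array([1.0, 0.5, 0.2])                    # toy impulse responses to the two mics
h2 = np.array([0.0, 0.8, 0.3, 0.1])
x1 = np.convolve(s, h1)[:len(s)]
x2 = np.convolve(s, h2)[:len(s)]
f, S11 = welch(x1, nperseg=512)                   # auto-spectrum of the reference mic
_, S21 = csd(x1, x2, nperseg=512)                 # SciPy csd(x, y) averages conj(X) * Y
rtf = S21 / S11                                   # estimated RTF H2(f) / H1(f)
H1, H2 = np.fft.rfft(h1, 512), np.fft.rfft(h2, 512)
print("max abs RTF error:", np.max(np.abs(rtf - H2 / H1)))   # approximately recovered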
@InProceedings{7760531,
  author    = {Z. Koldovský and F. Nesta and P. Tichavský and N. Ono},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Frequency-domain blind speech separation using incomplete de-mixing transform},
  year      = {2016},
  pages     = {1663-1667},
  abstract  = {We propose a novel solution to the blind speech separation problem where the de-mixing transform is estimated only within selected frequency bins. This solution is based on Independent Vector Analysis applied to a subset of instantaneous mixtures, one per selected frequency bin. Next, two approaches are proposed to complete the transform: one based on null beamforming, and the other based on convex programming. In subsequent experiments, we compare combinations of both methods and evaluate their ability to retrieve the whole de-mixing transform. Depending on the number of selected frequencies and the sparsity of room impulse responses, the methods show improvements in terms of computational complexity as well as in terms of separation accuracy.},
  keywords  = {blind source separation;computational complexity;convex programming;frequency-domain analysis;speech processing;transforms;vectors;separation accuracy;computational complexity;room impulse responses;convex programming;null beamforming;instantaneous mixtures;independent vector analysis;frequency bins;incomplete de-mixing transform;frequency-domain blind speech separation;Transforms;Speech;Frequency-domain analysis;Microphones;Signal to noise ratio;Europe;Blind Source Separation;Independent Vector Analysis;Relative Transfer Function;Sparse Reconstruction;Convex Optimization},
  doi       = {10.1109/EUSIPCO.2016.7760531},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256218.pdf},
}
@InProceedings{7760532,
  author    = {C. S. Reddy and R. M. Hegde},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Horizontal plane HRTF interpolation using linear phase constraint for rendering spatial audio},
  year      = {2016},
  pages     = {1668-1672},
  abstract  = {In this paper, a novel method of horizontal-plane Head Related Transfer Function (HRTF) interpolation is proposed. As the Interaural Time Difference (ITD) manifests itself as an interaural phase difference in the spectral domain, linear phase constraints are imposed to compute the interpolated HRTFs. Hence, the phase of the interpolated HRTF is constrained to be a weighted linear combination of the phases of adjacent HRTFs. A weighted l2-norm error minimization is performed using these linear phase constraints. The performance of the proposed HRTF interpolation method is evaluated by computing the Root Mean Squared Error (RMSE) between the interpolated HRTF and the ground-truth HRTF obtained from the CIPIC and SYMARE databases. Subjective evaluation is performed on binaural audio rendered using the interpolated HRTFs. Both the RMSE and the subjective evaluations indicate that the proposed method performs better than conventional HRTF interpolation methods.},
  keywords  = {audio signal processing;interpolation;mean square error methods;rendering (computer graphics);transfer functions;horizontal plane HRTF interpolation method;linear phase constraint;spatial audio rendering;horizontal plane head related transfer function interpolation method;ITD;interaural time difference;spectral domain;interaural phase difference;weighted l2 norm error minimization;root mean squared error method;RMSE method;ground truth HRTF;CIPIC database;SYMARE database;binaural audio;Interpolation;Databases;Minimization;Delays;Azimuthal angle;Europe;Signal processing},
  doi       = {10.1109/EUSIPCO.2016.7760532},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256227.pdf},
}
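A hedged sketch of phase-constrained interpolation between two neighbouring transfer functions: the interpolated phase is a weighted linear combination of the neighbours' unwrapped phases and the magnitude is interpolated likewise. The toy "HRTFs" with pure-delay phases and the weight w are illustrative; the paper's constrained weighted-l2 solve is not reproduced.

import numpy as np

nfft = 256
f = np.arange(nfft // 2 + 1)
# toy transfer functions at two adjacent azimuths: delays (an ITD proxy) + smooth magnitudes
H_a = (1.0 + 0.3 * np.cos(2 * np.pi * f / 64)) * np.exp(-2j * np.pi * f * 3.0 / nfft)
H_b = (1.0 + 0.3 * np.cos(2 * np.pi * f / 80)) * np.exp(-2j * np.pi * f * 5.0 / nfft)
w = 0.25                                          # target azimuth 1/4 of the way a -> b
mag = (1 - w) * np.abs(H_a) + w * np.abs(H_b)
phase = (1 - w) * np.unwrap(np.angle(H_a)) + w * np.unwrap(np.angle(H_b))
H_i = mag * np.exp(1j * phase)
# the linear-phase combination interpolates the delay: 0.75*3 + 0.25*5 = 3.5 samples
print("interpolated delay (samples):", -np.angle(H_i[1]) * nfft / (2 * np.pi))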
@InProceedings{7760533,
  author    = {N. Stefanakis and A. Mouchtaris},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Capturing and reproduction of a crowded sound scene using a circular microphone array},
  year      = {2016},
  pages     = {1673-1677},
  abstract  = {Over the years, different spatial audio techniques have been proposed as means to capture, encode and reproduce the spatial properties of acoustic fields, yet specific issues need to be addressed each time in accordance with the type of microphone array used as well as with the technology used for reproduction. Using a circular array of omnidirectional microphones, we formulate in this paper a parametric and a non-parametric approach for capturing and reproducing the crowded acoustic environment of a football stadium. A listening test reveals the advantages and disadvantages of each approach in connection with the particularities of the acoustic environment.},
  keywords  = {acoustic signal processing;array signal processing;microphone arrays;crowded sound scene;circular omnidirectional microphone array;spatial audio technique;acoustic field spatial properties;parametric approach;nonparametric approach;crowded acoustic environment;football stadium;Loudspeakers;Sensor arrays;Microphone arrays;Acoustics;Direction-of-arrival estimation;Array signal processing},
  doi       = {10.1109/EUSIPCO.2016.7760533},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256291.pdf},
}
@InProceedings{7760534,
  author    = {E. Lagunas and S. K. Sharma and S. Chatzinotas and B. Ottersten},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title     = {Compressive sensing based energy detector},
  year      = {2016},
  pages     = {1678-1682},
  abstract  = {While most research in Compressive Sensing (CS) has focused on reconstruction of a sparse signal from far fewer samples than required by the Shannon-Nyquist sampling theorem, there has recently been growing interest in performing signal processing directly in the measurement domain. This new area of research is known in the literature as Compressive Signal Processing (CSP). In this paper, we consider the detection problem using a reduced set of measurements, focusing on the Energy Detector (ED), which is the optimal Neyman-Pearson (NP) detector for random signals in Gaussian noise. In particular, we provide simple closed-form expressions for evaluating the detection performance of the ED when considering compressive measurements. The resulting equations reflect the loss due to CS and allow the minimum number of samples needed to achieve a certain detection performance to be determined.},
  keywords  = {compressed sensing;Gaussian noise;compressive sensing;energy detector;sparse signal reconstruction;measurement domain;compressive signal processing;optimal Neyman-Pearson detector;random signals;Gaussian noise;compressive measurements;Detectors;Probability;Signal to noise ratio;Europe;Compressed sensing;Signal detection;Compressive Signal Processing;Compressive Sensing;Energy Detection},
  doi       = {10.1109/EUSIPCO.2016.7760534},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256092.pdf},
}
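A Monte-Carlo sketch of energy detection from M < N compressive measurements versus full-rate detection, using an orthonormal-row measurement matrix so that the H0 statistics stay exactly chi-square; the paper's closed-form loss expressions are not reproduced, and all parameter values are illustrative.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(10)
N, M, snr, trials = 256, 64, 0.2, 4000
Phi = np.linalg.qr(rng.standard_normal((N, M)))[0].T     # orthonormal-row measurement matrix
det_full, det_cs = np.empty(trials), np.empty(trials)
for i in range(trials):
    x = np.sqrt(snr) * rng.standard_normal(N) + rng.standard_normal(N)   # H1 sample
    det_full[i] = np.sum(x ** 2)                         # N-sample energy statistic
    det_cs[i] = np.sum((Phi @ x) ** 2)                   # M-measurement statistic
eta_full = chi2.isf(0.01, df=N)                          # thresholds for Pfa = 1%
eta_cs = chi2.isf(0.01, df=M)
print("Pd full:", (det_full > eta_full).mean(),
      " Pd compressive:", (det_cs > eta_cs).mean())      # CS incurs a detection loss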
@InProceedings{7760535,\n  author = {X. V. Yang and A. P. Petropulu},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {SAR imaging using the sparse Fourier transform},\n  year = {2016},\n  pages = {1683-1687},\n  abstract = {In wide-bandwidth high-resolution synthetic aperture radar (SAR), high sampling rates place heavy demands on computation and storage. This paper exploits the sparsity of the electromagnetic reflectivity of far-field targets in the range-azimuth domain to propose a sparse Fourier transform (SFT) based range-Doppler (RD) algorithm for SAR imaging. The proposed algorithm ensures the same resolution as the RD algorithm with computational complexity O(K log2 K), where K is of the order of the target scene sparsity, while employing only O(K log2 N) samples in the azimuth direction and O(K log2 Nt) in the range direction, where N and Nt denote the number of Nyquist sampling points in the azimuth and range directions, respectively.},\n  keywords = {Fourier transforms;radar imaging;synthetic aperture radar;SAR imaging;sparse Fourier transform;wide-bandwidth synthetic aperture radar;high-resolution synthetic aperture radar;electromagnetic reflectivity;far-field targets;range-azimuth domain;range-Doppler algorithm;computational complexity;target scene sparsity;Nyquist sampling points;Azimuth;Synthetic aperture radar;Imaging;Fourier transforms;Radar polarimetry;Image resolution},\n  doi = {10.1109/EUSIPCO.2016.7760535},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256465.pdf},\n}\n\n
@InProceedings{7760536,\n  author = {V. Naranjo and C. J. Saez and S. Morales and K. Engan and S. Gómez},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Optic cup characterization through sparse representation and dictionary learning},\n  year = {2016},\n  pages = {1688-1692},\n  abstract = {This paper describes how to construct a probability map using sparse representation and dictionary learning to indicate the probability of each optic disk pixel belonging to the optic cup. This probability map will be used in the future as input to a method for automatically detecting glaucoma from color fundus images. The probability map was obtained by constructing a model (using the Bayes classifier) that takes into account texture information, by means of sparse representation and the RLS-DLA dictionary learning technique, as well as intensity information. Several experiments on a private database are presented in this work. The results are compared with the segmentation made by specialists, highlighting the promising performance of this technique in difficult cases where the optic cup is barely visible.},\n  keywords = {Bayes methods;signal processing;optic cup characterization;sparse representation;probability map;optic disk pixel;color fundus images;Bayes classifier;RLS-DLA dictionary learning technique;private database;Dictionaries;Optical imaging;Optical signal processing;Training;Adaptive optics;Feature extraction;Image segmentation},\n  doi = {10.1109/EUSIPCO.2016.7760536},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251903.pdf},\n}\n\n
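scikit-learn has no RLS-DLA implementation, so the sketch below substitutes MiniBatchDictionaryLearning for the texture model and GaussianNB as the Bayes classifier; the patch size, number of atoms and use of the patch mean as the intensity feature are all assumptions, and training and scoring on the same image is a deliberate simplification.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.naive_bayes import GaussianNB

def cup_probability_map(disc_img, cup_mask, patch=(8, 8), n_atoms=32):
    """Per-patch probability of belonging to the optic cup, from sparse
    texture codes plus a mean-intensity feature and a Bayes classifier."""
    patches = extract_patches_2d(disc_img, patch).astype(float)
    X = patches.reshape(len(patches), -1)
    intensity = X.mean(axis=1, keepdims=True)      # intensity feature
    Xc = X - intensity                             # texture component
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=5,
                                       random_state=0)
    codes = dico.fit(Xc).transform(Xc)             # sparse codes
    y = extract_patches_2d(cup_mask.astype(float), patch)
    y = y.reshape(len(y), -1).mean(axis=1) > 0.5   # patch-level label
    feats = np.hstack([codes, intensity])
    return GaussianNB().fit(feats, y).predict_proba(feats)[:, 1]
```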
@InProceedings{7760537,\n  author = {Y. Li and A. Toma and B. Sixou and F. Peyrin},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Super-resolution/segmentation of 2D trabecular bone images by a Mumford-Shah approach and comparison to total variation},\n  year = {2016},\n  pages = {1693-1697},\n  abstract = {The analysis of trabecular bone microstructure from in-vivo CT images is still limited due to insufficient spatial resolution. The goal of this work is to address both the problem of increasing the resolution of the image and that of segmenting the bone structure. To this aim, we investigate the joint super-resolution/segmentation problem with an approach based on the Mumford-Shah model. The validation of the method is performed on blurred, noisy and down-sampled images. A comparison of the reconstruction results with Total Variation regularization is shown.},\n  keywords = {bone;computerised tomography;image resolution;image segmentation;medical image processing;image super-resolution;image segmentation;2D trabecular bone images;Mumford-Shah approach;trabecular bone microstructure;in-vivo CT images;spatial resolution;total variation regularization;Bones;Spatial resolution;TV;Image reconstruction;Signal resolution;Image segmentation;Super-resolution/segmentation;Mumford-Shah;total variation;alternating minimization;3D CT image;bone micro-architecture},\n  doi = {10.1109/EUSIPCO.2016.7760537},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251916.pdf},\n}\n\n
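The Mumford-Shah functional itself needs more machinery, but the Total Variation baseline the paper compares against can be sketched as a simple smoothed-TV gradient descent; the average-pooling downsampling operator, step size and regularization weight below are assumptions.

```python
import numpy as np

def _grad(u):                       # forward differences, periodic
    return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

def _div(gx, gy):                   # negative adjoint of _grad
    return (gx - np.roll(gx, 1, 0)) + (gy - np.roll(gy, 1, 1))

def tv_super_resolution(low, factor=2, lam=0.1, steps=300, tau=0.2, eps=1e-3):
    """Minimise ||D u - low||^2 + lam * TV_eps(u) by gradient descent,
    with D taken to be average-pooling downsampling (an assumption)."""
    H, W = low.shape
    u = np.kron(low, np.ones((factor, factor)))    # initial guess
    for _ in range(steps):
        pooled = u.reshape(H, factor, W, factor).mean(axis=(1, 3))
        data_grad = np.kron(pooled - low, np.ones((factor, factor)))
        data_grad /= factor ** 2                   # adjoint of pooling
        gx, gy = _grad(u)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)     # smoothed TV
        u -= tau * (data_grad - lam * _div(gx / mag, gy / mag))
    return u
```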
@InProceedings{7760538,\n  author = {L. Wang and B. Sixou and F. Peyrin},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-level tomography reconstructions with level-set and TV regularization methods},\n  year = {2016},\n  pages = {1698-1702},\n  abstract = {The discrete tomographic reconstruction problem is generally considered for binary images. In this work, we consider the reconstruction of an image with more than two grey levels and compare two reconstruction methods. The first one is based on a classical TV regularization and the second one is a level-set regularization method. In this second method, the discrete tomographic problem is formulated as a shape optimization problem with several level-set functions and regularized with Total Variation-Sobolev terms. The two methods are applied to an image of size 128 × 128, with several levels of additive Gaussian noise on the raw projection data and several numbers of projections.},\n  keywords = {Gaussian noise;image colour analysis;image reconstruction;optimisation;tomography;multilevel tomography reconstructions;total variation-Sobolev terms;TV regularization methods;discrete tomographic reconstruction problem;binary image;grey levels;level-set regularization method;shape optimization problem;level-set functions;additive Gaussian noises;Image reconstruction;TV;Tomography;Signal processing algorithms;Noise level;Inverse problems;Europe;X-ray imaging;TV regularization;discrete tomography;level-set regularization;inverse problems},\n  doi = {10.1109/EUSIPCO.2016.7760538},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251936.pdf},\n}\n\n
@InProceedings{7760539,\n  author = {Y. Yoshida and K. Fujisaku and K. Sasaki and T. Yuasa and K. Shibuya},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Semi-automatic detection of calcified plaque in coronary CT angiograms with 320-MSCT},\n  year = {2016},\n  pages = {1703-1707},\n  abstract = {Coronary CT angiography (Coronary Computed Tomography Angiography) by MSCT (Multi-Slice Computed Tomography) offers not only a diagnostic capability matching that of CAG (Coronary Angiography), the gold standard of cardiovascular diagnosis, but also a much less invasive examination. However, if calcified plaque adheres to a vessel wall, high-brightness shading due to calcium in the plaque, called the blooming artefact, makes it difficult to diagnose the region around the plaque. In this study, we propose a method to semi-automatically detect and remove calcified plaques, which hinder diagnosing a stenosed coronary artery, from a CCTA image. In addition, an analysis method to accurately and objectively measure the angiostenosis rate is provided.},\n  keywords = {cardiovascular system;computerised tomography;medical image processing;object detection;blooming artefacts;vessel wall;cardiovascular diagnosis;MultiSlice Computed Tomography;Coronary Computed Tomography Angiography;coronary CT angiograms;calcified plaque;semiautomatic detection;Computed tomography;Arteries;Eigenvalues and eigenfunctions;Angiography;Europe;MSCT (multi-slice computed tomography);CTA (computed tomographic angiography);coronary artery;calcified plaque},\n  doi = {10.1109/EUSIPCO.2016.7760539},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251968.pdf},\n}\n\n
@InProceedings{7760540,\n  author = {M. Giacalone and C. Frindel and D. Rousseau},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {An unsupervised spatio-temporal regularization for perfusion MRI deconvolution in acute stroke},\n  year = {2016},\n  pages = {1708-1712},\n  abstract = {We consider the ill-posed inverse problem encountered in perfusion magnetic resonance imaging (MRI) analysis due to the necessity of eliminating, via a deconvolution process, the imprint of the arterial input function on the MR signals. Until recently, this deconvolution process was realized independently voxel by voxel with a sole temporal regularization despite the knowledge that the ischemic lesion in acute stroke can reasonably be considered piecewise continuous. A new promising algorithm incorporating a spatial regularization to avoid spurious spatial artifacts and preserve the shape of the lesion was introduced [1]. So far, the optimization of the spatio-temporal regularization parameters of the deconvolution algorithm was supervised. In this communication, we evaluate the potential of the L-hypersurface method in selecting the spatio-temporal regularization parameters in an unsupervised way and discuss the possibility of automating this method. This is demonstrated quantitatively with an in silico approach using digital phantoms simulated with realistic lesion shapes.},\n  keywords = {biomedical MRI;blood vessels;inverse problems;medical disorders;medical image processing;L-hypersurface method;deconvolution algorithm;ischemic lesion;MR signal;arterial input function;perfusion MRI deconvolution;perfusion magnetic resonance imaging;inverse problem;acute stroke;Deconvolution;Shape;Magnetic resonance imaging;Cost function;Lesions;Signal processing algorithms},\n  doi = {10.1109/EUSIPCO.2016.7760540},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252084.pdf},\n}\n\n
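The L-hypersurface extends the classical L-curve to several regularization parameters. The hedged sketch below shows the one-parameter L-curve corner rule (maximum curvature of log residual norm versus log solution norm) for a generic Tikhonov problem; it conveys the selection principle without reproducing the paper's spatio-temporal deconvolution setting.

```python
import numpy as np

def l_curve_corner(A, b, lambdas):
    """Tikhonov parameter at the corner (maximum curvature) of the
    L-curve: log residual norm versus log solution norm."""
    rho, eta = [], []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(x)))
    rho, eta, t = np.array(rho), np.array(eta), np.log(lambdas)
    d1r, d1e = np.gradient(rho, t), np.gradient(eta, t)
    d2r, d2e = np.gradient(d1r, t), np.gradient(d1e, t)
    kappa = (d1r * d2e - d2r * d1e) / (d1r ** 2 + d1e ** 2) ** 1.5
    return lambdas[np.argmax(kappa)]

# toy usage on a random ill-posed least-squares problem
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))
b = A @ rng.standard_normal(40) + 0.1 * rng.standard_normal(60)
best_lam = l_curve_corner(A, b, np.logspace(-4, 2, 50))
```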
@InProceedings{7760541,\n  author = {A. Naseem and M. Jabloun and O. Buttelli and P. Ravier},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Detection of sEMG muscle activation intervals using Gaussian Mixture Model and Ant Colony Classifier},\n  year = {2016},\n  pages = {1713-1717},\n  abstract = {A new efficient and user-independent technique for the detection of muscle activation (MA) intervals is proposed, based on the Gaussian Mixture Model (GMM) and the Ant Colony Classifier (AntCC). First, time and frequency features are extracted from the surface electromyography (sEMG) signals. Then, the GMM is used to cluster these extracted features into burst and non-burst classes. These features, together with their class labels, are then used as input to the AntCC algorithm in order to derive classification rules. Finally, the obtained rules are used to detect the sEMG activation timing of human skeletal muscles during movement. The performance of the proposed technique is demonstrated on both synthetic sEMG signals and real ones. The proposed technique is then compared to two previously published techniques: a wavelet transform-based method [1] and a double threshold-based method [2]. It is concluded that our technique outperforms those methods and significantly improves the accuracy of MA timing detection. Moreover, to our knowledge, the proposed technique is the first user-independent one, since no tuning parameters are required. Our findings show that the proposed method is convenient for automatically processing large amounts of sEMG signals, with performance beyond that of the state-of-the-art methods.},\n  keywords = {ant colony optimisation;electromyography;feature extraction;Gaussian processes;medical signal processing;mixture models;muscle;signal classification;time-frequency analysis;sEMG muscle activation interval detection;Gaussian mixture model;ant colony classifier;user-independent technique;MA interval detection;GMM;time-frequency feature extraction;surface electromyography signal;AntCC algorithm;human skeletal muscle sEMG activation timing detection;MA timing detection;Feature extraction;Timing;Signal processing algorithms;Muscles;Discrete wavelet transforms;Signal processing;Gaussian mixture model;Activation Timing Detection;Onset detection;Timing Off Detection;Ant Colony Classifier (AntCC);Gaussian mixture model (GMM)},\n  doi = {10.1109/EUSIPCO.2016.7760541},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252092.pdf},\n}\n\n
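A minimal version of the GMM clustering stage might look as follows; RMS and mean frequency are plausible stand-ins for the paper's time and frequency features, and the window length is an assumption. The AntCC rule-derivation step is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def detect_bursts(emg, fs, win=0.05):
    """Cluster short-time sEMG frames into burst / non-burst with a
    two-component GMM and return a per-sample burst mask."""
    n = int(win * fs)
    frames = emg[: len(emg) // n * n].reshape(-1, n)
    rms = np.sqrt((frames ** 2).mean(axis=1))                  # time feature
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    f = np.fft.rfftfreq(n, 1.0 / fs)
    mnf = (spec * f).sum(axis=1) / (spec.sum(axis=1) + 1e-12)  # freq feature
    X = np.column_stack([np.log(rms + 1e-12), mnf])
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    burst_comp = np.argmax(gmm.means_[:, 0])   # higher-RMS component = burst
    return np.repeat(gmm.predict(X) == burst_comp, n)
```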
@InProceedings{7760542,\n  author = {F. Argenti and L. Facheris and L. Giarré},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive quadratic regularization for baseline wandering removal in wearable ECG devices},\n  year = {2016},\n  pages = {1718-1722},\n  abstract = {The electrocardiogram (ECG) is one of the most important physiological signals for monitoring the health status of a patient. Technological advances allow the size and weight of ECG acquisition devices to be strongly reduced, so that wearable systems are now available, even though their computational power and memory capacity are generally limited. An ECG signal is affected by several artifacts, among which the baseline wandering (BW), i.e., a slowly varying trend superimposed on the signal, represents a major disturbance. Several algorithms for BW removal have been proposed in the literature. In this paper, we propose new methods for this problem that require low computational and memory resources and are thus well suited to implementation on wearable devices.},\n  keywords = {electrocardiography;medical signal detection;patient monitoring;adaptive quadratic regularization;baseline wandering removal;wearable ECG devices;electrocardiogram;physiological signals;patient health status monitoring;ECG acquisition devices;memory capacity;ECG signal;BW removal;memory resources;computational resources;Electrocardiography;Biomedical monitoring;Signal processing algorithms;Europe;Signal processing;Market research;Computational modeling},\n  doi = {10.1109/EUSIPCO.2016.7760542},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252096.pdf},\n}\n\n
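One classical quadratic-regularization formulation estimates the baseline as the minimizer of a least-squares term plus a penalty on second differences, which reduces to a single sparse linear solve. The sketch below shows this fixed-λ core; the paper's adaptive choice of the regularization weight is not reproduced.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def remove_baseline(ecg, lam=1e7):
    """Estimate baseline wander as the minimiser of
        ||ecg - b||^2 + lam * ||D2 b||^2,
    where D2 is the second-difference operator, then subtract it."""
    n = len(ecg)
    D2 = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    baseline = spsolve(sparse.eye(n, format='csc') + lam * (D2.T @ D2), ecg)
    return ecg - baseline, baseline
```

Larger `lam` forces a smoother baseline estimate; the quadratic form keeps the solve linear, which is what makes the approach cheap enough for a wearable target.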
@InProceedings{7760543,\n  author = {P. Ravier and M. Jabloun and M. L. Talbi and R. Parry and E. Lalo and O. Buttelli},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Characterizing Parkinson's disease using EMG fractional linear prediction},\n  year = {2016},\n  pages = {1723-1727},\n  abstract = {In this paper, we propose a modeling technique for surface electromyographic (sEMG) signals based on fractional linear prediction (FLP). To our knowledge, this is the first application of FLP modeling to sEMG data. This study is motivated by the ability of FLP modeling to characterize a waveform with a reduced set of parameters. FLP is applied to real sEMG data recorded from the soleus muscles under walking conditions, and preliminary results are obtained. The dynamics of FLP coefficients for persons suffering from Parkinson's disease (PD) were shown to be lower than those for healthy subjects. This suggests fewer adjustment possibilities in the neuromuscular response of the PD subjects compared to the healthy subjects. Future work includes the evaluation of fractal components of nonstationary EMG data in connection with FLP evolution.},\n  keywords = {diseases;electromyography;fractals;gait analysis;Parkinson disease characterization;EMG fractional linear prediction;surface electromyographic signals;waveform characterization;soleus muscles;walking conditions;neuromuscular response;fractal component evaluation;nonstationary EMG data;Legged locomotion;Data models;Brain modeling;Electromyography;Muscles;Parkinson's disease;Mathematical model;EMG;Fractional linear prediction FLP modeling;Parkinson's disease},\n  doi = {10.1109/EUSIPCO.2016.7760543},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252242.pdf},\n}\n\n
@InProceedings{7760544,\n  author = {S. Schmale and J. Rust and N. Hülsmeier and H. Lange and B. Knoop and S. Paul},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {High throughput architecture for inpainting-based recovery of correlated neural signals},\n  year = {2016},\n  pages = {1728-1732},\n  abstract = {This paper presents the first hardware architecture for compressing and reconstructing correlated neural signals using structure-based inpainting. This novel methodology is especially important for the realization of implantable neural measurement systems (NMS), which are subject to strict constraints in terms of area and energy consumption. Such an implant only requires defined control of the electrode activity to compress neural data. To achieve an efficient implementation with high throughput at the data-recovery stage, approximate computation of arithmetic operations and elementary functions using the logarithmic number system (LNS) is proposed. Because of the digital quantization effects of the LNS conversions, an inherent thresholding operation arises. The proposed hardware realization significantly reduces the required number of inpainting iterations. This inherent zero forcing, in conjunction with the algorithmic error correction, speeds up neural signal recovery, resulting in a throughput of 32,961 parallel reconstructions per second.},\n  keywords = {biomedical electrodes;data compression;medical signal processing;microelectrodes;neurophysiology;prosthetics;signal reconstruction;high throughput architecture;inpainting-based recovery;correlated neural signals;hardware architecture;correlated neural signal compression;correlated neural signal reconstruction;structure-based inpainting;implantable neural measurement systems;NMS;electrode activity;neural data;data recovery;arithmetic operation;elementary functions;logarithmic number system;digital quantization effects;LNS conversion;inherent thresholding operation;hardware realization;inpainting computations;inherent zero forcing;algorithmic error correction;neural signal recovery;parallel reconstructions;Hardware;Electrodes;Implants;Signal processing;Computer architecture;Function approximation;Brain},\n  doi = {10.1109/EUSIPCO.2016.7760544},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252251.pdf},\n}\n\n
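The LNS trick the abstract describes can be illustrated in a few lines: once magnitudes are stored as quantized base-2 logarithms, a multiplication becomes an addition of exponents, and the conversion's quantization acts like an inherent threshold. The word length below is an arbitrary assumption.

```python
import numpy as np

def to_lns(x, frac_bits=8):
    """Sign plus quantised log2 magnitude (fixed-point exponent)."""
    sign = np.sign(x)
    mag = np.round(np.log2(np.abs(x) + 1e-30) * 2 ** frac_bits)
    return sign, mag

def lns_multiply(a, b, frac_bits=8):
    """In the LNS a multiplication is an addition of exponents; the
    quantisation of the conversion behaves like an inherent threshold."""
    sa, ma = to_lns(a, frac_bits)
    sb, mb = to_lns(b, frac_bits)
    return sa * sb * 2.0 ** ((ma + mb) / 2 ** frac_bits)

x = np.array([0.5, -1.25, 3.0])
print(lns_multiply(x, x))   # approximate squares, compare with x * x
```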
@InProceedings{7760545,\n  author = {A. Duprès and F. Cabestaing and J. Rouillard},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Human expert supervised selection of time-frequency intervals in EEG signals for brain-computer interfacing},\n  year = {2016},\n  pages = {1733-1737},\n  abstract = {In the context of brain-computer interfacing based on motor imagery, we propose a method allowing a human expert to supervise the selection of user-specific time-frequency features computed from EEG signals. Indeed, in the current state of BCI research, there is always at least one expert involved in the first stages of any experimentation. On the one hand, such experts appreciate keeping a certain level of control over the tuning of user-specific parameters. On the other hand, we will show that their knowledge is extremely valuable for selecting a sparse set of significant time-frequency features. The expert selects these features through a visual analysis of curves highlighting differences between electroencephalographic activities recorded during the execution of various motor imagery tasks. We compare our method to the basic common spatial patterns approach and to two fully-automatic feature extraction methods, using dataset 2A of BCI competition IV. Our method (mean accuracy m = 83.71 ± 14.6 std) outperforms the best competing method (m = 79.48 ± 12.41 std) for 6 of the 9 subjects.},\n  keywords = {brain-computer interfaces;compressed sensing;electroencephalography;feature extraction;feature selection;medical signal processing;human expert supervised selection;time-frequency interval;electroencephalographic signal;EEG signal;brain-computer interfacing;BCI;feature selection;feature extraction;visual analysis;sparse set;Electroencephalography;Time-frequency analysis;Training;Pipelines;Signal processing;Erbium;Laplace equations;brain-computer interface;EEG signal processing;sparse feature set;feature selection;human expertise},\n  doi = {10.1109/EUSIPCO.2016.7760545},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252262.pdf},\n}\n\n
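For reference, the "basic common spatial patterns approach" used as a baseline can be written compactly as a generalized eigenproblem on the two classes' average covariance matrices, followed by log-variance features; the number of retained filter pairs below is an assumption.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """CSP spatial filters from the generalised eigenproblem
    Ca w = lambda (Ca + Cb) w; trials_* : (n_trials, channels, samples)."""
    Ca = np.mean([np.cov(t) for t in trials_a], axis=0)
    Cb = np.mean([np.cov(t) for t in trials_b], axis=0)
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    keep = np.r_[order[:n_pairs], order[-n_pairs:]]   # both spectrum ends
    return vecs[:, keep].T

def csp_features(trial, W):
    """Normalised log-variance of the spatially filtered trial."""
    v = (W @ trial).var(axis=1)
    return np.log(v / v.sum())
```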
@InProceedings{7760546,\n  author = {D. Block and D. Töws and U. Meier},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Implementation of efficient real-time industrial wireless interference identification algorithms with fuzzified neural networks},\n  year = {2016},\n  pages = {1738-1742},\n  abstract = {Real-time industrial wireless systems sharing a crowded spectrum band require active coexistence management measures. Identification of wireless interference is a key issue for this purpose. We propose an efficient implementation of a wireless interference identification (WII) approach called the neuro-fuzzy signal classifier (NFSC). The implementation in Matlab / SIMULINK is based upon the wideband software defined radio Ettus USRP N210. The implementation is evaluated in six selected heterogeneous and harsh industrial scenarios within the license-free 2.4-GHz-ISM radio band, with various combinations of the standard wireless technologies IEEE 802.11g-based WLAN and Bluetooth. The evaluation of the NFSC was performed with a binary classification test using sensitivity and specificity as statistical metrics.},\n  keywords = {Bluetooth;fuzzy neural nets;radiofrequency interference;signal classification;software radio;wireless LAN;active coexistence management measures;neuro-fuzzy signal classifier;Matlab-Simulink;wideband software defined radio;Ettus USRP N210;heterogeneous industrial scenario;harsh industrial scenario;license-free ISM radio band;standard wireless technology;IEEE 802.11g-based WLAN;Bluetooth;binary classification test;statistical measurement metrics;crowded spectrum band;real-time industrial wireless systems;fuzzified neural networks;real-time industrial wireless interference identification;frequency 2.4 GHz;Wireless communication;Shape;Wireless sensor networks;Bandwidth;Interference;IEEE 802.15 Standard;Feature extraction},\n  doi = {10.1109/EUSIPCO.2016.7760546},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256666.pdf},\n}\n\n
@InProceedings{7760547,\n  author = {C. Dittmar and J. Driedger and M. Müller and J. Paulus},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {An experimental approach to generalized Wiener filtering in music source separation},\n  year = {2016},\n  pages = {1743-1747},\n  abstract = {Music source separation aims at decomposing music recordings into their constituent component signals. Many existing techniques are based on separating a time-frequency representation of the mixture signal by applying suitable modeling techniques in conjunction with generalized Wiener filtering. Recently, the term α-Wiener filtering was coined together with a theoretic foundation for the long-practiced use of magnitude spectrogram estimates in Wiener filtering. So far, optimal values for the magnitude exponent α have been empirically found in oracle experiments regarding the additivity of spectral magnitudes. In the first part of this paper, we extend these previous studies by examining further factors that affect the choice of α. In the second part, we investigate the role of α in Kernel Additive Modeling applied to Harmonic-Percussive Separation. Our results indicate that the parameter α may be understood as a kind of selectivity parameter, which should be chosen in a signal-adaptive fashion.},\n  keywords = {electronic music;signal representation;source separation;Wiener filters;generalized Wiener filtering;music source separation;music recordings;constituent component signals;time-frequency representation;mixture signal;α-Wiener filtering;magnitude spectrogram;oracle experiments;spectral magnitudes;kernel additive modeling;harmonic-percussive separation;signal-adaptive fashion;Harmonic analysis;Source separation;Instruments;Kernel;Spectrogram;Power harmonic filters;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760547},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570245421.pdf},\n}\n\n
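The α-generalization of Wiener filtering amounts to soft masks built from magnitude estimates raised to the exponent α. A minimal sketch, with the per-source magnitude estimates assumed given:

```python
import numpy as np

def alpha_wiener_masks(mag_estimates, alpha=1.0, eps=1e-12):
    """Generalised Wiener masks M_j = |S_j|^alpha / sum_k |S_k|^alpha;
    alpha = 2 is the classic Wiener filter, alpha = 1 corresponds to
    plain magnitude-spectrogram masking."""
    powered = [np.abs(m) ** alpha for m in mag_estimates]
    total = np.sum(powered, axis=0) + eps
    return [p / total for p in powered]

# usage, e.g. for harmonic-percussive separation of a mixture STFT X:
# S_harm = alpha_wiener_masks([harm_mag, perc_mag], alpha=1.2)[0] * X
```

The paper's point is precisely that a single fixed α is suboptimal, so in practice α would be chosen in a signal-adaptive fashion.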
@InProceedings{7760548,\n  author = {A. A. Nugraha and A. Liutkus and E. Vincent},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Multichannel music separation with deep neural networks},\n  year = {2016},\n  pages = {1748-1752},\n  abstract = {This article addresses the problem of multichannel music separation. We propose a framework where the source spectra are estimated using deep neural networks and combined with spatial covariance matrices to encode the source spatial characteristics. The parameters are estimated in an iterative expectation-maximization fashion and used to derive a multichannel Wiener filter. We evaluate the proposed framework for the task of music separation on a large dataset. Experimental results show that the method we describe performs consistently well in separating singing voice and other instruments from realistic musical mixtures.},\n  keywords = {covariance matrices;expectation-maximisation algorithm;music;neural nets;parameter estimation;source separation;Wiener filters;multichannel music separation;deep neural networks;source spectra;spatial covariance matrices;source spatial characteristics;parameter estimation;iterative expectation-maximization;multichannel Wiener filter;singing voice;musical mixtures;Covariance matrices;Spectrogram;Source separation;Training;Signal processing algorithms;Iterative methods;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760548},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252199.pdf},\n}\n\n
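Given DNN-estimated source spectra v_j(f,t) and spatial covariances R_j(f), the multichannel Wiener filter has the standard closed form sketched below; the paper's EM re-estimation loop is not shown, and the diagonal loading term is an assumption for numerical safety.

```python
import numpy as np

def multichannel_wiener(X, v, R, loading=1e-9):
    """Per-bin multichannel Wiener filtering
        S_j(f,t) = v_j(f,t) R_j(f) (sum_k v_k(f,t) R_k(f))^{-1} x(f,t).
    X : (F, T, I) mixture STFT; v : list of (F, T) source PSDs
    (e.g. DNN outputs); R : list of (F, I, I) spatial covariances."""
    F, T, I = X.shape
    out = [np.zeros_like(X) for _ in v]
    for f in range(F):
        for t in range(T):
            Rx = sum(vj[f, t] * Rj[f] for vj, Rj in zip(v, R))
            Rx_inv = np.linalg.inv(Rx + loading * np.eye(I))
            for j, (vj, Rj) in enumerate(zip(v, R)):
                out[j][f, t] = vj[f, t] * (Rj[f] @ Rx_inv @ X[f, t])
    return out
```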
@InProceedings{7760549,\n  author = {T. Sgouros and N. Mitianoudis},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Underdetermined source separation using a sparse STFT framework and weighted Laplacian directional modelling},\n  year = {2016},\n  pages = {1753-1757},\n  abstract = {The instantaneous underdetermined audio source separation problem of the K-sensors, L-sources mixing scenario (where K < L) has been addressed by many different approaches, provided the sources remain quite distinct in the virtual positioning space spanned by the sensors. This problem can be tackled as a directional clustering problem along the source position angles in the mixture. The use of Generalised Directional Laplacian Densities (DLD) in the MDCT domain for underdetermined source separation has been proposed before. Here, we derive weighted mixtures of DLDs in a sparser representation of the data in the STFT domain to perform separation. The proposed approach yields improved results compared to our previous offering and compares favourably with the state-of-the-art.},\n  keywords = {Laplace transforms;source separation;sparse STFT framework;weighted Laplacian directional modelling;underdetermined audio source separation problem;directional clustering problem;generalised directional Laplacian densities;DLD;MDCT;Time-frequency analysis;Signal processing algorithms;Source separation;Laplace equations;Covariance matrices;Europe;Underdetermined Audio Source Separation;Weighted Directional Mixture Models},\n  doi = {10.1109/EUSIPCO.2016.7760549},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255562.pdf},\n}\n\n
@InProceedings{7760550,\n  author = {E. Cano and D. FitzGerald and K. Brandenburg},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Evaluation of quality of sound source separation algorithms: Human perception vs quantitative metrics},\n  year = {2016},\n  pages = {1758-1762},\n  abstract = {In this paper we examine test methods for evaluating the quality of audio separation algorithms. Specifically, we try to correlate the results of listening tests with state-of-the-art objective measures. To this end, the quality of the harmonic signals obtained with two harmonic-percussive separation algorithms was evaluated with BSS_Eval, PEASS and via listening tests. A correlation analysis was conducted, and the results show that, for harmonic-percussive separation algorithms, neither BSS_Eval nor PEASS shows strong correlation with the ratings obtained via listening tests, suggesting that existing perceptual objective measures for quality assessment do not generalize well to different separation algorithms.},\n  keywords = {audio signal processing;blind source separation;correlation methods;hearing;sound source separation algorithms;human perception;quantitative metrics;audio separation algorithms;listening tests;harmonic signals;harmonic-percussive separation algorithms;BSS_Eval;PEASS;correlation analysis;Correlation;Harmonic analysis;Signal processing algorithms;Distortion;Measurement;Algorithm design and analysis;Interference},\n  doi = {10.1109/EUSIPCO.2016.7760550},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256073.pdf},\n}\n\n
@InProceedings{7760551,\n  author = {A. J. R. Simpson and G. Roma and E. M. Grais and R. D. Mason and C. Hummersone and A. Liutkus and M. D. Plumbley},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Evaluation of audio source separation models using hypothesis-driven non-parametric statistical methods},\n  year = {2016},\n  pages = {1763-1767},\n  abstract = {Audio source separation models are typically evaluated using objective separation quality measures, but rigorous statistical methods have yet to be applied to the problem of model comparison. As a result, it can be difficult to establish whether or not reliable progress is being made during the development of new models. In this paper, we provide a hypothesis-driven statistical analysis of the results of the recent source separation SiSEC challenge involving twelve competing models tested on separation of voice and accompaniment from fifty pieces of “professionally produced” contemporary music. Using non-parametric statistics, we establish reliable evidence for meaningful conclusions about the performance of the various models.},\n  keywords = {audio signal processing;nonparametric statistics;source separation;statistical analysis;professionally produced contemporary music;hypothesis-driven statistical analysis;objective separation quality measurement;hypothesis-driven nonparametric statistical method;audio source separation model evaluation;Source separation;Data models;Statistical analysis;Europe;Reliability;Analytical models;Audio source separation;BSSeval;SiSEC;Hypothesis test},\n  doi = {10.1109/EUSIPCO.2016.7760551},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256231.pdf},\n}\n\n
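A paired non-parametric test of the kind the paper advocates can be run directly with SciPy; the per-song SDR numbers below are synthetic placeholders, and with more than two models one would add a multiple-comparison correction (e.g. Holm) on top.

```python
import numpy as np
from scipy.stats import wilcoxon

# per-song SDR scores of two models on the same 50 songs (synthetic here)
rng = np.random.default_rng(0)
sdr_a = rng.normal(5.0, 1.0, 50)
sdr_b = sdr_a - rng.normal(0.3, 0.5, 50)

stat, p = wilcoxon(sdr_a, sdr_b)        # paired, non-parametric test
print(f"median paired difference: {np.median(sdr_a - sdr_b):.2f} dB")
print(f"Wilcoxon signed-rank p-value: {p:.4f}")
```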
@InProceedings{7760552,\n  author = {S. {Van Vaerenbergh} and L. A. Azpicueta-Ruiz and D. Comminiello},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {A split kernel adaptive filtering architecture for nonlinear acoustic echo cancellation},\n  year = {2016},\n  pages = {1768-1772},\n  abstract = {We propose a new linear-in-the-parameters (LIP) nonlinear filter based on kernel methods to address the problem of nonlinear acoustic echo cancellation (NAEC). For this purpose we define a framework based on a parallel scheme in which any kernel-based adaptive filter (KAF) can be incorporated efficiently. This structure is composed of a classic adaptive filter on one branch, committed to estimating the linear part of the echo path, and a kernel adaptive filter on the other branch, to model the nonlinearities arising in the echo path. In addition, we propose a novel low-complexity least mean square (LMS) KAF with very few parameters, to be used in the parallel architecture. Finally, we demonstrate the effectiveness of the proposed scheme in real NAEC scenarios, for different choices of the KAF.},\n  keywords = {acoustic signal processing;adaptive filters;echo suppression;least mean squares methods;nonlinear filters;LMS KAF;low-complexity least mean square KAF;nonlinearity rebounding;parallel scheme;NAEC;kernel method;linear-in-the-parameter nonlinear filter;nonlinear acoustic echo cancellation;split kernel adaptive filtering architecture;Kernel;Signal processing algorithms;Dictionaries;Adaptation models;Mathematical model;Nonlinear distortion},\n  doi = {10.1109/EUSIPCO.2016.7760552},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256300.pdf},\n}\n\n
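The kernel branch can be as simple as a kernel LMS with a Gaussian kernel; the naive sketch below adds every input as a dictionary centre, whereas the paper's low-complexity variant keeps very few parameters, so this is only the textbook skeleton.

```python
import numpy as np

class KLMS:
    """Kernel least-mean-square filter with a Gaussian kernel and naive
    dictionary growth (every input becomes a centre)."""
    def __init__(self, eta=0.2, gamma=1.0):
        self.eta, self.gamma = eta, gamma
        self.centres, self.alphas = [], []

    def _k(self, u, v):
        return np.exp(-self.gamma * np.sum((u - v) ** 2))

    def predict(self, u):
        return sum(a * self._k(c, u)
                   for a, c in zip(self.alphas, self.centres))

    def update(self, u, d):
        e = d - self.predict(u)              # a-priori error
        self.centres.append(np.asarray(u, dtype=float))
        self.alphas.append(self.eta * e)     # LMS-style coefficient
        return e
```

In the parallel NAEC structure this branch would be driven by the far-end signal alongside a linear adaptive filter, with the two branch outputs summed to form the echo estimate.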
@InProceedings{7760553,
  author = {L. Fuster and M. Ferrer and M. {de Diego} and A. Gonzalez},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Combination of filtered-x adaptive filters for nonlinear listening-room compensation},
  year = {2016},
  pages = {1773-1777},
  abstract = {Audio quality in sound reproduction systems can be severely degraded due to system nonlinearities and reverberation effects. In this context, linearization of loudspeakers has been deeply investigated but its combination with room equalization is not straightforward, mainly when the nonlinearities present memory. In this paper, a method relying on the convex combination of two linear filters using the filtered-x LMS (FXLMS) algorithm and based on the virtual path concept to preprocess audio signals is presented for nonlinear room compensation. It is shown that the combination of two linear adaptive filters behaves similarly to the filtered-x second-order adaptive Volterra (NFXLMS) filter. Moreover the new approach is computationally more efficient and avoids the generation of higher harmonics. Experimental results validate the performance of the new approach.},
  keywords = {adaptive filters;audio signal processing;nonlinear filters;reverberation;filtered-x adaptive filters;nonlinear listening-room compensation;audio quality;sound reproduction systems;system nonlinearities;reverberation effects;linear adaptive filters;filtered-x LMS algorithm;FXLMS algorithm;virtual path concept;audio signal preprocessing;filtered-x second-order adaptive Volterra filter;NFXLMS filter;linear filters;Adaptive systems;Kernel;Signal processing algorithms;Loudspeakers;Nonlinear acoustics;Adaptive equalizers;Europe},
  doi = {10.1109/EUSIPCO.2016.7760553},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255797.pdf},
}
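
Annotation: the building block combined here is the classic single-channel FXLMS recursion. A minimal sketch, assuming a known secondary-path estimate s_hat and forming the error directly from the filter output (a real system measures the error after the physical secondary path); this is the textbook algorithm, not the paper's convex combination or virtual-path scheme:

    import numpy as np

    def fxlms(x, d, s_hat, L=32, mu=1e-3):
        """Textbook filtered-x LMS (generic sketch).
        x: reference signal, d: signal to be matched at the error
        sensor, s_hat: secondary-path impulse-response estimate."""
        w = np.zeros(L)
        xf = np.convolve(x, s_hat)[:len(x)]   # filtered reference
        e = np.zeros(len(x))
        for n in range(L, len(x)):
            xv = x[n-L+1:n+1][::-1]           # reference regressor
            xfv = xf[n-L+1:n+1][::-1]         # filtered-x regressor
            y = np.dot(w, xv)                 # adaptive filter output
            e[n] = d[n] - y                   # idealized error signal
            w += mu * e[n] * xfv              # FXLMS weight update
        return w, e
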
@InProceedings{7760554,
  author = {E. L. {Ortiz Batista} and R. Seara},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A novel reduced-rank approach for implementing Volterra filters},
  year = {2016},
  pages = {1778-1782},
  abstract = {This paper presents a novel reduced-rank approach for implementing Volterra filters with reduced complexity. Such an approach is based on the application of the singular value decomposition to a new form of coefficient matrix obtained by exploiting the representation based on diagonal coordinates of the Volterra kernels. The result is a parallel structure of extended Hammerstein models in which each branch is related to one of the singular values of the coefficient matrix. Then, removing the branches related to the smallest singular values, an effective reduced-complexity Volterra implementation is obtained. Simulation results are presented to confirm the effectiveness of the proposed approach.},
  keywords = {matrix algebra;nonlinear filters;signal representation;singular value decomposition;reduced-rank approach;Volterra filters;singular value decomposition;coefficient matrix;Volterra kernels;extended Hammerstein models;Kernel;Matrix decomposition;Periodic structures;Singular value decomposition;Complexity theory;Finite impulse response filters;Linear-in-the-parameters filters;reduced-rank implementation;Volterra filters},
  doi = {10.1109/EUSIPCO.2016.7760554},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256160.pdf},
}
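
Annotation: the core numerical operation behind the branch pruning is an ordinary truncated SVD of a kernel coefficient matrix. A minimal sketch on a generic matrix H (the name and the rank choice are mine, not the paper's diagonal-coordinate construction):

    import numpy as np

    def reduced_rank(H, r):
        """Rank-r truncated-SVD approximation of a coefficient matrix H.
        Keeping only the r largest singular values corresponds to keeping
        r parallel branches in the reduced-complexity structure."""
        U, s, Vt = np.linalg.svd(H)
        return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
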
@InProceedings{7760555,
  author = {C. Hofmann and M. Guenther and C. Huemmer and W. Kellermann},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Efficient nonlinear acoustic echo cancellation by partitioned-block Significance-Aware Hammerstein Group Models},
  year = {2016},
  pages = {1783-1787},
  abstract = {A powerful and efficient model for nonlinear echo paths of hands-free communication systems is given by the recently proposed Significance-Aware Hammerstein Group Model (SA-HGM). Such a model learns memoryless loudspeaker nonlinearities on a small temporal support of the echo path (preferably the direct-sound region) and extrapolates the nonlinearities for the entire echo path afterwards. In this contribution, an efficient frequency-domain realization of the significance-aware concept for nonlinear acoustic echo cancellation is proposed. The proposed method exploits the benefits of partitioned-block frequency-domain adaptive filtering and will therefore be referred to as Partitioned-Block Significance-Aware Hammerstein Group Model (PBSA-HGM). This allows to efficiently model a long nonlinear echo path by a linear partitioned-block frequency-domain adaptive filter after a parametric memoryless nonlinear preprocessor, the parameters of which are estimated via a nonlinear Hammerstein Group Model (HGM) with the short temporal support of a single block only.},
  keywords = {adaptive filters;echo suppression;group theory;loudspeakers;efficient nonlinear acoustic echo cancellation;partitioned-block Hammerstein group models;significance-aware Hammerstein group models;nonlinear echo paths;hands-free communication systems;memoryless loudspeaker nonlinearities;small temporal support;frequency-domain adaptive filtering;linear partitioned-block adaptive filter;parametric memoryless nonlinear preprocessor;nonlinear Hammerstein group model;Frequency-domain analysis;Adaptation models;Discrete Fourier transforms;Convolution;Kernel;Loudspeakers;Atmospheric modeling},
  doi = {10.1109/EUSIPCO.2016.7760555},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256158.pdf},
}
@InProceedings{7760556,
  author = {A. Carini and L. Romoli and S. Cecchi and S. Orcioni},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Perfect periodic sequences for nonlinear Wiener filters},
  year = {2016},
  pages = {1788-1792},
  abstract = {A periodic sequence is defined as a perfect periodic sequence for a certain nonlinear filter if the cross-correlation between any two of the filter basis functions, estimated over a period, is zero. Using a perfect periodic sequence as input signal, an unknown nonlinear system can be efficiently identified with the cross-correlation method. Moreover, the basis functions that guarantee the most compact representation according to some information criterion can also be easily estimated. Perfect periodic sequences have already been developed for even mirror Fourier, Legendre and Chebyshev nonlinear filters. In this paper, we show they can be developed also for nonlinear Wiener filters. Their development is non-trivial and differs from that of the other nonlinear filters, since Wiener filters have orthogonal basis functions for white Gaussian input signals. Experimental results highlight the usefulness of the proposed perfect periodic sequences in comparison with the Gaussian input signals commonly used for Wiener filter identification.},
  keywords = {correlation methods;Gaussian processes;nonlinear filters;sequences;Wiener filters;perfect periodic sequences;cross-correlation method;information criterion;nonlinear Wiener filters;orthogonal basis functions;white Gaussian input signals;Nonlinear systems;Newton method;Europe;Signal processing;Correlation;Nonlinear equations;Wiener filters;Perfect periodic sequences;nonlinear Wiener filters;cross-correlation method},
  doi = {10.1109/EUSIPCO.2016.7760556},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255690.pdf},
}
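
Annotation: the cross-correlation method is easiest to see in its linear special case. A minimal sketch, assuming x is one period of a "perfect" input (periodic autocorrelation equal to an impulse) and y is one period of the steady-state system output; the nonlinear Wiener-basis version in the paper generalizes this per basis function:

    import numpy as np

    def xcorr_identify(x, y, L):
        """Estimate the first L taps of a linear FIR system by periodic
        cross-correlation over one period. Valid when the periodic
        autocorrelation of x is zero for all non-zero lags."""
        h = np.array([np.dot(y, np.roll(x, k)) for k in range(L)])
        return h / np.dot(x, x)               # normalize by R_xx[0]
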
@InProceedings{7760557,
  author = {B. Ying and K. Yuan and A. H. Sayed},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Online dual coordinate ascent learning},
  year = {2016},
  pages = {1793-1797},
  abstract = {The stochastic dual coordinate-ascent (S-DCA) technique is a useful alternative to the traditional stochastic gradient-descent algorithm for solving large-scale optimization problems due to its scalability to large data sets and strong theoretical guarantees. However, the available S-DCA formulation is limited to finite sample sizes and relies on performing multiple passes over the same data. This formulation is not well-suited for online implementations where data keep streaming in. In this work, we develop an online dual coordinate-ascent (O-DCA) algorithm that is able to respond to streaming data and does not need to revisit the past data. This feature infuses the resulting construction with continuous adaptation, learning, and tracking abilities, which are particularly useful for online learning scenarios.},
  keywords = {gradient methods;learning (artificial intelligence);stochastic processes;stochastic dual coordinate-ascent formulation;S-DCA formulation;stochastic gradient-descent algorithm;large-scale optimization problem;online dual coordinate-ascent learning algorithm;O-DCA learning algorithm;Signal processing algorithms;Europe;Signal processing;Indexes;Steady-state;Convergence;Electrical engineering;Online algorithm;dual coordinate-ascent;stochastic gradient-descent;stochastic proximal gradient;adaptation;learning;support-vector machine},
  doi = {10.1109/EUSIPCO.2016.7760557},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255969.pdf},
}
@InProceedings{7760558,
  author = {H. Yazdanpanah and P. S. R. Diniz and M. V. S. Lima},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A simple set-membership affine projection algorithm for sparse system modeling},
  year = {2016},
  pages = {1798-1802},
  abstract = {In this paper, we derive two algorithms, namely the Simple Set-Membership Affine Projection (S-SM-AP) and the improved S-SM-AP (IS-SM-AP), in order to exploit the sparsity of an unknown system while focusing on having low computational complexity. To achieve this goal, the proposed algorithms apply a discard function on the weight vector to disregard the coefficients close to zero during the update process. In addition, the IS-SM-AP algorithm reduces the overall number of computations required by the adaptive filter even further by replacing small coefficients with zero. Simulation results show similar performance when comparing the proposed algorithm with some existing state-of-the-art sparsity-aware algorithms while the proposed algorithms require lower computational complexity.},
  keywords = {adaptive filters;computational complexity;vectors;simple set-membership affine projection algorithm;sparse system modeling;improved S-SM-AP;computational complexity;discard function;weight vector;IS-SM-AP algorithm;adaptive filter;sparsity-aware algorithms;Signal processing algorithms;Computational complexity;Adaptation models;Approximation algorithms;Estimation error;Europe;Signal processing;adaptive filtering;set-membership filtering;sparsity;discard function;computational complexity},
  doi = {10.1109/EUSIPCO.2016.7760558},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255973.pdf},
}
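
Annotation: a loose illustration of the two ingredients named in the abstract, using the simpler set-membership NLMS recursion (projection order one) rather than the paper's S-SM-AP, with a crude hard-threshold standing in for its discard function. All thresholds are illustrative:

    import numpy as np

    def sm_nlms_discard(x, d, L=16, gamma=0.01, tau=1e-3):
        """Set-membership NLMS with hard-thresholding of tiny weights.
        gamma: error-bound defining the constraint set,
        tau: discard threshold for near-zero coefficients."""
        w = np.zeros(L)
        for n in range(L, len(d)):
            u = x[n-L+1:n+1][::-1]
            e = d[n] - np.dot(w, u)
            if abs(e) > gamma:                 # update only when the error
                mu = 1 - gamma / abs(e)        # leaves the constraint set
                w += mu * e * u / (np.dot(u, u) + 1e-12)
            w[np.abs(w) < tau] = 0.0           # "discard" tiny coefficients
        return w
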
@InProceedings{7760559,
  author = {S. Ciochină and C. Paleologu and J. Benesty and S. L. Grant and A. Anghel},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A family of optimized LMS-based algorithms for system identification},
  year = {2016},
  pages = {1803-1807},
  abstract = {The performance of the least-mean-square (LMS) algorithm is governed by its step-size parameter. In this paper, we present a family of optimized LMS-based algorithms (in terms of the step-size control), in the context of system identification. A time-variant system model is considered and the optimization criterion is based on the minimization of the system misalignment. Simulations performed in the context of acoustic echo cancellation indicate that these algorithms achieve a proper compromise in terms of fast convergence/tracking and low misadjustment.},
  keywords = {echo suppression;least mean squares methods;optimized LMS-based algorithms;system identification;least-mean-square algorithm;step-size control;optimization criterion;time-variant system model;system misalignment;acoustic echo cancellation;convergence-tracking;Signal processing algorithms;Adaptation models;Echo cancellers;Context;Minimization;Algorithm design and analysis},
  doi = {10.1109/EUSIPCO.2016.7760559},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256094.pdf},
}
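
Annotation: the baseline recursion whose step size the paper optimizes is the plain (N)LMS identification loop; a fixed mu trades convergence speed against misadjustment, which is what a derived time-varying step size improves on. A minimal fixed-step sketch (generic, not the paper's optimized rule):

    import numpy as np

    def nlms(x, d, L=16, mu=0.5, eps=1e-8):
        """Fixed-step NLMS system identification.
        Returns the weight estimate and the squared-error learning curve."""
        w = np.zeros(L)
        err2 = np.zeros(len(d))
        for n in range(L, len(d)):
            u = x[n-L+1:n+1][::-1]            # input regressor
            e = d[n] - np.dot(w, u)           # a-priori error
            w += mu * e * u / (np.dot(u, u) + eps)
            err2[n] = e**2
        return w, err2
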
@InProceedings{7760560,
  author = {Y. Shen and G. B. Giannakis},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Online dictionary learning from large-scale binary data},
  year = {2016},
  pages = {1808-1812},
  abstract = {Compressive sensing (CS) has been shown useful for reducing dimensionality, by exploiting signal sparsity inherent to specific domain representations of data. Traditional CS approaches represent the signal as a sparse linear combination of basis vectors from a prescribed dictionary. However, it is often impractical to presume accurate knowledge of the basis, which motivates data-driven dictionary learning. Moreover, in large-scale settings one may only afford to acquire quantized measurements, which may arrive sequentially in a streaming fashion. The present paper jointly learns the sparse signal representation and the unknown dictionary when only binary streaming measurements with possible misses are available. To this end, a novel efficient online estimator with closed-form sequential updates is put forth to recover the sparse representation, while refining the dictionary `on the fly'. Numerical tests on simulated and real data corroborate the efficacy of the novel approach.},
  keywords = {compressed sensing;signal representation;vectors;streaming;online dictionary learning;closed-form sequential updates;online estimator;sparse signal representation;basis vectors;sparse linear combination;CS;compressive sensing;binary data;Dictionaries;Signal processing algorithms;Convergence;Algorithm design and analysis;Europe;Signal processing;Minimization;dictionary learning;binary data;online learning},
  doi = {10.1109/EUSIPCO.2016.7760560},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256646.pdf},
}
@InProceedings{7760561,
  author = {P. L. Combettes and J. Pesquet},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Stochastic forward-backward and primal-dual approximation algorithms with application to online image restoration},
  year = {2016},
  pages = {1813-1817},
  abstract = {Stochastic approximation techniques have been used in various contexts in data science. We propose a stochastic version of the forward-backward algorithm for minimizing the sum of two convex functions, one of which is not necessarily smooth. Our framework can handle stochastic approximations of the gradient of the smooth function and allows for stochastic errors in the evaluation of the proximity operator of the nonsmooth function. The almost sure convergence of the iterates generated by the algorithm to a minimizer is established under relatively mild assumptions. We also propose a stochastic version of a popular primal-dual proximal splitting algorithm, establish its convergence, and apply it to an online image restoration problem.},
  keywords = {approximation theory;convex programming;gradient methods;image restoration;iterative methods;minimisation;stochastic programming;stochastic forward-backward algorithm;primal-dual approximation algorithm;online image restoration;data science;two convex function sum minimization;smooth function gradient;nonsmooth function proximity operator evaluation;primal-dual proximal splitting algorithm;Signal processing algorithms;Stochastic processes;Random variables;Signal processing;Convergence;Convex functions;Europe;convex optimization;nonsmooth optimization;primal-dual algorithm;stochastic algorithm;parallel algorithm;proximity operator;recovery;image restoration},
  doi = {10.1109/EUSIPCO.2016.7760561},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256302.pdf},
}
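
Annotation: the deterministic template the paper makes stochastic is forward-backward splitting for a smooth-plus-nonsmooth sum. A minimal sketch for the lasso instance, min_x 0.5*||Ax-b||^2 + lam*||x||_1, where the backward step is the soft-thresholding proximity operator (generic illustration, not the paper's stochastic or primal-dual variants):

    import numpy as np

    def soft_threshold(v, t):
        """Proximity operator of t * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def forward_backward(A, b, lam, n_iter=200):
        """Forward-backward splitting (proximal gradient) for the lasso.
        Step size set from the Lipschitz constant ||A||^2 of the gradient."""
        gamma = 1.0 / np.linalg.norm(A, 2)**2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)                       # forward step
            x = soft_threshold(x - gamma * grad, gamma * lam)  # backward step
        return x
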
@InProceedings{7760562,
  author = {M. Ammad-udin and A. Mansour and D. {Le Jeune} and E. H. M. Aggoune and M. Ayaz},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {UAV routing protocol for crop health management},
  year = {2016},
  pages = {1818-1822},
  abstract = {Wireless sensor networks are now a credible means for crop data collection. The installation of a fixed communication structure to relay the monitored data from the cluster head to its final destination can either be impractical because of land topology or prohibitive due to high initial cost. A plausible solution is to use Unmanned Aerial Vehicles (UAV) as an alternative means for both data collection and limited supervisory control of sensors status. In this paper, we consider the case of disjoint farming parcels each including clusters of sensors, organized in a predetermined way according to farming objectives. This research focuses to drive an optimal solution for UAV search and data gathering from all sensors installed in a crop field. Furthermore, the sensor routing protocol will take into account a tradeoff between energy management and data dissemination overhead. The proposed system is evaluated by using a simulated model and it should find out a class among all under consideration.},
  keywords = {agricultural engineering;autonomous aerial vehicles;crops;data acquisition;energy conservation;information dissemination;routing protocols;wireless sensor networks;UAV routing protocol;crop health management;wireless sensor networks;crop data collection;communication structure;cluster head;land topology;unmanned aerial vehicles;disjoint farming parcels;UAV search;data gathering;sensor routing protocol;energy management;data dissemination overhead;Agriculture;Mobile communication;Sensor phenomena and characterization;Data collection;Wireless sensor networks;Unmanned aerial vehicles;Smart farming;Routing protocol;Precision agriculture;Data gathering;Wireless sensor network;UAV},
  doi = {10.1109/EUSIPCO.2016.7760562},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251966.pdf},
}
@InProceedings{7760563,
  author = {S. Sthapit and J. R. Hopgood and N. M. Robertson and J. Thompson},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Offloading to neighbouring nodes in smart camera network},
  year = {2016},
  pages = {1823-1827},
  abstract = {Mobile Cloud Computing refers to offloading computationally intensive algorithms from a mobile device to a cloud in order to save resources (time and energy) in the mobile device. But when the connection to the cloud is non-existent or limited, as in battle-space scenarios, exploiting neighbouring devices could be an alternative. In this paper we have developed a framework to offload computationally intensive algorithms to neighbours in order to minimise the algorithm completion time. We propose resource allocation algorithms to maximize the performance of these systems in real-time computer vision applications (drop less targets). Results show significant performance improvement at the cost of using some extra energy resource. Finally we define a new performance metric which also incorporates the energy consumed and is used to compare the offloading algorithms.},
  keywords = {cameras;cloud computing;computer vision;minimisation;mobile computing;smart phones;smart camera network;neighbouring node;mobile cloud computing;mobile device;battle-space scenario;algorithm completion time minimisation;resource allocation algorithm;real-time computer vision application;Cloud computing;Signal processing algorithms;IEEE 802.11 Standard;Clocks;Energy consumption;Approximation algorithms;Central Processing Unit;Offloading;Mobile Cloud Computing;Energy},
  doi = {10.1109/EUSIPCO.2016.7760563},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252313.pdf},
}
@InProceedings{7760564,
  author = {B. A. Jebur and C. C. Tsimenidis and M. Johnston and J. Chambers},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Outage probability of an AF full-duplex physical-layer network coding system},
  year = {2016},
  pages = {1828-1832},
  abstract = {In this paper, we investigate the performance of a full-duplex (FD) physical-layer network coding (FD-PLNC) scheme using amplify-and-forward (AF) relaying and orthogonal frequency-division multiplexing (OFDM) for a two-way relay channel (TWRC) network over reciprocal, asymmetric, and frequency-selective Rayleigh fading channels. Furthermore, the proposed system is integrated with a self-interference cancellation (SIC) scheme to effectively reduce the self-interference (SI). Moreover, the impact of the residual SI on the performance of the proposed AF-FD-PLNC is examined. Closed-form expressions for the distribution of the end-to-end (E2E) signal to interference and noise ratio (SINR) and the outage probability are derived and presented. Furthermore, the analytical outage probability results are validated by simulation studies. The results confirm the feasibility of the proposed AF-FD-PLNC and its capability to double the throughput of conventional amplify-and-forward half-duplex physical-layer network coding (AF-HD-PLNC).},
  keywords = {amplify and forward communication;interference suppression;network coding;OFDM modulation;probability;Rayleigh channels;relay networks (telecommunication);telecommunication network reliability;outage probability;amplify and forward half-duplex physical-layer network coding system;AF-FD-PLNC scheme;orthogonal frequency-division multiplexing;OFDM;two-way relay channel network;TWRC network;frequency-selective Rayleigh fading channels;self interference cancellation scheme;SIC scheme;end-to-end signal distribution;E2E signal distribution;SINR;Interference cancellation;Relays;Signal to noise ratio;OFDM;Network coding;Full-duplex physical-layer network coding (FD-PLNC);orthogonal frequency-division multiplexing (OFDM);amplify-and-forward (AF);self-interference cancellation (SIC);amplify-and-forward half-duplex physical-layer network coding (AF-HD-PLNC)},
  doi = {10.1109/EUSIPCO.2016.7760564},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255524.pdf},
}
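
Annotation: the validation methodology (analytical outage probability checked against simulation) is easy to illustrate on the simplest case, a single Rayleigh link whose instantaneous SNR is exponentially distributed, where the closed form is P_out = 1 - exp(-g_th/g_mean). The paper's E2E SINR of the AF-FD-PLNC system has a more involved distribution; this sketch only shows the Monte-Carlo-versus-closed-form comparison pattern:

    import numpy as np

    def outage_rayleigh(snr_mean_db, snr_th_db, n=10**6, seed=0):
        """Return (Monte Carlo, closed-form) outage probability
        P(SNR < threshold) for one Rayleigh-faded link."""
        g_mean = 10**(snr_mean_db / 10)       # average SNR (linear)
        g_th = 10**(snr_th_db / 10)           # outage threshold (linear)
        rng = np.random.default_rng(seed)
        snr = rng.exponential(g_mean, size=n) # instantaneous SNR samples
        return (snr < g_th).mean(), 1 - np.exp(-g_th / g_mean)
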
@InProceedings{7760565,
  author = {S. Berri and S. Lasaulce and M. S. Radjef},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Power control with partial observation in wireless ad hoc networks},
  year = {2016},
  pages = {1833-1837},
  abstract = {In this paper, the well-known forwarder's dilemma is generalized by accounting for the presence of link quality fluctuations; the forwarder's dilemma is a four-node interaction model with two source nodes and two destination nodes. It is known to be very useful to study ad hoc networks. To characterize the long-term utility region when the source nodes have to control their power with partial channel state information (CSI), we resort to a recent result in Shannon theory. It is shown how to exploit this theoretical result to find the long-term utility region and determine good power control policies. This region is of prime importance since it provides the best performance possible for a given knowledge at the nodes. Numerical results provide several new insights into the repeated forwarder's dilemma power control problem; for instance, the knowledge of global CSI only brings a marginal performance improvement with respect to the local CSI case.},
  keywords = {ad hoc networks;power control;wireless channels;partial observation;wireless ad hoc network;link quality fluctuation;four-node interaction model;long-term utility region;partial channel state information;CSI;Shannon theory;forwarder dilemma power control problem;Power control;Ad hoc networks;Games;Relays;Wireless communication;Numerical models;Europe},
  doi = {10.1109/EUSIPCO.2016.7760565},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256272.pdf},
}
@InProceedings{7760566,
  author = {X. Li and Y. Ren and H. Gao and T. Lv},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Opportunistic interference alignment approach in device-to-device communications underlaying cellular networks},
  year = {2016},
  pages = {1838-1842},
  abstract = {We consider Device-to-Device (D2D) communications underlaying cellular networks with multiple cellular users (CUs) and multi-pair direct D2D users (DUs). In this considered scenario, we propose a novel opportunistic interference alignment (OIA) scheme to improve the sum rate of the whole networks. DUs are opportunistically selected based on the reference signal space (RSS) of the CUs. According to the RSS, the DUs can calculate the scheduling metric which represents the strength of D2D interference to CUs. Then, we select the DUs according to the computing result. Unlike most of the existing methods, the proposed OIA scheme relies on the local channel state information, which is more realistic in practice. In addition, in view of the angle between the interference caused by each DU and the RSS, several scheduling strategies are designed to enhance the performance. Numerical results show that the proposed scheme can improve the performance of the network.},
  keywords = {cellular radio;multi-access systems;radiofrequency interference;telecommunication scheduling;wireless channels;device-to-device communication;cellular network;D2D communication;multiple cellular user;CU;multipair direct user;DU;opportunistic interference alignment scheme;OIA scheme;sum rate improvement;reference signal space;RSS;D2D interference strength;local channel state information;scheduling strategy;Device-to-device communication;Interference;Cellular networks;Copper;Receivers;Measurement;Transmitters},
  doi = {10.1109/EUSIPCO.2016.7760566},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570258626.pdf},
}
@InProceedings{7760567,
  author = {S. Liebich and C. Anemüller and P. Vary and P. Jax and D. Rüschen and S. Leonhardt},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Active noise cancellation in headphones by digital robust feedback control},
  year = {2016},
  pages = {1843-1847},
  abstract = {Active Noise Cancellation (ANC) may complement passive insulation of headphones by actively cancelling low frequency components of acoustical background noise. In systems with a single error microphone pointing towards the ear canal, a feedback controller performs the compensation task. We are focusing on fixed feedback controllers for broadband attenuation of arbitrary ambient noise. We use methods and optimization routines from control theory. In this discipline the key element is the so-called controller, which is in terms of signal processing a (digital) filter. The controller is designed by an optimization approach called the mixed-sensitivity H∞ synthesis, which requires an accurate estimate of the secondary path between the cancelling loudspeaker and the error microphone, and the knowledge of the secondary path uncertainties as well as a specification of the closed-loop sensitivity. The advantage of this method is the convenient formulation of performance and uncertainty requirements in the frequency domain. We describe the design process and evaluate the controller, which is realized in state space form, within a real-time system. The real-time measurements show a good match with the expected behavior. They furthermore confirm the feasibility of broadband attenuation by fixed i.e. time invariant feedback controllers in a digital system. The novelty of this contribution comprises of the specific design process of a discrete robust feedback controller for broadband noise reduction (roughly 250 Hz) and the digital real-time system implementation.},
  keywords = {active noise control;closed loop systems;digital control;discrete systems;headphones;optimal control;optimisation;robust control;digital real time system;broadband noise reduction;discrete robust feedback control;closed loop sensitivity;secondary path uncertainties;error microphone;mixed sensitivity H∞ synthesis;control theory;optimization routines;digital robust feedback control;headphone active noise cancellation;Uncertainty;Sensitivity;Headphones;Attenuation;Optimization;Frequency measurement;Adaptive control},
  doi = {10.1109/EUSIPCO.2016.7760567},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251613.pdf},
}
@InProceedings{7760568,
  author = {G. Dehner and I. Dehner and R. Rabenstein and M. Schäfer and C. Strobl},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Analysis of the quantization error in digital multipliers with small wordlength},
  year = {2016},
  pages = {1848-1852},
  abstract = {The analysis of the quantization error in fixed-point arithmetic is usually based on simplifying assumptions. The quantization error is modelled as a random variable which is independent of the quantized variable. This contribution investigates the wordlength reduction of a digital multiplier in greater detail. The power spectrum of the quantization is expressed by the power spectrum of the multiplier input. The analytical results agree with measurements of the quantization error. The presented error model is shown to be superior to the simplified one for wordlengths in the range of eight bit.},
  keywords = {fixed point arithmetic;quantisation (signal);power spectrum;wordlength reduction;quantized variable;random variable;fixed-point arithmetic;small wordlength;digital multipliers;quantization error;Quantization (signal);Correlation;Europe;Analytical models;Indexes;Gaussian distribution;Finite wordlength effects;error analysis;quantization;digital arithmetic;Gaussian processes},
  doi = {10.1109/EUSIPCO.2016.7760568},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251794.pdf},
}
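
Annotation: the quantity under study is straightforward to measure empirically. A minimal sketch that rounds a multiplier output to a given number of fractional bits and returns the error sequence, whose spectrum and correlation with the input can then be compared against the classical white, input-independent model (variance q^2/12) that the paper shows breaking down around eight bits:

    import numpy as np

    def multiplier_quantization_error(x, c, bits):
        """Quantize the product c*x to `bits` fractional bits
        (round to nearest) and return the quantization error."""
        q = 2.0**(-bits)                  # quantization step
        y = c * x                         # ideal product
        yq = np.round(y / q) * q          # fixed-point product
        return yq - y
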
@InProceedings{7760569,
  author = {J. Zhao and Y. Wang and J. Chen and F. Feng},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A new area-efficient FIR filter design algorithm by dynamic programming},
  year = {2016},
  pages = {1853-1856},
  abstract = {Finite impulse response (FIR) digital filter is a commonly adopted signal processing unit in digital signal processing, due to its stability and easy implementation for linear phase response. To reduce the area complexity and power consumption of the filters, researchers have proposed algorithms to optimize the expression of coefficients with the reduced number of non-zero digits or power-of-two terms. This paper presents a new optimization algorithm based on elegant dynamic programming approach to minimize the number of non-zero digits in coefficient set, which yields to low logic operator cost in the filter circuit implementation. The proposed algorithm utilized the knapsack method from dynamic programming to effectively remove the redundant nonzero digits in the coefficients. Experiment results on benchmark filters show that the proposed algorithm can synthesize FIR filter coefficient with the reduced area complexity. Compared with two competing methods, the proposed algorithm can design FIR filters with an average of 14.49% to 47.73% reduced full adder cost over the competing methods.},
  keywords = {dynamic programming;FIR filters;area-efficient FIR filter design algorithm;finite impulse response digital filter;signal processing unit;digital signal processing;area complexity reduction;power consumption reduction;nonzero digits;power-of-two terms;optimization algorithm;dynamic programming approach;low logic operator cost;filter circuit implementation;knapsack method;linear phase response;Finite impulse response filters;Filtering algorithms;Algorithm design and analysis;Signal processing algorithms;Adders;Dynamic programming;Complexity theory;finite impulse response (FIR) filter;dynamic programming;digital signal processing},
  doi = {10.1109/EUSIPCO.2016.7760569},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252174.pdf},
}
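
Annotation: the "non-zero power-of-two digits" notion underlying the cost metric is the canonical signed-digit (CSD) representation. A minimal per-coefficient recoding sketch; the paper optimizes over the whole coefficient set jointly via dynamic programming, which this greedy per-number recoding does not do:

    def csd_digits(value):
        """Canonical signed-digit recoding of a positive integer.
        Returns digits in {-1, 0, +1}, LSB first, with no two adjacent
        non-zeros; fewer non-zero digits means fewer adders in a
        multiplierless FIR coefficient implementation."""
        digits = []
        x = value
        while x != 0:
            if x % 2:
                d = 2 - (x % 4)           # +1 if bits end ...01, -1 if ...11
                x -= d
            else:
                d = 0
            digits.append(d)
            x //= 2
        return digits

    # Example: csd_digits(27) -> [-1, 0, -1, 0, 0, 1], i.e. 32 - 4 - 1,
    # three non-zero digits versus four in plain binary 11011.
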
\n
\n\n\n
\n The finite impulse response (FIR) digital filter is a commonly adopted unit in digital signal processing, owing to its stability and the ease with which a linear phase response can be implemented. To reduce the area complexity and power consumption of such filters, researchers have proposed algorithms that optimize the expression of the coefficients using a reduced number of non-zero digits or power-of-two terms. This paper presents a new optimization algorithm, based on a dynamic programming approach, that minimizes the number of non-zero digits in the coefficient set, yielding a low logic-operator cost in the filter circuit implementation. The proposed algorithm uses the knapsack method from dynamic programming to effectively remove redundant non-zero digits from the coefficients. Experimental results on benchmark filters show that the proposed algorithm can synthesize FIR filter coefficients with reduced area complexity: compared with two competing methods, it designs FIR filters with an average of 14.49% to 47.73% lower full-adder cost.\n
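The cost driver here is the number of non-zero signed digits per coefficient, each of which costs roughly one adder in a multiplierless implementation. The paper's knapsack-style dynamic program over the whole coefficient set is not reproduced below; as a reference point, this sketch (assuming integer-quantized coefficients) recodes a value into canonical signed-digit (CSD) form, the representation in which such non-zero-digit counts are usually measured.

```python
def to_csd(value: int, bits: int) -> list[int]:
    """Canonical signed-digit recoding: digits in {-1, 0, +1}, LSB first,
    with no two adjacent non-zero digits (minimal non-zero digit count)."""
    digits, v = [], value
    for _ in range(bits + 1):
        if v & 1:
            d = 2 - (v & 3)        # +1 if v % 4 == 1, -1 if v % 4 == 3
            v -= d
        else:
            d = 0
        digits.append(d)
        v >>= 1
    return digits

def nonzero_digits(c: int, bits: int = 12) -> int:
    return sum(d != 0 for d in to_csd(abs(c), bits))

# Each non-zero digit costs roughly one adder, so the counts below track the
# full-adder cost that non-zero-digit minimization algorithms reduce.
coeffs = [7, 93, 181, 255]             # illustrative quantized filter taps
print("CSD digits:   ", [nonzero_digits(c) for c in coeffs])
print("binary digits:", [bin(c).count("1") for c in coeffs])
```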
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rapid digital architecture design of orthogonal matching pursuit.\n \n \n \n \n\n\n \n Knoop, B.; Rust, J.; Schmale, S.; Peters-Drolshagen, D.; and Paul, S.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1857-1861, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"RapidPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760570,\n  author = {B. Knoop and J. Rust and S. Schmale and D. Peters-Drolshagen and S. Paul},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Rapid digital architecture design of orthogonal matching pursuit},\n  year = {2016},\n  pages = {1857-1861},\n  abstract = {Orthogonal Matching Pursuit (OMP) is a greedy algorithm well-known for its applications to Compressed Sensing. For this work it serves as a toy problem of a rapid digital design flow based on high-level synthesis (HLS). HLS facilitates extensive design space exploration in connection with a data type-agnostic programming methodology. Nonetheless, some algorithmic transformations are needed to obtain optimised digital architectures. OMP contains a least squares orthogonalisation step, yet its iterative selection strategy makes rank-1 updating possible. We furthermore propose to compute complex mathematical operations, e.g. the needed reciprocal square root operation, with the help of the logarithmic number system to optimise HLS results. Our results can compete with prior works in terms of latency and resource utilisation. Additionally and to the best of our knowledge, we can report on the first complex-valued digital architecture of OMP, which is able to recover a vector of length 128 with 5 non-zero elements in 17.1 μs.},\n  keywords = {compressed sensing;greedy algorithms;high level synthesis;iterative methods;least squares approximations;time-frequency analysis;vectors;digital architecture design;orthogonal matching pursuit;OMP;greedy algorithm;compressed sensing;rapid digital design flow;high-level synthesis;HLS;design space exploration;data type-agnostic programming methodology;algorithmic transformations;least squares orthogonalisation step;iterative selection strategy;reciprocal square root operation;logarithmic number system;resource utilisation;time 17.1 mus;Matching pursuit algorithms;Signal processing algorithms;Computer architecture;Hardware;Algorithm design and analysis;Indexes;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760570},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256202.pdf},\n}\n\n
\n
\n\n\n
\n Orthogonal Matching Pursuit (OMP) is a greedy algorithm well known for its applications to Compressed Sensing. In this work it serves as a test case for a rapid digital design flow based on high-level synthesis (HLS). HLS facilitates extensive design space exploration in connection with a data type-agnostic programming methodology. Nonetheless, some algorithmic transformations are needed to obtain optimised digital architectures. OMP contains a least squares orthogonalisation step, yet its iterative selection strategy makes rank-1 updating possible. We furthermore propose to compute complex mathematical operations, e.g. the required reciprocal square root, with the help of the logarithmic number system to optimise HLS results. Our results compete with prior works in terms of latency and resource utilisation. Additionally, and to the best of our knowledge, we report the first complex-valued digital architecture of OMP, which is able to recover a vector of length 128 with 5 non-zero elements in 17.1 μs.\n
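For readers unfamiliar with the algorithm being synthesized, a plain NumPy version of OMP is sketched below at the problem size quoted in the abstract (length-128 complex vector, 5 non-zero entries). The hardware design replaces the full least-squares re-solve with rank-1 updates and computes the reciprocal square root in the logarithmic number system; this sketch uses an ordinary lstsq call for clarity, and the measurement matrix is an arbitrary Gaussian choice.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily grow a support set and re-solve
    the least-squares problem on the active columns at each iteration."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        # least squares orthogonalisation step on the active columns
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 64, 5
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2 * m)
x0 = np.zeros(n, dtype=complex)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k) + 1j * rng.normal(size=k)
print("max recovery error:", np.max(np.abs(omp(A, A @ x0, k) - x0)))
```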
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Resource-constrained implementation and optimization of a deep neural network for vehicle classification.\n \n \n \n \n\n\n \n Xie, R.; Huttunen, H.; Lin, S.; Bhattacharyya, S. S.; and Takala, J.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1862-1866, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Resource-constrainedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760571,\n  author = {R. Xie and H. Huttunen and S. Lin and S. S. Bhattacharyya and J. Takala},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Resource-constrained implementation and optimization of a deep neural network for vehicle classification},\n  year = {2016},\n  pages = {1862-1866},\n  abstract = {Deep learning has attracted great research interest in recent years in many signal processing application areas. However, investigation of deep learning implementations in highly resource-constrained contexts has been relatively unexplored due to the large computational requirements involved. In this paper, we investigate the implementation of a deep learning application for vehicle classification on multicore platforms with limited numbers of available processor cores. We apply model-based design methods based on signal processing oriented dataflow models of computation, and using the resulting dataflow representations, we apply various design optimizations to derive efficient implementations on three different multicore platforms. Using model-based design techniques throughout the design process, we demonstrate the ability to flexibly experiment with optimizing design transformations, and alternative multicore target platforms to achieve efficient implementations that are tailored to the resource constraints of these platforms.},\n  keywords = {data flow computing;image classification;learning (artificial intelligence);multiprocessing systems;optimisation;resource-constrained implementation;resource-constrained optimization;deep neural network;alternative multicore target platforms;resulting dataflow representations;signal processing oriented dataflow models;model-based design methods;processor cores;computational requirements;deep learning application;vehicle classification;Multicore processing;Vehicles;Convolution;Machine learning;Computational modeling;Kernel;Dataflow;deep learning;model-based design;multicore platforms;signal processing systems},\n  doi = {10.1109/EUSIPCO.2016.7760571},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256408.pdf},\n}\n\n
\n
\n\n\n
\n Deep learning has attracted great research interest in recent years in many signal processing application areas. However, deep learning implementations in highly resource-constrained contexts have remained relatively unexplored due to the large computational requirements involved. In this paper, we investigate the implementation of a deep learning application for vehicle classification on multicore platforms with limited numbers of available processor cores. We apply model-based design methods based on signal-processing-oriented dataflow models of computation and, using the resulting dataflow representations, apply various design optimizations to derive efficient implementations on three different multicore platforms. Using model-based design techniques throughout the design process, we demonstrate the ability to flexibly experiment with optimizing design transformations and alternative multicore target platforms to achieve efficient implementations that are tailored to the resource constraints of these platforms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Penalizing local correlations in the residual improves image denoising performance.\n \n \n \n \n\n\n \n Riot, P.; Almansa, A.; Gousseau, Y.; and Tupin, F.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1867-1871, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"PenalizingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760572,\n  author = {P. Riot and A. Almansa and Y. Gousseau and F. Tupin},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Penalizing local correlations in the residual improves image denoising performance},\n  year = {2016},\n  pages = {1867-1871},\n  abstract = {In this work, we address the problem of denoising an image corrupted by an additive white Gaussian noise. This hypothesis on the noise, despite being very common and justified as the result of a variance normalization step, is hardly used by classical denoising methods. Indeed, very few methods directly constrain the whiteness of the residual (the removed noise). We propose a new variational approach defining generic fidelity terms to locally control the residual distribution using the statistical moments and the correlation on patches. Using different regularizations such as TV or a nonlocal regularization, our approach achieves better performances than the L2 fidelity, with better texture and contrast preservation.},\n  keywords = {AWGN;image denoising;image denoising performance;local correlations;additive white Gaussian noise;statistical moments;L2 fidelity;Correlation;Noise reduction;Cost function;Estimation;Noise measurement;TV;Europe;Image denoising;White noise;Cost function;Probability distribution},\n  doi = {10.1109/EUSIPCO.2016.7760572},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252205.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we address the problem of denoising an image corrupted by additive white Gaussian noise. This hypothesis on the noise, although very common and justified as the result of a variance-normalization step, is hardly exploited by classical denoising methods: very few methods directly constrain the whiteness of the residual (the removed noise). We propose a new variational approach defining generic fidelity terms that locally control the residual distribution through its statistical moments and its correlation on patches. Using different regularizations, such as TV or a nonlocal regularization, our approach achieves better performance than the L2 fidelity term, with improved texture and contrast preservation.\n
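The core idea admits a compact illustration: measure how far the residual is from being white by summing its squared normalised autocorrelations at small shifts. The sketch below is a global variant (the paper evaluates the statistic locally on patches and embeds it as a fidelity term in a variational solver); it shows the statistic separating a white residual from a spatially correlated one.

```python
import numpy as np

def correlation_penalty(residual, max_shift=3):
    """Sum of squared normalised autocorrelations at small spatial shifts;
    close to zero when the residual is white, larger when it is correlated."""
    r = residual - residual.mean()
    norm = np.sum(r * r)
    penalty = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            if (dy, dx) != (0, 0):
                penalty += (np.sum(r * np.roll(r, (dy, dx), (0, 1))) / norm) ** 2
    return penalty

rng = np.random.default_rng(0)
white = rng.normal(size=(64, 64))
correlated = (white + np.roll(white, 1, 0) + np.roll(white, 1, 1)) / np.sqrt(3)
print("white:     ", correlation_penalty(white))
print("correlated:", correlation_penalty(correlated))
```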
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-image super-resolution for fisheye video sequences using subpixel motion estimation based on calibrated re-projection.\n \n \n \n \n\n\n \n Bätz, M.; Eichenseer, A.; and Kaup, A.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1872-1876, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-imagePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760573,\n  author = {M. Bätz and A. Eichenseer and A. Kaup},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-image super-resolution for fisheye video sequences using subpixel motion estimation based on calibrated re-projection},\n  year = {2016},\n  pages = {1872-1876},\n  abstract = {Super-resolution techniques are a means for reconstructing a higher spatial resolution from low resolution content, which is especially important for automotive or surveillance systems. Furthermore, being able to capture a large area with a single camera can be realized by using ultra-wide angle lenses, as employed in so-called fisheye cameras. However, the underlying non-perspective projection function of fisheye cameras introduces significant radial distortion, which is not considered by conventional super-resolution techniques. In this paper, we therefore propose the integration of a fisheye-adapted motion estimation approach that is based on a calibrated re-projection into a multi-image super-resolution framework. The proposed method is capable of taking the fisheye characteristics into account, thus improving the reconstruction quality. Simulation results show an average gain in luminance PSNR of up to 0.3 dB for upscaling factors of 2 and 4. Visual examples substantiate the objective results.},\n  keywords = {image reconstruction;image sequences;motion estimation;video cameras;video surveillance;upscaling factors;luminance PSNR;reconstruction quality improvement;fisheye-adapted motion estimation;radial distortion;fisheye cameras;ultra-wide angle lenses;surveillance systems;automotive systems;spatial resolution;calibrated reprojection;subpixel motion estimation;fisheye video sequences;multiimage super-resolution;Cameras;Signal resolution;Motion estimation;Spatial resolution;Interpolation;Image reconstruction},\n  doi = {10.1109/EUSIPCO.2016.7760573},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252209.pdf},\n}\n\n
\n
\n\n\n
\n Super-resolution techniques are a means of reconstructing higher spatial resolution from low-resolution content, which is especially important for automotive or surveillance systems. Capturing a large area with a single camera can be realized using ultra-wide-angle lenses, as employed in so-called fisheye cameras. However, the underlying non-perspective projection function of fisheye cameras introduces significant radial distortion, which is not considered by conventional super-resolution techniques. In this paper, we therefore propose integrating a fisheye-adapted motion estimation approach based on a calibrated re-projection into a multi-image super-resolution framework. The proposed method takes the fisheye characteristics into account, thus improving the reconstruction quality. Simulation results show an average gain in luminance PSNR of up to 0.3 dB for upscaling factors of 2 and 4. Visual examples substantiate the objective results.\n
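The key ingredient is the re-projection between the fisheye image plane, where pixel motion is distorted, and a perspective plane, where block-based motion estimation behaves as usual. The sketch below assumes the ideal equidistant fisheye model r = f·θ (a calibrated lens would use a fitted polynomial instead) and maps pixel offsets from the distortion centre back and forth.

```python
import numpy as np

def fisheye_to_perspective(u, v, f):
    """Map offsets (u, v) from the distortion centre of an equidistant
    fisheye image (r = f * theta) onto a perspective plane (r = f * tan(theta))."""
    r = np.hypot(u, v)
    theta = r / f
    scale = np.where(r > 0, f * np.tan(theta) / np.maximum(r, 1e-12), 1.0)
    return u * scale, v * scale

def perspective_to_fisheye(x, y, f):
    """Inverse mapping, used to carry estimated motion back to fisheye pixels."""
    r = np.hypot(x, y)
    scale = np.where(r > 0, f * np.arctan(r / f) / np.maximum(r, 1e-12), 1.0)
    return x * scale, y * scale

u = v = np.linspace(-200.0, 200.0, 5)
x, y = fisheye_to_perspective(u, v, f=300.0)
u2, v2 = perspective_to_fisheye(x, y, f=300.0)
print("round-trip error:", np.max(np.abs(u2 - u)))   # should be ~1e-13
```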
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic selection of stochastic watershed hierarchies.\n \n \n \n \n\n\n \n Fehri, A.; Velasco-Forero, S.; and Meyer, F.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1877-1881, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760574,\n  author = {A. Fehri and S. Velasco-Forero and F. Meyer},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic selection of stochastic watershed hierarchies},\n  year = {2016},\n  pages = {1877-1881},\n  abstract = {The segmentation, seen as the association of a partition with an image, is a difficult task. It can be decomposed in two steps: at first, a family of contours associated with a series of nested partitions (or hierarchy) is created and organized, then pertinent contours are extracted. A coarser partition is obtained by merging adjacent regions of a finer partition. The strength of a contour is then measured by the level of the hierarchy for which its two adjacent regions merge. We present an automatic segmentation strategy using a wide range of stochastic watershed hierarchies. For a given set of homogeneous images, our approach selects automatically the best hierarchy and cut level to perform image simplification given an evaluation score. Experimental results illustrate the advantages of our approach on several real-life images datasets.},\n  keywords = {feature extraction;image segmentation;stochastic processes;stochastic watershed hierarchies;automatic segmentation;homogeneous images;image simplification;Image segmentation;Image edge detection;Vegetation;Cost accounting;Training;Joining processes;Europe;Mathematical Morphology;Hierarchies;Segmentation;Stochastic Watershed},\n  doi = {10.1109/EUSIPCO.2016.7760574},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252225.pdf},\n}\n\n
\n
\n\n\n
\n Segmentation, seen as the association of a partition with an image, is a difficult task. It can be decomposed into two steps: first, a family of contours associated with a series of nested partitions (a hierarchy) is created and organized; then, pertinent contours are extracted. A coarser partition is obtained by merging adjacent regions of a finer partition, and the strength of a contour is measured by the level of the hierarchy at which its two adjacent regions merge. We present an automatic segmentation strategy using a wide range of stochastic watershed hierarchies. For a given set of homogeneous images, our approach automatically selects the best hierarchy and cut level to perform image simplification with respect to an evaluation score. Experimental results illustrate the advantages of our approach on several real-life image datasets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Lipreading using spatiotemporal histogram of oriented gradients.\n \n \n \n \n\n\n \n Paleček, K.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1882-1885, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"LipreadingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760575,\n  author = {K. Paleček},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Lipreading using spatiotemporal histogram of oriented gradients},\n  year = {2016},\n  pages = {1882-1885},\n  abstract = {We propose a visual speech parametrization based on histogram of oriented gradients (HOG) for the task of lipreading from frontal face videos. Inspired by the success of spatiotemporal local binary patterns, the features are designed to capture dynamic information contained in the input video sequence by combining HOG descriptors extracted from three orthogonal planes that span x, y and t axes. We integrate our features into a system based on hidden Markov model (HMM) and show that by utilizing robust and properly tuned parametrization this traditional scheme can outperform recent sophisticated embedding approaches to lipreading. We perform experiments on three different datasets, two of which are publicly available. In order to conduct an unbiased feature comparison, the process of model learning including hyperparameter tuning is as automatized as possible. To this end, we rely heavily on cross validation.},\n  keywords = {hidden Markov models;speech synthesis;lipreading;spatiotemporal histogram;histogram of oriented gradients;HOG descriptors;hidden Markov model;HMM;parametrization;Feature extraction;Hidden Markov models;Videos;Histograms;Speech;Principal component analysis;Spatiotemporal phenomena},\n  doi = {10.1109/EUSIPCO.2016.7760575},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252277.pdf},\n}\n\n
\n
\n\n\n
\n We propose a visual speech parametrization based on the histogram of oriented gradients (HOG) for the task of lipreading from frontal face videos. Inspired by the success of spatiotemporal local binary patterns, the features are designed to capture the dynamic information contained in the input video sequence by combining HOG descriptors extracted from three orthogonal planes that span the x, y and t axes. We integrate our features into a system based on a hidden Markov model (HMM) and show that, with a robust and properly tuned parametrization, this traditional scheme can outperform recent sophisticated embedding approaches to lipreading. We perform experiments on three different datasets, two of which are publicly available. In order to conduct an unbiased feature comparison, the process of model learning, including hyperparameter tuning, is automated as far as possible. To this end, we rely heavily on cross-validation.\n
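The three-orthogonal-planes construction is easy to prototype with an off-the-shelf HOG. The sketch below takes the central xy, xt and yt slices of a (T, H, W) mouth-region volume and concatenates their HOG descriptors; the paper's descriptor presumably pools over more slices, and all HOG parameters here are illustrative.

```python
import numpy as np
from skimage.feature import hog

def hog_top(volume, **hog_kwargs):
    """HOG on Three Orthogonal Planes of a (T, H, W) video volume: central
    xy (spatial), xt and yt (temporal) slices, concatenated."""
    T, H, W = volume.shape
    planes = (volume[T // 2],          # xy plane
              volume[:, H // 2, :],    # xt plane
              volume[:, :, W // 2])    # yt plane
    return np.concatenate([hog(p, **hog_kwargs) for p in planes])

mouth_roi = np.random.rand(16, 32, 48)     # stand-in for a lip-region sequence
feature = hog_top(mouth_roi, orientations=9,
                  pixels_per_cell=(8, 8), cells_per_block=(2, 2))
print(feature.shape)
```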
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Deep action classification via matrix completion.\n \n \n \n \n\n\n \n Bomma, S.; and Robertson, N. M.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1886-1890, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760576,\n  author = {S. Bomma and N. M. Robertson},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep action classification via matrix completion},\n  year = {2016},\n  pages = {1886-1890},\n  abstract = {Matrix completion is the task of predicting unknown or missing entries in a data matrix. The estimation of the missing entries is based on the assumption that the underlying matrix is a low rank one. Deep learning has evolved as an efficient tool for feature extraction in many large-scale image based applications. Exploiting the techniques from both domains, we propose a novel solution to the problem of simultaneous classification of actions from multiple test videos with deep features using matrix completion methods. Learned features from a convolutional neural network and corresponding labels from data are concatenated to form a big matrix with unknown or missing entries in the place of test data labels. Convex rank minimization algorithms are used to complete this matrix. The proposed method achieves stable performance even in situations with more than 50% of features and labels missing.},\n  keywords = {convex programming;feature extraction;image classification;learning (artificial intelligence);matrix algebra;minimisation;neural nets;video signal processing;deep action classification;matrix completion;unknown entries;convex rank minimization algorithms;test data labels;convolutional neural network;test videos;simultaneous action classification;large-scale image based applications;feature extraction;deep learning;data matrix;missing entries;Convolution;Training;Videos;Feature extraction;Machine learning;Neural networks;Minimization},\n  doi = {10.1109/EUSIPCO.2016.7760576},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252285.pdf},\n}\n\n
\n
\n\n\n
\n Matrix completion is the task of predicting unknown or missing entries in a data matrix, based on the assumption that the underlying matrix is low-rank. Deep learning has evolved into an efficient tool for feature extraction in many large-scale image-based applications. Exploiting techniques from both domains, we propose a novel solution to the problem of simultaneously classifying actions from multiple test videos with deep features using matrix completion methods. Learned features from a convolutional neural network and the corresponding labels are concatenated to form a large matrix with unknown or missing entries in place of the test-data labels. Convex rank minimization algorithms are used to complete this matrix. The proposed method achieves stable performance even when more than 50% of the features and labels are missing.\n
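The transductive setup is straightforward to emulate. In the sketch below, features and one-hot labels are stacked side by side, the label block of the test rows is marked missing, and the matrix is completed by iterative singular-value soft-thresholding, a simple stand-in for the convex rank-minimisation solvers the paper relies on; the data sizes and threshold are arbitrary.

```python
import numpy as np

def soft_impute(M, mask, lam, iters=200):
    """Matrix completion by iterative singular-value soft-thresholding."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
        X = np.where(mask, M, X)          # observed entries stay fixed
    return X

# Low-rank stand-in for CNN features plus one-hot labels, stacked side by side.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 5))                     # latent low-rank factors
feats = Z @ rng.normal(size=(5, 60))
labels = np.eye(3)[np.argmax(Z @ rng.normal(size=(5, 3)), axis=1)]
M = np.hstack([feats, labels])
mask = np.ones(M.shape, dtype=bool)
mask[150:, 60:] = False                           # test rows: labels unknown

pred = np.argmax(soft_impute(M, mask, lam=0.5)[150:, 60:], axis=1)
print("label recovery accuracy:", np.mean(pred == np.argmax(labels[150:], axis=1)))
```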
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Performance evaluation of high dynamic range image tone mapping operators based on separable non-linear multiresolution families.\n \n \n \n \n\n\n \n Thai, B. C.; Mokraoui, A.; and Matei, B.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1891-1895, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"PerformancePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760577,\n  author = {B. C. Thai and A. Mokraoui and B. Matei},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Performance evaluation of high dynamic range image tone mapping operators based on separable non-linear multiresolution families},\n  year = {2016},\n  pages = {1891-1895},\n  abstract = {This paper addresses the conversion problem of High Dynamic Range (HDR) images into Low Dynamic Range (LDR) images. In this objective, separable non-linear multiresolution approaches are exploited as Image Tone Mapping Operators (TMOs). They are related on: (i) Essentially Non-Oscillatory (ENO) interpolation strategy developed by Harten namely Point-Value (PV) multiresolution family and Cell-Average (CA) multiresolution family; and (ii) Power-P multiresolution family introduced by Amat. These approaches have the advantage to take into account the singularities, such as edge points of the image, in the mathematical model thus preserving the structural information of the HDR images. Moreover the Gibbs phenomenon, harmful in tone mapped images, is avoided. The quality assessment of the tone mapped images is measured according to the TMQI metric. Simulation results show that the proposed TMOs provide good results compared to traditional TMO strategies.},\n  keywords = {image resolution;interpolation;high dynamic range image tone mapping operator performance evaluation;separable nonlinear multiresolution families;HDR image conversion problem;low dynamic range image;LDR image;image TMO;essentially nonoscillatory interpolation strategy;ENO interpolation strategy;point-value multiresolution family;cell-average multiresolution family;Power-P multiresolution family;mathematical model;Gibbs phenomenon;TMQI metric;Image resolution;Signal resolution;Interpolation;Dynamic range;Image edge detection;Mathematical model;High dynamic range;Tone mapping;Essentially non-oscillatory interpolation;Non-linear multiresolution;Point-value and cell-average multiresolution;Power-P multiresolution},\n  doi = {10.1109/EUSIPCO.2016.7760577},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252305.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of converting High Dynamic Range (HDR) images into Low Dynamic Range (LDR) images. To this end, separable non-linear multiresolution approaches are exploited as Image Tone Mapping Operators (TMOs). They are based on: (i) the Essentially Non-Oscillatory (ENO) interpolation strategy developed by Harten, namely the Point-Value (PV) and Cell-Average (CA) multiresolution families; and (ii) the Power-P multiresolution family introduced by Amat. These approaches have the advantage of taking singularities, such as edge points of the image, into account in the mathematical model, thus preserving the structural information of the HDR images. Moreover, the Gibbs phenomenon, which is harmful in tone-mapped images, is avoided. The quality of the tone-mapped images is assessed with the TMQI metric. Simulation results show that the proposed TMOs compare favourably with traditional TMO strategies.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An efficient image retrieval method under dithered-based quantization scheme.\n \n \n \n \n\n\n \n Chaker, A.; Kaaniche, M.; Benazza-Benyahia, A.; and Antonini, M.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1896-1900, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760578,\n  author = {A. Chaker and M. Kaaniche and A. Benazza-Benyahia and M. Antonini},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {An efficient image retrieval method under dithered-based quantization scheme},\n  year = {2016},\n  pages = {1896-1900},\n  abstract = {Recent research works have shown the impact of the quantization techniques on the performance of standard image retrieval systems when datasets are compressed in a lossy mode. In this work, we propose to design an efficient retrieval method well adapted to wavelet-based compressed images. Our objective is to recover features of the original image (herein the moments of the unquantized subbands) directly from the quantized coefficients. To this end, we propose to apply a dithered quantization technique satisfying some specific conditions. Then, the estimated moments of the wavelet subbands are used in an appropriate way to construct the feature vectors of the database images. Experimental results show the interest of the proposed image retrieval method compared to the state-of-the-art ones.},\n  keywords = {data compression;feature extraction;image coding;image retrieval;quantisation (signal);image retrieval method;dataset compression;lossy mode;image feature recovery;dithered quantization technique;wavelet subband moment estimation;Quantization (signal);Image coding;Bit rate;Feature extraction;Image retrieval;Standards;Content based image retrieval;compressed images;dithering quantization;feature extraction},\n  doi = {10.1109/EUSIPCO.2016.7760578},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252324.pdf},\n}\n\n
\n
\n\n\n
\n Recent research has shown the impact of quantization techniques on the performance of standard image retrieval systems when datasets are compressed in a lossy mode. In this work, we propose an efficient retrieval method well adapted to wavelet-based compressed images. Our objective is to recover features of the original image (here, the moments of the unquantized subbands) directly from the quantized coefficients. To this end, we apply a dithered quantization technique satisfying certain specific conditions. The estimated moments of the wavelet subbands are then used to construct the feature vectors of the database images. Experimental results demonstrate the advantage of the proposed image retrieval method over state-of-the-art ones.\n
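The enabling fact is classical: a subtractive dither, uniform over one quantization step, makes the total quantization error uniform, zero-mean and independent of the signal, so raw moments of the unquantized data can be recovered from the quantized data in closed form. A minimal sketch, with a Laplacian stand-in for wavelet coefficients and an arbitrary step size:

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.25                                   # quantization step
x = rng.laplace(scale=0.5, size=200_000)       # stand-in for a wavelet subband

# Subtractively dithered uniform quantizer: the error xq - x is uniform on
# [-delta/2, delta/2] and independent of x.
d = rng.uniform(-delta / 2, delta / 2, size=x.shape)
xq = delta * np.round((x + d) / delta) - d

m2_naive = np.mean(xq ** 2)                    # biased by the error power
m2_corrected = m2_naive - delta ** 2 / 12      # remove the known error power
print(np.mean(x ** 2), m2_naive, m2_corrected)
```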
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The CNN news footage datasets: Enabling supervision in image retrieval.\n \n \n \n \n\n\n \n Bilen, Ç.; Zepeda, J.; and Pérez, P.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1901-1905, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760579,\n  author = {Ç. Bilen and J. Zepeda and P. Pérez},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {The CNN news footage datasets: Enabling supervision in image retrieval},\n  year = {2016},\n  pages = {1901-1905},\n  abstract = {Image retrieval in large image databases is an important problem that drives a number of applications. Yet the use of supervised approaches that address this problem has been limited due to the lack of large labeled datasets for training. Hence, in this paper we introduce two new datasets composed of images extracted from publicly available videos from the Cable News Network (CNN). The proposed datasets are particularly suited to supervised learning for image retrieval and are larger than any other existing dataset of a similar nature. The datasets are further provided with a set of pre-computed, state-of-the-art image feature vectors, as well as baseline results. In order to facilitate research in this important topic, we also detail a generic, supervised learning formulation for image retrieval and a related stochastic solver.},\n  keywords = {image retrieval;information resources;learning (artificial intelligence);stochastic processes;vectors;video databases;video signal processing;image retrieval;image databases;image extraction;cable news network;supervised learning;image feature vectors;stochastic solver;CNN news footage datasets;Image retrieval;Training;Manganese;Videos;Feature extraction;Supervised learning;Measurement},\n  doi = {10.1109/EUSIPCO.2016.7760579},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252101.pdf},\n}\n\n
\n
\n\n\n
\n Image retrieval in large image databases is an important problem that drives a number of applications. Yet the use of supervised approaches that address this problem has been limited due to the lack of large labeled datasets for training. Hence, in this paper we introduce two new datasets composed of images extracted from publicly available videos from the Cable News Network (CNN). The proposed datasets are particularly suited to supervised learning for image retrieval and are larger than any other existing dataset of a similar nature. The datasets are further provided with a set of pre-computed, state-of-the-art image feature vectors, as well as baseline results. In order to facilitate research in this important topic, we also detail a generic, supervised learning formulation for image retrieval and a related stochastic solver.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Evaluation of graph metrics for optimizing bin-based ontologically smoothed language models.\n \n \n \n \n\n\n \n Benahmed, Y.; Selouani, S.; and O'Shaughnessy, D.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1906-1910, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"EvaluationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760580,\n  author = {Y. Benahmed and S. Selouani and D. O'Shaughnessy},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Evaluation of graph metrics for optimizing bin-based ontologically smoothed language models},\n  year = {2016},\n  pages = {1906-1910},\n  abstract = {This paper investigates the use of graph metrics to further enhance the performance of a language model smoothing algorithm. Bin-Based Ontological Smoothing has been successfully used to improve language model performance in automatic speech recognition tasks. It uses ontologies to estimate novel utterances for a given language model. Since ontologies can be represented as graphs, we investigate the use of graph metrics as an additional smoothing factor in order to capture additional semantic or relational information found in ontologies. More specifically, we investigate the effect of HITS, PageRank, Modularity, and weighted degree, on performance. The entire power set of bins is evaluated. Our results show that the interpolation of the original bins at distances 1, 3 and 5 resulted in an improvement in WER of 0.71% relative over the interpolation of bins 1 to 5. Furthermore, modularity, PageRank and HITS show promise for further study.},\n  keywords = {graph theory;interpolation;ontologies (artificial intelligence);smoothing methods;speech recognition;bin-based ontologically smoothed language model optimization;Graph metric evaluation;automatic speech recognition task;utterance estimation;smoothing factor;HITS effect;PageRank;original bin interpolation;Smoothing methods;Ontologies;Measurement;Signal processing algorithms;Predictive models;Algorithm design and analysis;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760580},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570248671.pdf},\n}\n\n
\n
\n\n\n
\n This paper investigates the use of graph metrics to further enhance the performance of a language model smoothing algorithm. Bin-Based Ontological Smoothing has been successfully used to improve language model performance in automatic speech recognition tasks. It uses ontologies to estimate novel utterances for a given language model. Since ontologies can be represented as graphs, we investigate the use of graph metrics as an additional smoothing factor in order to capture additional semantic or relational information found in ontologies. More specifically, we investigate the effect of HITS, PageRank, Modularity, and weighted degree, on performance. The entire power set of bins is evaluated. Our results show that the interpolation of the original bins at distances 1, 3 and 5 resulted in an improvement in WER of 0.71% relative over the interpolation of bins 1 to 5. Furthermore, modularity, PageRank and HITS show promise for further study.\n
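The graph metrics named above are all standard and available off the shelf. The sketch below computes them with networkx on a toy hyponym-hypernym fragment; the paper operates on a full ontology, and how a metric is folded into the smoothing weights is not reproduced here.

```python
import networkx as nx
from networkx.algorithms import community

# Toy ontology fragment: directed edges point from hyponym to hypernym.
G = nx.DiGraph([("poodle", "dog"), ("beagle", "dog"), ("dog", "animal"),
                ("siamese", "cat"), ("cat", "animal"), ("animal", "entity")])

pagerank = nx.pagerank(G)
hubs, authorities = nx.hits(G)

U = G.to_undirected()
modularity = community.modularity(U, community.greedy_modularity_communities(U))

for node in G:
    print(f"{node:8s} pagerank={pagerank[node]:.3f} "
          f"authority={authorities[node]:.3f} degree={G.degree(node)}")
print("graph modularity:", modularity)
```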
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Speaker-aware long short-term memory multi-task learning for speech recognition.\n \n \n \n \n\n\n \n Pironkov, G.; Dupont, S.; and Dutoit, T.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1911-1915, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Speaker-awarePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760581,\n  author = {G. Pironkov and S. Dupont and T. Dutoit},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Speaker-aware long short-term memory multi-task learning for speech recognition},\n  year = {2016},\n  pages = {1911-1915},\n  abstract = {In order to address the commonly met issue of overfitting in speech recognition, this article investigates Multi-Task Learning, when the auxiliary task focuses on speaker classification. Overfitting occurs when the amount of training data is limited, leading to an over-sensible acoustic model. Multi-Task Learning is a method, among many other regularization methods, which decreases the overfitting impact by forcing the acoustic model to train jointly for multiple different, but related, tasks. In this paper, we consider speaker classification as an auxiliary task in order to improve the generalization abilities of the acoustic model, by training the model to recognize the speaker, or find the closest one inside the training set. We investigate this Multi-Task Learning setup on the TIMIT database, while the acoustic modeling is performed using a Recurrent Neural Network with Long Short-Term Memory cells.},\n  keywords = {learning (artificial intelligence);recurrent neural nets;signal classification;speech recognition;speaker-aware long short-term memory multitask learning;speech recognition;speaker classification;over-sensible acoustic model;acoustic model;TIMIT database;recurrent neural network;Training;Acoustics;Speech recognition;Hidden Markov models;Machine learning;Speech;Feature extraction},\n  doi = {10.1109/EUSIPCO.2016.7760581},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252203.pdf},\n}\n\n
\n
\n\n\n
\n In order to address the common issue of overfitting in speech recognition, this article investigates Multi-Task Learning in which the auxiliary task is speaker classification. Overfitting occurs when the amount of training data is limited, leading to an overly sensitive acoustic model. Multi-Task Learning is one regularization method among many that decreases the impact of overfitting by forcing the acoustic model to train jointly on multiple different, but related, tasks. In this paper, we consider speaker classification as an auxiliary task in order to improve the generalization ability of the acoustic model, by training the model to recognize the speaker, or to find the closest one inside the training set. We investigate this Multi-Task Learning setup on the TIMIT database, with acoustic modeling performed by a Recurrent Neural Network with Long Short-Term Memory cells.\n
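Architecturally this is a shared recurrent trunk with two softmax heads. The PyTorch sketch below is our illustration of that setup, not the authors' code: the layer sizes, the senone count, the 462-speaker figure (the size of TIMIT's training set), and the 0.3 auxiliary-loss weight are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerAwareLSTM(nn.Module):
    """Shared LSTM trunk with a main head (HMM-state classification) and an
    auxiliary head (speaker classification). Sizes are illustrative."""
    def __init__(self, n_feats=40, hidden=256, n_states=1944, n_speakers=462):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, num_layers=2, batch_first=True)
        self.state_head = nn.Linear(hidden, n_states)      # main ASR task
        self.speaker_head = nn.Linear(hidden, n_speakers)  # auxiliary task
    def forward(self, x):
        h, _ = self.lstm(x)                                # (B, T, hidden)
        return self.state_head(h), self.speaker_head(h.mean(dim=1))

model = SpeakerAwareLSTM()
x = torch.randn(8, 100, 40)                  # batch of 100-frame sequences
state_logits, speaker_logits = model(x)
states = torch.randint(0, 1944, (8, 100))    # dummy frame-level targets
speakers = torch.randint(0, 462, (8,))       # dummy utterance-level targets
loss = (F.cross_entropy(state_logits.reshape(-1, 1944), states.reshape(-1))
        + 0.3 * F.cross_entropy(speaker_logits, speakers))  # assumed weight
loss.backward()
```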
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Estimation of DOA and phase error using a partly calibrated sensor array with arbitrary geometry.\n \n \n \n \n\n\n \n Zhang, X.; He, Z.; Liao, B.; Qian, J.; and Xie, J.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1916-1920, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"EstimationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760582,\n  author = {X. Zhang and Z. He and B. Liao and J. Qian and J. Xie},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimation of DOA and phase error using a partly calibrated sensor array with arbitrary geometry},\n  year = {2016},\n  pages = {1916-1920},\n  abstract = {A new method is presented to effectively estimate the direction-of-arrival (DOA) of a source signal and the phase errors of a sensor array with arbitrary geometry. Assuming that one sensor (except the reference one) has been calibrated, the proposed method appropriately reconstruct the data matrix and establish a series of linear equations with respect to the unknown parameters through eigenvalue decomposition (EVD). We build a LS problem with a quadratic constraint and solve it by two approaches. Unlike the conventional methods which are limited to specific array geometries, the proposed can be applied to arbitrary arrays. Moreover, it only requires one calibrated sensor, which may not be consecutively spaced to the reference one. The effectiveness of the proposed method is validated by simulation results.},\n  keywords = {direction-of-arrival estimation;eigenvalues and eigenfunctions;least squares approximations;matrix algebra;sensor arrays;direction-of-arrival estimation;DOA estimation;phase error estimation;partly calibrated sensor array;arbitrary geometry;source signal;data matrix;linear equations;unknown parameters;eigenvalue decomposition;LS problem;quadratic constraint;Direction-of-arrival estimation;Estimation;Sensor arrays;Calibration;Phased arrays;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760582},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570254646.pdf},\n}\n\n
\n
\n\n\n
\n A new method is presented to effectively estimate the direction-of-arrival (DOA) of a source signal and the phase errors of a sensor array with arbitrary geometry. Assuming that one sensor (besides the reference one) has been calibrated, the proposed method reconstructs the data matrix and establishes a series of linear equations in the unknown parameters through eigenvalue decomposition (EVD). We formulate an LS problem with a quadratic constraint and solve it by two approaches. Unlike conventional methods, which are limited to specific array geometries, the proposed method can be applied to arbitrary arrays. Moreover, it requires only one calibrated sensor, which need not be adjacent to the reference one. The effectiveness of the proposed method is validated by simulation results.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Decentralized direction-of-arrival estimation for arbitrary array geometries.\n \n \n \n \n\n\n \n Suleiman, W.; Vaheed, A. A.; Pesavento, M.; and Zoubir, A. M.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1921-1925, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"DecentralizedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760583,\n  author = {W. Suleiman and A. A. Vaheed and M. Pesavento and A. M. Zoubir},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Decentralized direction-of-arrival estimation for arbitrary array geometries},\n  year = {2016},\n  pages = {1921-1925},\n  abstract = {Direction-of-arrival (DOA) estimation in partly calibrated array composed of multiple fully calibrated subarrays is considered. The location of the sensors in the subarrays are assumed to be arbitrary, i.e., no specific subarray geometry is assumed. Using array interpolation, we extend the previously proposed decentralized ESPRIT algorithm (d-ESPRIT), originally designed for shift-invariance array geometries, to arbitrary array geometries. In our proposed algorithm, the array interpolation is carried out locally at the subarrays, thus, communication between the subarrays is required for DOA estimation but not for interpolation. Simulation results demonstrate that our proposed algorithm achieves better performance than the conventional ESPRIT algorithm in perturbed shift-invariance arrays.},\n  keywords = {array signal processing;direction-of-arrival estimation;interpolation;decentralized direction-of-arrival estimation;DOA estimation;multiple-fully-calibrated subarrays;sensor location;array interpolation;decentralized ESPRIT algorithm;d-ESPRIT;shift-invariance array geometry;arbitrary array geometry;perturbed shift-invariance arrays;Sensor arrays;Direction-of-arrival estimation;Estimation;Signal processing algorithms;Geometry;Protocols},\n  doi = {10.1109/EUSIPCO.2016.7760583},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255741.pdf},\n}\n\n
\n
\n\n\n
\n Direction-of-arrival (DOA) estimation in a partly calibrated array composed of multiple fully calibrated subarrays is considered. The locations of the sensors in the subarrays are assumed to be arbitrary, i.e., no specific subarray geometry is assumed. Using array interpolation, we extend the previously proposed decentralized ESPRIT algorithm (d-ESPRIT), originally designed for shift-invariant array geometries, to arbitrary array geometries. In our algorithm, the array interpolation is carried out locally at the subarrays; thus, communication between the subarrays is required for DOA estimation but not for interpolation. Simulation results demonstrate that our algorithm achieves better performance than the conventional ESPRIT algorithm on perturbed shift-invariant arrays.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Off-grid target detection with normalized matched subspace filter.\n \n \n \n \n\n\n \n Rabaste, O.; Bosse, J.; and Ovarlez, J.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1926-1930, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Off-gridPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760584,\n  author = {O. Rabaste and J. Bosse and J. Ovarlez},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Off-grid target detection with normalized matched subspace filter},\n  year = {2016},\n  pages = {1926-1930},\n  abstract = {The problem of off-grid target detection with the normalized matched filter (NMF) detector is considered. We show that this detector is highly sensitive to off-grid targets. In particular its mean asymptotic detection probability may not converge to 1. We then consider two solutions to solve this off-grid problem. The first solution approximates the Generalized Likelihood Ratio Test (GLRT) by oversampling the resolution cell; this solution may be computationally heavy and does not permit to compute a theoretical detection threshold. We then propose a second solution based on the matched subspace detection framework. For Doppler steering vectors, the subspace considered is deduced from Discrete Prolate Spheroidal Sequence vectors. Simulation results permit to demonstrate interesting performance for off-grid targets.},\n  keywords = {Doppler radar;matched filters;object detection;radar detection;radar resolution;signal detection;normalized matched subspace filter;off-grid target detection problem;NMF detector;mean asymptotic detection probability;generalized likelihood ratio test;GLRT;resolution cell oversampling;matched subspace detection framework;Doppler steering vector;discrete prolate spheroidal sequence vector;Detectors;Doppler effect;Testing;Signal resolution;Gaussian noise;Object detection;Image edge detection;Off-grid;normalized matched filter;matched subspace detector;discrete prolate spheroidal sequences},\n  doi = {10.1109/EUSIPCO.2016.7760584},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256252.pdf},\n}\n\n
\n
\n\n\n
\n The problem of off-grid target detection with the normalized matched filter (NMF) detector is considered. We show that this detector is highly sensitive to off-grid targets; in particular, its mean asymptotic detection probability may not converge to 1. We then consider two solutions to this off-grid problem. The first approximates the Generalized Likelihood Ratio Test (GLRT) by oversampling the resolution cell; this solution may be computationally heavy and does not permit computing a theoretical detection threshold. We therefore propose a second solution based on the matched subspace detection framework. For Doppler steering vectors, the subspace considered is derived from Discrete Prolate Spheroidal Sequence vectors. Simulation results demonstrate good performance for off-grid targets.\n
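The second solution lends itself to a compact demonstration. The sketch below builds a low-rank Slepian (DPSS) basis covering the Doppler steering vectors inside one resolution cell and evaluates the normalised matched subspace statistic (energy in the subspace over total energy); the sequence length, time-bandwidth product and subspace rank are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.signal.windows import dpss

def nmsf_statistic(y, H):
    """Normalised matched subspace filter: fraction of the energy of y that
    lies in span(H); invariant to an unknown noise power scaling."""
    Q, _ = np.linalg.qr(H)
    proj = Q.conj().T @ y
    return float(np.real(proj.conj() @ proj) / np.real(y.conj() @ y))

N = 64
H = dpss(N, NW=2, Kmax=4).T              # N x 4 DPSS basis around zero Doppler

rng = np.random.default_rng(0)
f_offgrid = 0.37 / N                     # Doppler between two grid frequencies
target = 3.0 * np.exp(2j * np.pi * f_offgrid * np.arange(N))
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
print("target present:", nmsf_statistic(target + noise, H))
print("noise only:   ", nmsf_statistic(noise, H))
```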
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Direction-of-arrival estimation and Cramer-Rao bound for multi-carrier MIMO radar.\n \n \n \n \n\n\n \n Ulrich, M.; Rambach, K.; and Yang, B.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1931-1935, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Direction-of-arrivalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760585,\n  author = {M. Ulrich and K. Rambach and B. Yang},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Direction-of-arrival estimation and Cramer-Rao bound for multi-carrier MIMO radar},\n  year = {2016},\n  pages = {1931-1935},\n  abstract = {Multi-carrier (MC) multiple-input multiple-output (MIMO) radar was recently applied to build sparse virtual arrays with a large aperture for a high-accuracy direction-of-arrival (DOA) estimation. The resulting grating lobes (DOA ambiguities) were resolved using multiple carriers. One problem of MC-MIMO is the coupling of the unknown parameters range and DOA. In this contribution, we study this range-DOA coupling for MC-MIMO systems. We consider both Cramer-Rao bound (CRB) of these parameters and their estimation. We show that a suitable choice of the coordinate system decouples range and DOA parameters in both CRB and estimation. This enables a sequential range and DOA estimation instead of a more complex joint estimation. Explanations of this phenomenon are given and simulation results confirm the theoretical findings.},\n  keywords = {direction-of-arrival estimation;MIMO radar;direction-of-arrival estimation;Cramer-Rao bound;multicarrier MIMO radar;multiple-input multiple-output radar;sparse virtual arrays;range-DOA coupling;MC-MIMO systems;grating lobes;CRB;DOA parameters;sequential range;Direction-of-arrival estimation;Estimation;Radar;Couplings;Radar antennas;MIMO},\n  doi = {10.1109/EUSIPCO.2016.7760585},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256305.pdf},\n}\n\n
\n
\n\n\n
\n Multi-carrier (MC) multiple-input multiple-output (MIMO) radar was recently applied to build sparse virtual arrays with a large aperture for high-accuracy direction-of-arrival (DOA) estimation, with the resulting grating lobes (DOA ambiguities) resolved using multiple carriers. One problem of MC-MIMO is the coupling of the unknown range and DOA parameters. In this contribution, we study this range-DOA coupling for MC-MIMO systems, considering both the Cramér-Rao bound (CRB) of these parameters and their estimation. We show that a suitable choice of the coordinate system decouples the range and DOA parameters in both the CRB and the estimation. This enables sequential range and DOA estimation instead of a more complex joint estimation. Explanations of this phenomenon are given, and simulation results confirm the theoretical findings.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Radar target recognition via 2-D sparse linear prediction in missing data case.\n \n \n \n \n\n\n \n Ozen, B.; and Erer, I.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1936-1940, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"RadarPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760586,\n  author = {B. Ozen and I. Erer},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Radar target recognition via 2-D sparse linear prediction in missing data case},\n  year = {2016},\n  pages = {1936-1940},\n  abstract = {Classical linear prediction methods based on least-square estimation yields radar images with high side lobes and many spurious scattering centers while singular value decomposition (SVD) truncation used to address these issues decreases the dynamic range of the image. So, radar images provided by these methods are not appropriate for classification purposes. In this work, sparsity constraints are induced on the prediction coefficients. The classification results demonstrate that the proposed sparse linear prediction methods give better accuracy rates compared to Multiple Signal Classification (MUSIC) method conventionally used for limited bandwidth-observation angle data. Classification performances of proposed methods are also investigated in case of the missing backscattered data. It is shown that the proposed methods are not affected from the missing data unlike the MUSIC method whose performance decreases with the increase in the percentage of the missing data.},\n  keywords = {least squares approximations;object recognition;radar target recognition;singular value decomposition;MUSIC;multiple signal classification;singular value decomposition;radar images;least-square estimation;classical linear prediction;missing data case;2-D sparse linear prediction;radar target recognition;Scattering;Radar imaging;Narrowband;Multiple signal classification;Prediction methods;Prediction algorithms;radar imaging;sparse linear prediction;BPDN;LASSO;BPDN with regularization term;missing data;classification},\n  doi = {10.1109/EUSIPCO.2016.7760586},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256308.pdf},\n}\n\n
\n
\n\n\n
\n Classical linear prediction methods based on least-squares estimation yield radar images with high side lobes and many spurious scattering centers, while the singular value decomposition (SVD) truncation used to address these issues decreases the dynamic range of the image. Radar images provided by these methods are therefore not appropriate for classification purposes. In this work, sparsity constraints are imposed on the prediction coefficients. The classification results demonstrate that the proposed sparse linear prediction methods give better accuracy rates than the Multiple Signal Classification (MUSIC) method conventionally used for limited bandwidth-observation angle data. The classification performance of the proposed methods is also investigated in the case of missing backscattered data. It is shown that the proposed methods, unlike the MUSIC method, whose performance decreases as the percentage of missing data increases, are not affected by the missing data.\n
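The switch from a least-squares fit to a sparsity-penalized one is the essential move; a 1-D sketch (the paper works in 2-D) with an l1 penalty via scikit-learn's Lasso makes it concrete. The signal parameters and penalty weight are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, p = 100, 30                                   # samples, prediction order
n = np.arange(N)
y = (np.cos(2 * np.pi * 0.11 * n) + 0.7 * np.cos(2 * np.pi * 0.23 * n)
     + 0.05 * rng.normal(size=N))                # two scatterer-like tones

# Linear prediction system y[n] = sum_k a[k] * y[n - k - 1] + e[n].
Y = np.column_stack([y[p - k - 1: N - k - 1] for k in range(p)])
target = y[p:]

a_ls = np.linalg.lstsq(Y, target, rcond=None)[0]           # dense LS solution
a_l1 = Lasso(alpha=0.01, max_iter=50_000).fit(Y, target).coef_
print("non-zero coefficients  LS:", int(np.sum(np.abs(a_ls) > 1e-3)),
      " sparse LP:", int(np.sum(np.abs(a_l1) > 1e-3)))
```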
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the impact of signals time-frequency sparsity on the localization performance.\n \n \n \n \n\n\n \n Boudjellal, A.; Nguyen, V. D.; Abed-Meraim, K.; Belouchrani, A.; and Ravier, P.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1941-1945, Aug 2016. \n \n\n\n\n
@InProceedings{7760587,
  author = {A. Boudjellal and V. D. Nguyen and K. Abed-Meraim and A. Belouchrani and P. Ravier},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {On the impact of signals time-frequency sparsity on the localization performance},
  year = {2016},
  pages = {1941-1945},
  abstract = {In this paper, we investigate the localization performance of far-field sources that have sparse time-frequency (T-F) representations. The Cramér-Rao Bound (CRB) under the sparsity assumption is developed, and the impact of the T-F sparsity prior on the localization performance is analyzed. In particular, we study how the different T-F sparsity properties, i.e., the local SNR level, the spreading of the source supports, and source overlapping and orthogonality, affect the CRB of the Direction-of-Arrival (DoA) estimation. The obtained results show that the sources' T-F orthogonality has the most significant impact on the localization performance. Simulation results are provided to illustrate the concluding remarks drawn from this study.},
  keywords = {direction-of-arrival estimation;time-frequency analysis;signal time-frequency sparsity;localization performance;T-F orthogonality;DoA estimation;direction-of-arrival estimation;source overlapping;local SNR level;T-F sparsity properties;sparsity assumption;Cramer-Rao bound;sparse T-F representation;sparse time-frequency representation;far field sources;Time-frequency analysis;Direction-of-arrival estimation;MONOS devices;Signal to noise ratio;Estimation;Europe;CRB;Time-Frequency;Sparsity;DoA},
  doi = {10.1109/EUSIPCO.2016.7760587},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256672.pdf},
}

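For reference, the bound analyzed in the entry above is an instance of the generic Cramér-Rao inequality; the paper's sparsity-aware derivation is not reproduced here. For any unbiased estimator \hat{\boldsymbol\theta} of the parameter vector \boldsymbol\theta (here, the DoAs):

\operatorname{cov}(\hat{\boldsymbol\theta}) \succeq \mathbf{F}^{-1}(\boldsymbol\theta),
\qquad
[\mathbf{F}(\boldsymbol\theta)]_{ij}
  = \mathbb{E}\!\left[
      \frac{\partial \ln p(\mathbf{x};\boldsymbol\theta)}{\partial \theta_i}\,
      \frac{\partial \ln p(\mathbf{x};\boldsymbol\theta)}{\partial \theta_j}
    \right].

The sparsity prior enters through the likelihood p(\mathbf{x};\boldsymbol\theta), which is where the T-F support and orthogonality assumptions modify the Fisher information.
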
Rhythm transcription of MIDI performances based on hierarchical Bayesian modelling of repetition and modification of musical note patterns.
Nakamura, E.; Itoyama, K.; and Yoshii, K.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1946-1950, Aug 2016.
@InProceedings{7760588,
  author = {E. Nakamura and K. Itoyama and K. Yoshii},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Rhythm transcription of MIDI performances based on hierarchical Bayesian modelling of repetition and modification of musical note patterns},
  year = {2016},
  pages = {1946-1950},
  abstract = {This paper presents a method of rhythm transcription (i.e., automatic recognition of note values in music performance signals) based on a Bayesian music language model that describes the repetitive structure of musical notes. Conventionally, music language models for music transcription are trained with a dataset of musical pieces. Because typical musical pieces have repetitions consisting of a limited number of note patterns, better models fitting individual pieces could be obtained by inducing compact grammars. The main challenges are inducing an appropriate grammar for a score that is observed indirectly through a performance and capturing incomplete repetitions, which can be represented as repetitions with modifications. We propose a hierarchical Bayesian model in which the generation of a language model is described with a Dirichlet process and the production of musical notes is described with a hierarchical hidden Markov model (HMM) that incorporates the process of modifying note patterns. We derive an efficient algorithm based on Gibbs sampling for simultaneously inferring from a performance signal the score and the individual language model behind it. Evaluations showed that the proposed model outperformed previously studied HMM-based models.},
  keywords = {audio signal processing;belief networks;hidden Markov models;music;musical instruments;natural language processing;sampling methods;rhythm transcription;MIDI performances;hierarchical Bayesian modelling;musical note patterns;note values recognition;music performance signals;Bayesian music language model;music language models;music transcription;compact grammar inducing;Dirichlet process;hidden Markov model;HMM;Gibbs sampling;Hidden Markov models;Music;Grammar;Markov processes;Bayes methods;Bars;Multiple signal classification},
  doi = {10.1109/EUSIPCO.2016.7760588},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570243591.pdf},
}

Multi-class learning algorithm for deep neural network-based statistical parametric speech synthesis.
Song, E.; and Kang, H.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1951-1955, Aug 2016.
@InProceedings{7760589,
  author = {E. Song and H. Kang},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Multi-class learning algorithm for deep neural network-based statistical parametric speech synthesis},
  year = {2016},
  pages = {1951-1955},
  abstract = {This paper proposes a multi-class learning (MCL) algorithm for a deep neural network (DNN)-based statistical parametric speech synthesis (SPSS) system. Although the DNN-based SPSS system improves the modeling accuracy of statistical parameters, its synthesized speech is often muffled because the training process only considers the global characteristics of the entire set of training data, but does not explicitly consider any local variations. We introduce a DNN-based context clustering algorithm that implicitly divides the training data into several classes, and train them via a shared hidden layer-based MCL algorithm. Since the proposed MCL method efficiently models both the universal and class-dependent characteristics of various phonetic information, it not only avoids the model over-fitting problem but also reduces the over-smoothing effect. Objective and subjective test results also verify that the proposed algorithm performs much better than the conventional method.},
  keywords = {learning (artificial intelligence);neural nets;speech synthesis;multiclass learning algorithm;deep neural network-based statistical parametric speech synthesis;MCL algorithm;DNN-based SPSS system;statistical parameters;training process;training data global characteristics;DNN-based context clustering algorithm;shared hidden layer-based MCL algorithm;class-dependent characteristics;universal characteristics;phonetic information;model over-fitting problem;over-smoothing effect;Signal processing algorithms;Training;Hidden Markov models;Speech;Acoustics;Context;Clustering algorithms;Statistical parametric speech synthesis;deep neural network;context clustering;shared hidden layer},
  doi = {10.1109/EUSIPCO.2016.7760589},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570245860.pdf},
}

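The shared-hidden-layer idea in the entry above can be sketched generically: a common trunk models the universal characteristics while per-class output heads capture class-dependent ones. Below is a minimal PyTorch sketch; the layer sizes, names, and the routing by a class index are invented for illustration and are not taken from the paper.

# Generic shared-hidden-layer / multi-head network (hypothetical sizes;
# the paper's acoustic feature dimensions and clustering are not shown).
import torch
import torch.nn as nn

class SharedHiddenMultiHead(nn.Module):
    def __init__(self, in_dim=300, hidden=512, out_dim=60, n_classes=4):
        super().__init__()
        # Trunk shared by all data classes (universal characteristics).
        self.shared = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One output head per class (class-dependent characteristics).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, out_dim) for _ in range(n_classes)]
        )

    def forward(self, x, class_id):
        return self.heads[class_id](self.shared(x))

net = SharedHiddenMultiHead()
y = net(torch.randn(8, 300), class_id=2)  # a batch routed to head 2
print(y.shape)  # torch.Size([8, 60])
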
Single-channel noise reduction in the STFT domain from the fullband output SNR perspective.
Zhao, Y.; Benesty, J.; and Chen, J.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1956-1959, Aug 2016.
@InProceedings{7760590,
  author = {Y. Zhao and J. Benesty and J. Chen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Single-channel noise reduction in the STFT domain from the fullband output SNR perspective},
  year = {2016},
  pages = {1956-1959},
  abstract = {This paper develops a single-channel noise reduction algorithm in the short-time Fourier transform (STFT) domain, which attempts to optimize the fullband output signal-to-noise ratio (SNR). We show that the conventional Wiener filter, the maximum SNR filter, and the ideal binary mask based method are particular cases of the developed algorithm. Simulations are presented to illustrate the properties of this algorithm.},
  keywords = {Fourier transforms;Wiener filters;maximum SNR filter;Wiener filter;ideal binary mask-based method;short-time Fourier transform domain;STFT domain;single-channel noise reduction algorithm;fullband output signal-to-noise ratio;fullband output SNR;Signal to noise ratio;Optical noise;Noise reduction;Speech;Signal processing algorithms;Europe;Noise reduction;speech enhancement;STFT domain;optimal gains;fullband output SNR;ideal binary mask},
  doi = {10.1109/EUSIPCO.2016.7760590},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250305.pdf},
}

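The conventional Wiener filter mentioned in the entry above is easy to sketch in the STFT domain. A minimal Python version follows, assuming SciPy and a noise PSD that is known (here, estimated from the noise alone); it illustrates the classical per-bin gain, not the paper's fullband-output-SNR-optimal gains.

# STFT-domain noise reduction with the classical Wiener-type gain
# 1 - 1/gamma (gamma: a posteriori SNR); illustrative sketch only.
import numpy as np
from scipy.signal import stft, istft

def wiener_denoise(noisy, noise_psd, fs=16000, nperseg=512, floor=0.05):
    f, t, Y = stft(noisy, fs=fs, nperseg=nperseg)
    gamma = (np.abs(Y) ** 2) / (noise_psd[:, None] + 1e-12)
    gain = np.maximum(1.0 - 1.0 / np.maximum(gamma, 1e-12), floor)
    _, x_hat = istft(gain * Y, fs=fs, nperseg=nperseg)
    return x_hat

fs = 16000
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
noise = 0.3 * rng.standard_normal(fs)
_, _, N = stft(noise, fs=fs, nperseg=512)          # noise-only PSD estimate
enhanced = wiener_denoise(clean + noise, np.mean(np.abs(N) ** 2, axis=1), fs)

The gain 1 - 1/gamma equals the Wiener gain xi/(1 + xi) under the maximum-likelihood a priori SNR estimate xi = gamma - 1; the floor limits musical noise.
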
Multiple layer model for object detection and sketch representation.
Li, W.; Wu, X.; Cai, L.; Hu, F.; and Zhao, Y.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1960-1964, Aug 2016.
@InProceedings{7760591,
  author = {W. Li and X. Wu and L. Cai and F. Hu and Y. Zhao},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Multiple layer model for object detection and sketch representation},
  year = {2016},
  pages = {1960-1964},
  abstract = {In this paper we propose a multiple layer model for object detection and sketch representation. Unlike most traditional detection models, which focus on object localization, we investigate both object detection and sketch representation within a unified framework. Based on the multiple layer architecture, our model can provide the sketch information of the detected object. Meanwhile, we generalize it from a single-scale structure to multiple scales, which efficiently saves the time consumed in image pyramid construction. To efficiently train the classifier at the top layer, we employ the stochastic gradient descent algorithm to minimize the training error and back-propagate it to the bottom layer. The experimental results demonstrate that our model outperforms the conventional active basis model.},
  keywords = {backpropagation;gradient methods;image classification;image representation;minimisation;object detection;stochastic programming;sketch representation;object detection;multiple layer model;object localization;single scale structure;image pyramid construction;stochastic gradient descent algorithm;training error minimization;backpropagation;Training;Signal processing algorithms;Convolution;Object detection;Feature extraction;Support vector machines;Europe;Object Detection;stochastic gradient descent;Multiple layer model;Sketch Representation},
  doi = {10.1109/EUSIPCO.2016.7760591},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252150.pdf},
}

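The top-layer training mentioned above is plain stochastic gradient descent; for completeness, the update on the classifier weights \mathbf{w} with learning rate \eta_t and one randomly drawn sample (\mathbf{x}_{i_t}, y_{i_t}) per step is

\mathbf{w}_{t+1} = \mathbf{w}_t - \eta_t \,\nabla_{\mathbf{w}}\, \ell\big(f(\mathbf{x}_{i_t}; \mathbf{w}_t),\, y_{i_t}\big),

with the resulting error gradient back-propagated to the lower layers.
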
Efficient eye corner and gaze detection for sclera recognition under relaxed imaging constraints.
Alkassar, S.; Woo, W. L.; Dlay, S. S.; and Chambers, J. A.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1965-1969, Aug 2016.
@InProceedings{7760592,
  author = {S. Alkassar and W. L. Woo and S. S. Dlay and J. A. Chambers},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Efficient eye corner and gaze detection for sclera recognition under relaxed imaging constraints},
  year = {2016},
  pages = {1965-1969},
  abstract = {Sclera recognition has provoked research interest recently due to the distinctive properties of its blood vessels. However, segmenting noisy sclera areas in eye images under relaxed imaging constraints, such as different gaze directions and capturing on-the-move and at-a-distance, has not been extensively investigated. In our previous work, we proposed a novel method for sclera segmentation under unconstrained imaging conditions, with the drawback that the eye gaze direction is manually labeled for each image. Therefore, we propose a robust method for automatic eye corner and gaze detection. The proposed method involves two levels of eye corner verification to minimize eye corner point misclassification when noisy eye images are introduced. Moreover, gaze direction estimation is achieved through the pixel properties of the sclera area. Experimental results in on-the-move and at-a-distance contexts with multiple eye gaze directions using the UBIRIS.v2 database show a significant improvement in terms of accuracy and gaze detection rates.},
  keywords = {biomedical optical imaging;blood vessels;eye;gaze tracking;image classification;image denoising;image segmentation;medical image processing;efficient eye corner;sclera recognition;blood vessels;noisy sclera area segmentation;relaxed imaging constraints;sclera segmentation;unconstrained image condition;automatic eye corner;eye corner verification;eye corner point misclassification;noisy eye images;gaze direction estimation;pixel properties;on-the-move context;at-a-distance context;multiple eye gaze direction;UBIRIS.v2 database;gaze detection rate;Eyelids;Skin;Estimation;Iris;Europe;Signal processing;Databases},
  doi = {10.1109/EUSIPCO.2016.7760592},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252346.pdf},
}

Error robust low delay audio coding using spherical logarithmic quantization.
Preihs, S.; Lamprecht, T.; and Ostermann, J.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1970-1974, Aug 2016.
@InProceedings{7760593,
  author = {S. Preihs and T. Lamprecht and J. Ostermann},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Error robust low delay audio coding using spherical logarithmic quantization},
  year = {2016},
  pages = {1970-1974},
  abstract = {This paper reveals the potential gain in audio quality that can be achieved by combining Spherical Logarithmic Quantization (SLQ) with advanced broadband error robust low delay audio coding based on ADPCM. We briefly summarize the basic properties and mechanisms of SLQ and the employed ADPCM scheme and show how they can be combined in a freely parameterizable coding algorithm. The resulting codec includes techniques for error robustness and a shaping of the coding noise. We present results of optimizing the codec parameters in our framework for global optimization based on psychoacoustic measures. Our evaluation shows that by using SLQ instead of scalar quantization a PEAQ ODG-score improvement with a maximum of about 1 point and a mean of 0.2 can be achieved. An analysis of the bit-error behavior of the combined SLQ-ADPCM shows that a major improvement in bit-error performance results from our proposed efficient error detection and processing.},
  keywords = {audio coding;optimisation;quantisation (signal);error robust low delay audio coding;spherical logarithmic quantization;audio quality;ADPCM;freely parameterizable coding algorithm;error robustness;coding noise;codec parameters;global optimization;psychoacoustic measures;scalar quantization;PEAQ ODG-score improvement;bit-error behavior;bit-error performance;error detection;error processing;Quantization (signal);Delays;Robustness;Codecs;Audio coding;Indexes;Low delay audio coding;Spherical Logarithmic Quantization;ADPCM;global optimization;PEAQ},
  doi = {10.1109/EUSIPCO.2016.7760593},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250421.pdf},
}

Angle of arrival estimation in dynamic indoor THz channels with Bayesian filter and reinforcement learning.
Peng, B.; Jiao, Q.; and Kürner, T.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1975-1979, Aug 2016.
@InProceedings{7760594,
  author = {B. Peng and Q. Jiao and T. Kürner},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Angle of arrival estimation in dynamic indoor THz channels with Bayesian filter and reinforcement learning},
  year = {2016},
  pages = {1975-1979},
  abstract = {This paper presents a novel algorithm to estimate the Angle of Arrival (AoA) in a dynamic indoor Terahertz channel. In a realistic application, the user equipment is often moved by the user during the data transmission and the AoA must be estimated periodically, such that the adaptive directional antenna can be adjusted to realize a high antenna gain. The Bayesian filter is applied to exploit continuity and smoothness of the channel dynamics for the AoA estimation. Reinforcement learning is introduced to adapt the prior transition probabilities between system states, in order to fit the variation of application scenarios and personal habits. The algorithm is validated using the ray launching channel simulator and realistic human movement models.},
  keywords = {belief networks;directive antennas;filtering theory;learning (artificial intelligence);probability;adaptive directional antenna;ray launching channel simulator;prior transition probabilities;AoA estimation;high antenna gain;reinforcement learning;Bayesian filter;dynamic indoor THz channels;angle of arrival estimation;Estimation;Learning (artificial intelligence);Bayes methods;Directive antennas;Gain;Azimuth;Terahertz communication;angle of arrival estimation;dynamic channel;Bayesian filter;reinforcement learning},
  doi = {10.1109/EUSIPCO.2016.7760594},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250715.pdf},
}

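The Bayesian filtering step in the entry above can be sketched as a forward recursion over discretized AoA states. The Python sketch below uses placeholder transition and likelihood models and omits the paper's reinforcement-learning adaptation of the transition probabilities; all values are invented for illustration.

# Grid-based Bayesian filter over discretized AoA states (illustrative).
import numpy as np

n_states = 72                                  # 5-degree AoA bins
angles = np.linspace(-180, 180, n_states, endpoint=False)

# Prior transition matrix: the AoA tends to stay put or drift one bin.
T = np.full((n_states, n_states), 1e-4)
for i in range(n_states):
    T[i, i] = 0.9
    T[i, (i - 1) % n_states] = T[i, (i + 1) % n_states] = 0.05
T /= T.sum(axis=1, keepdims=True)

def bayes_step(belief, likelihood):
    """One predict + update cycle of the Bayesian filter."""
    predicted = T.T @ belief                   # propagate through dynamics
    posterior = predicted * likelihood         # measurement update
    return posterior / posterior.sum()

belief = np.full(n_states, 1.0 / n_states)     # uninformative prior
true_bin = 30
idx = np.arange(n_states)
dist = np.minimum((idx - true_bin) % n_states, (true_bin - idx) % n_states)
lik = np.exp(-0.5 * dist ** 2 / 4.0) + 1e-6    # fake beam-scan likelihood
belief = bayes_step(belief, lik)
print(angles[np.argmax(belief)])

In the paper's setting, reinforcement learning would adjust T online as the user's movement statistics become apparent; here T is fixed.
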
3D vs. 2D channel capacity of outdoor to indoor scenarios derived from measurements in China and New Zealand.
Yu, Y.; Zhang, J.; and Shafi, M.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1980-1984, Aug 2016.
@InProceedings{7760595,
  author = {Y. Yu and J. Zhang and M. Shafi},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {3D vs. 2D channel capacity of outdoor to indoor scenarios derived from measurements in China and New Zealand},
  year = {2016},
  pages = {1980-1984},
  abstract = {Three-dimensional (3D) Multiple Input Multiple Output (MIMO) is considered one of the key technologies for 5th generation mobile communication systems. However, can the capacity derived from a 3D channel impulse response (CIR) better approximate the corresponding capacity derived from the measured CIR? Is there a substantial improvement in the capacity of a 3D MIMO system in comparison to the classical 2D case? To answer these questions, extensive field measurements were conducted for outdoor to indoor scenarios in China and New Zealand. Key channel parameters are extracted from the measured channel impulse response via the space-alternating generalized expectation-maximization algorithm, followed by the reconstruction of the 32 × 56 CIR according to the 2D and 3D channel models. The CIRs are then used to evaluate the channel capacity, and comparative simulation results demonstrate that the 3D MIMO capacity closely approximates the capacity predicted with practical CIRs. Moreover, obvious enhancements in the capacity of the 3D channel are observed in comparison to the 2D case. In addition, the distribution of eigenvalues and the number of contributing eigenvalues are also investigated.},
  keywords = {5G mobile communication;channel capacity;eigenvalues and eigenfunctions;expectation-maximisation algorithm;indoor radio;MIMO communication;wireless channels;3D channel capacity;2D channel capacity;outdoor scenario;indoor scenario;China;New Zealand;three dimensional multiple input multiple output system;5th generation mobile communication system;3D channel impulse response;3D MIMO system capacity;key channel parameter extraction;space-alternating generalized expectation-maximization algorithm;CIR reconstruction;eigenvalue distribution;Three-dimensional displays;Antenna measurements;Two dimensional displays;Antenna arrays;MIMO;Receiving antennas;Transmitting antennas},
  doi = {10.1109/EUSIPCO.2016.7760595},
  issn = {2076-1465},
  month = {Aug},
}

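The capacity evaluations in the entry above follow the standard MIMO formula. For a reconstructed channel matrix \mathbf{H} \in \mathbb{C}^{N_r \times N_t}, SNR \rho, and equal power allocation across the N_t transmit antennas (no CSI at the transmitter),

C = \log_2 \det\!\left( \mathbf{I}_{N_r} + \frac{\rho}{N_t}\, \mathbf{H}\mathbf{H}^{\mathsf{H}} \right) \ \text{bit/s/Hz},

so the eigenvalue spread of \mathbf{H}\mathbf{H}^{\mathsf{H}} (also examined by the authors) directly determines how many spatial streams contribute.
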
Full dimension MIMO for frequency division duplex under signaling and feedback constraints.
Kurras, M.; Thiele, L.; Haustein, T.; Lei, W.; and Yan, C.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1985-1989, Aug 2016.
@InProceedings{7760596,
  author = {M. Kurras and L. Thiele and T. Haustein and W. Lei and C. Yan},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Full dimension MIMO for frequency division duplex under signaling and feedback constraints},
  year = {2016},
  pages = {1985-1989},
  abstract = {How to fully utilize the benefits of massive MIMO, originally designed for Time Division Duplex (TDD), also in Frequency Division Duplex (FDD) is an open research problem of great interest. For efficient multi-user downlink transmission, Channel State Information (CSI) has to be obtained at the receiver side and fed back to the transmitter. Due to the signaling/pilot overhead scaling with the number of antennas and the feedback of the CSI, the beamforming gain and spatial multiplexing gains of massive MIMO are limited. In this paper we present a novel scheme called User Non-aware Precoding and Effective Channel Estimation (UNP \& ECE) for full dimension MIMO in FDD under signaling and feedback constraints. The idea is to construct a large codebook and distribute subsets of it on time-frequency resources as precoded pilots. The same precoders are also used for downlink transmission, thus no additional demodulation reference signals are required. This limits the signaling overhead, and a high quantization of the channel can be utilized. To limit the feedback, we show that it is sufficient to report only the strongest stream index of a precoder subset for only a subset of Time Frequency (TF) resources. By sending just the index of the best stream, the devices do not require knowledge of the applied precoders. This enables optimization of the codebook construction, the splitting into subsets, and the distribution on the TF resources at the base station without awareness at the users.},
  keywords = {antenna arrays;array signal processing;channel estimation;feedback;MIMO communication;multiuser channels;optimisation;precoding;quantisation (signal);telecommunication signalling;time-frequency analysis;wireless channels;full dimension MIMO;frequency division duplex;feedback constraint;signaling constraint;TDD;FDD;multiuser downlink transmission channel state information;signaling-pilot overhead scaling;CSI feedback;beamforming gain;massive MIMO spatial multiplexing gain;user nonaware precoding;UNP;ECE;effective channel estimation;time-frequency resource;signaling overhead;channel quantization;TF resource;codebook construction optimization;MIMO;Interference;Precoding;Time-frequency analysis;Downlink;Channel estimation;Discrete Fourier transforms},
  doi = {10.1109/EUSIPCO.2016.7760596},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255095.pdf},
}

Robust 3D MIMO-OFDM channel estimation with hybrid analog-digital architecture.
Destino, G.; Saloranta, J.; Juntti, M.; and Nagaraj, S.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1990-1994, Aug 2016.
@InProceedings{7760597,
  author = {G. Destino and J. Saloranta and M. Juntti and S. Nagaraj},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Robust 3D MIMO-OFDM channel estimation with hybrid analog-digital architecture},
  year = {2016},
  pages = {1990-1994},
  abstract = {We consider the problem of 3D multiple-input-multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) channel estimation, a key for the future development of 3D beamforming techniques. Our main contribution is a novel algorithm, namely the adaptive-LASSO, that can jointly exploit the sparsity structure of the MIMO-OFDM channel in the spatial and delay domains. The algorithm is designed to handle large antenna arrays by means of a hybrid analog-digital architecture. In this regard, we describe an effective beam-switching strategy to sample the channel using a few analog beamformers. We investigate the impact of the signal bandwidth, antenna structures, line-of-sight (LOS) and non-line-of-sight (NLOS) conditions via ray-tracing based simulations. Also, we show that the A-LASSO can provide significant improvements with respect to the legacy methods, e.g. least-square technique.},
  keywords = {antenna arrays;array signal processing;channel estimation;MIMO communication;OFDM modulation;ray tracing;A-LASSO algorithm;ray-tracing based simulations;NLOS conditions;nonline-of-sight conditions;LOS conditions;line-of-sight conditions;antenna structures;signal bandwidth impact;analog beamformers;beam-switching strategy;large antenna arrays;delay domains;spatial domains;sparsity structure;adaptive-LASSO algorithm;3D beamforming;orthogonal frequency-division multiplexing channel estimation;3D multiple-input multiple-output channel estimation;hybrid analog-digital architecture;robust 3D MIMO-OFDM channel estimation;Channel estimation;OFDM;Dictionaries;Three-dimensional displays;Receiving antennas;Analog-digital conversion},
  doi = {10.1109/EUSIPCO.2016.7760597},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256138.pdf},
}

Evaluating the spatial resolution of 2D antenna arrays for massive MIMO transmissions.
Ademaj, F.; Taranetz, M.; and Rupp, M.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1995-1999, Aug 2016.
@InProceedings{7760598,
  author = {F. Ademaj and M. Taranetz and M. Rupp},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Evaluating the spatial resolution of 2D antenna arrays for massive MIMO transmissions},
  year = {2016},
  pages = {1995-1999},
  abstract = {Massive MIMO has been identified as one of the key technologies for the 5th generation of mobile cellular networks. By utilizing 2D antenna arrays with a large number of antenna elements, it enables the formation of orthogonal beams towards spatially separated User Equipments (UEs). In this paper we evaluate the channel energy and the average throughput of typical indoor and outdoor UEs at various heights and distances to the macro site. Our goal is to demonstrate the achievable spatial resolution of beamforming in the vertical direction with large antenna arrays. Existing work on directional beamforming strategies is commonly based on simplistic signal propagation assumptions under LOS conditions. This paper considers a realistic 3D channel model that also accounts for multi-path propagation under NLOS conditions. Our results exhibit the dependency of the achievable spatial resolution on both the size of the antenna array and the channel conditions. They show that, depending on whether the UE is located indoors or outdoors, the channel has an opposing impact on the achievable spatial resolution.},
  keywords = {5G mobile communication;antenna arrays;array signal processing;cellular radio;MIMO communication;multipath propagation;directional beamforming strategies;average throughput;channel energy;user equipments;mobile cellular networks;5th generation;massive MIMO transmissions;large 2D antenna arrays;spatial resolution;Receiving antennas;Antenna measurements;Linear antenna arrays;Spatial resolution;Two dimensional displays;3GPP;3D beamforming;spatial resolution;3GPP 3D channel model;antenna array;elevation;azimuth;massive MIMO;vertical sectorization},
  doi = {10.1109/EUSIPCO.2016.7760598},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256341.pdf},
}

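For context on the vertical resolution discussed above: beam steering with a uniform linear array of N elements, inter-element spacing d, and wavelength \lambda uses the steering vector

\mathbf{a}(\theta) = \left[\, 1,\ e^{-j 2\pi \frac{d}{\lambda}\sin\theta},\ \ldots,\ e^{-j 2\pi (N-1)\frac{d}{\lambda}\sin\theta} \,\right]^{\mathsf{T}},

and the main-lobe width narrows roughly in proportion to 1/N, which is why larger arrays can separate UEs at closer elevation angles. (The paper's 2D arrays extend this to azimuth and elevation jointly.)
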
Two-layer precoding for dimensionality reduction in massive MIMO.
Arvola, A.; Tölli, A.; and Gesbert, D.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2000-2004, Aug 2016.
@InProceedings{7760599,
  author = {A. Arvola and A. Tölli and D. Gesbert},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Two-layer precoding for dimensionality reduction in massive MIMO},
  year = {2016},
  pages = {2000-2004},
  abstract = {Massive MIMO (multiple-input multiple-output) is a promising technology for the upcoming 5G as it provides significant beamforming gains and interference reduction capabilities due to the large number of antennas. However, massive MIMO is computationally demanding, as the high antenna count results in high-dimensional matrix operations when conventional MIMO processing is applied. In this paper, we focus on two-stage digital beamforming, where the beamformer is split into a slow-varying statistics-based outer beamformer and an inner beamformer accounting for fast channel variations. We formulate two two-stage precoding optimization problems: weighted sum-rate maximization and minimum user rate maximization for a single-cell downlink system. We also provide different heuristic methods of forming the outer precoder matrix via the user channel covariance, while the inner precoder is obtained as the solution of the optimization problem. Unlike most previous work, which considers an outer precoder design based on energy maximization and user group location, our aim is to design it to offer a tradeoff between energy maximization and interference reduction, while also taking into account fairness between users. We evaluate the performance of the different heuristic methods as a function of the number of statistical pre-beams and a fixed user angular spread to see the overall effect of complexity reduction on the system sum-rate and minimum user rate. We also evaluate the advantages of the different methods in terms of user fairness.},
  keywords = {5G mobile communication;antenna arrays;array signal processing;covariance matrices;interference suppression;MIMO communication;optimisation;precoding;radiofrequency interference;statistical analysis;wireless channels;dimensionality reduction;massive MIMO network;multiple input multiple output system;5G system;interference reduction capability;antenna;high-dimensional matrix operation;two-stage digital beamforming;channel variation;two-stage precoding optimization problem;weighted sum-rate maximization;minimum user rate maximization;single-cell downlink system;heuristic method;user channel covariance;optimization problem;energy maximization;user group location;statistical prebeam;complexity reduction effect;MIMO;Interference;Covariance matrices;Array signal processing;Precoding;Antennas;Signal to noise ratio},
  doi = {10.1109/EUSIPCO.2016.7760599},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256528.pdf},
}

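With hypothetical notation, the two-stage split described above can be written compactly: an outer beamformer \mathbf{F} built from slowly varying channel statistics (e.g., dominant eigenvectors of the user channel covariance) and an inner precoder \mathbf{B} adapted to fast CSI,

\mathbf{x} = \mathbf{F}\,\mathbf{B}\,\mathbf{s},
\qquad
\mathbf{F} \in \mathbb{C}^{N \times b},\ \ \mathbf{B} \in \mathbb{C}^{b \times K},\ \ b \ll N,

for N antennas, b statistical pre-beams, and K users. The inner stage then operates on the reduced b-dimensional effective channel \mathbf{H}\mathbf{F}, which is the source of the complexity reduction.
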
Impact of noisy annotators' reliability in a crowdsourcing system performance.
Cabrera-Bean, M.; Díaz-Vilor, C.; and Vidal, J.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2005-2009, Aug 2016.
@InProceedings{7760600,
  author = {M. Cabrera-Bean and C. Díaz-Vilor and J. Vidal},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Impact of noisy annotators' reliability in a crowdsourcing system performance},
  year = {2016},
  pages = {2005-2009},
  abstract = {Crowdsourcing is a powerful tool to harness citizen assessments in complex decision tasks. When multiple annotators provide their individual labels, a more reliable collective decision is obtained if the individual reliability parameters are incorporated in the decision-making procedure. The well-known Maximum A Posteriori (MAP) rule weights the individual labels in proportion to the annotators' reliability. In this work we analyze how the crowdsourcing system performance is degraded by the use of noisy annotators' reliability parameters, and we derive an alternative MAP-based rule to be applied when these parameters are neither known nor even estimated by the decision system. We also derive analytical expected error rates and their upper bounds obtained by each rule, as a useful tool to estimate the number of necessary annotators in the collective decision system depending on the level of noise present in the estimated reliability parameters.},
  keywords = {decision making;maximum likelihood estimation;object detection;reliability;noisy annotator reliability impact;crowdsourcing system performance;harness citizen assessment;decision making procedure;maximum a posteriori rule;MAP based rule;decision system;analytical expected error rate;Reliability;Crowdsourcing;Upper bound;Error analysis;Noise measurement;Diseases;Europe;Crowdsource;Expected error rate bound;Specificity;Sensitivity},
  doi = {10.1109/EUSIPCO.2016.7760600},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255853.pdf},
}

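For the binary, symmetric-annotator special case, the MAP weighting referenced above takes the familiar log-odds form. Assuming annotator m is correct with known probability p_m, labels y_m \in \{-1,+1\}, and equal class priors (a simplification; the paper treats sensitivity and specificity separately and studies noisy estimates of them),

\hat{y} = \operatorname{sign}\!\left( \sum_{m=1}^{M} w_m\, y_m \right),
\qquad
w_m = \ln \frac{p_m}{1 - p_m},

so an annotator with p_m = 0.5 gets zero weight, and unreliable estimates of p_m directly distort the vote, which is the degradation the paper quantifies.
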
Effects of matrix completion on the classification of undersampled human activity data streams.
Savvaki, S.; Tsagkatakis, G.; Panousopoulou, A.; and Tsakalides, P.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2010-2014, Aug 2016.
@InProceedings{7760601,
  author = {S. Savvaki and G. Tsagkatakis and A. Panousopoulou and P. Tsakalides},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Effects of matrix completion on the classification of undersampled human activity data streams},
  year = {2016},
  pages = {2010-2014},
  abstract = {Classification of activities of daily living is of paramount importance in modern healthcare applications. However, hardware monitoring constraints lead frequently to missing raw values, dramatically affecting the performance of machine learning algorithms. In this work, we study the problem of efficient estimation of missing linear acceleration and angular velocity measurements, experimenting on a public Human Activity Recognition (HAR) dataset. We exploit the data correlation to formulate the problem as an instance of low-rank Matrix Completion (MC) within a general classification framework. We consider the effects of our proposed reconstruction method on the classification accuracy as related to the size of the training and test sets, and the single versus collective recovery. Additionally, we compare the performance of our approach with popular imputation and expectation maximization algorithms for treating missing measurements, in conjunction with several state-of-the-art classifiers. The results highlight that robust and efficient classification is feasible even with a substantially reduced amount of measurements.},
  keywords = {data handling;estimation theory;matrix algebra;pattern classification;matrix completion;MC;human activity recognition;HAR dataset classification;linear acceleration estimation;angular velocity measurement estimation;Training;Feature extraction;Europe;Signal processing;Predictive models;Size measurement},
  doi = {10.1109/EUSIPCO.2016.7760601},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256091.pdf},
}

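Low-rank matrix completion as used above can be sketched with the iterative SVD soft-thresholding ("soft-impute" style) recursion. The NumPy sketch below is illustrative and not necessarily the solver used by the authors; the regularization weight and iteration count are placeholders.

# Low-rank matrix completion via iterative SVD soft-thresholding.
import numpy as np

def soft_impute(M, mask, lam=0.5, n_iter=100):
    """Estimate the entries where mask is False; observed entries are kept."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - lam, 0.0)           # shrink singular values
        low_rank = (U * s) @ Vt
        X = np.where(mask, M, low_rank)        # re-impose observed data
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank 3
mask = rng.random(A.shape) < 0.6               # observe ~60% of entries
A_hat = soft_impute(A, mask)
print(np.linalg.norm((A_hat - A)[~mask]) / np.linalg.norm(A[~mask]))

The recovered matrix can then be fed to any downstream classifier, which is the pipeline the paper evaluates on the HAR data.
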
A tensor-based method for large-scale blind system identification using segmentation.
Boussé, M.; Debals, O.; and De Lathauwer, L.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2015-2019, Aug 2016.
@InProceedings{7760602,
  author = {M. Boussé and O. Debals and L. {De Lathauwer}},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A tensor-based method for large-scale blind system identification using segmentation},
  year = {2016},
  pages = {2015-2019},
  abstract = {A new method for the blind identification of large-scale finite impulse response (FIR) systems is presented. It exploits the fact that the system coefficients in large-scale problems often depend on much fewer parameters than the total number of entries in the coefficient vectors. We use low-rank models to compactly represent matricized versions of these compressible system coefficients. We show that blind system identification (BSI) then reduces to the computation of a structured tensor decomposition by using a deterministic tensorization technique called segmentation on the observed outputs. This careful exploitation of the low-rank structure enables the unique identification of both the system coefficients and the inputs. The approach does not require the input signals to be statistically independent.},
  keywords = {blind source separation;tensor-based method;large-scale blind system identification;finite impulse response;tensorization technique;Tensile stress;Matrix decomposition;Signal processing;Computational modeling;Europe;Electrical engineering;Electronic mail},
  doi = {10.1109/EUSIPCO.2016.7760602},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256223.pdf},
}

\n \n\n \n \n \n \n \n \n Hyperspectral image clustering using a novel efficient online possibilistic algorithm.\n \n \n \n \n\n\n \n Xenaki, S. D.; Koutroumbas, K. D.; and Rontogiannis, A. A.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2020-2024, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"HyperspectralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760603,
  author = {S. D. Xenaki and K. D. Koutroumbas and A. A. Rontogiannis},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Hyperspectral image clustering using a novel efficient online possibilistic algorithm},
  year = {2016},
  pages = {2020-2024},
  abstract = {In this paper a novel efficient online possibilistic clustering algorithm suitable for hyperspectral image clustering is proposed. The algorithm is an online version of the recently proposed adaptive possibilistic c-means (APCM) algorithm and inherits its basic advantage, that is the ability to adapt the involved parameters during its execution in order to track variations during the clustering formation. In addition, it embodies new procedures for creating new clusters or merging existing ones. The proposed algorithm is much more computationally efficient, compared to its batch ancestor with no degradation of the quality of the resulting clustering, while also, it compares favourably with other online related algorithms. Experimental results on a real data set corroborate the effectiveness of the proposed method.},
  keywords = {hyperspectral imaging;image processing;pattern clustering;hyperspectral image clustering;online possibilistic clustering algorithm;adaptive possibilistic c-mean algorithm;APCM algorithm;Clustering algorithms;Signal processing algorithms;Hyperspectral imaging;Europe;Signal processing;Merging;possibilistic clustering;online clustering;parameter adaptivity;hyperspectral imaging},
  doi = {10.1109/EUSIPCO.2016.7760603},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256229.pdf},
}
@InProceedings{7760604,
  author = {E. Besler and P. Ruiz and R. Molina and A. K. Katsaggelos},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Classification of multiple annotator data using variational Gaussian process inference},
  year = {2016},
  pages = {2025-2029},
  abstract = {In this paper we address supervised learning problems where, instead of having a single annotator who provides the ground truth, multiple annotators, usually with varying degrees of expertise, provide conflicting labels for the same sample. Once Gaussian Process classification has been adapted to this problem we propose and describe how Variational Bayes inference can be used to, given the observed labels, approximate the posterior distribution of the latent classifier and also estimate each annotator's reliability. In the experimental section, we evaluate the proposed method on both generated synthetic and real data, and compare it with state of the art crowd-sourcing methods.},
  keywords = {Bayes methods;Gaussian processes;inference mechanisms;learning (artificial intelligence);pattern classification;software reliability;multiple annotator data classification;variational Gaussian process inference;supervised learning problems;ground truth;posterior distribution;latent classifier;annotator reliability estimation;Crowdsourcing;Gaussian processes;Solid modeling;Europe;Signal processing;Supervised learning;Bayes methods;crowdsourcing;Gaussian process;multiple labels;variational inference;Bayesian modeling;classification},
  doi = {10.1109/EUSIPCO.2016.7760604},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256378.pdf},
}
@InProceedings{7760605,
  author = {S. Gupta and A. D. Dileep and V. Thenkanidiyoor},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Segment-level pyramid match kernels for the classification of varying length patterns of speech using SVMs},
  year = {2016},
  pages = {2030-2034},
  abstract = {Classification of long duration speech, represented as varying length sets of feature vectors using support vector machine (SVM) requires a suitable kernel. In this paper we propose a novel segment-level pyramid match kernel (SLPMK) for the classification of varying length patterns of long duration speech represented as sets of feature vectors. This kernel is designed by partitioning the speech signal into increasingly finer segments and matching the corresponding segments. We study the performance of the SVM-based classifiers using the proposed SLPMKs for speech emotion recognition and speaker identification and compare with that of the SVM-based classifiers using other dynamic kernels.},
  keywords = {emotion recognition;signal classification;speaker recognition;support vector machines;speaker identification;speech emotion recognition;SVM-based classifier;speech signal partitioning;feature vector set;SLPMK;segment-level pyramid match kernel;support vector machine;speech varying length pattern classification;Kernel;Speech;Speech recognition;Image segmentation;Speech coding;Support vector machines;Emotion recognition},
  doi = {10.1109/EUSIPCO.2016.7760605},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252157.pdf},
}
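One plausible reading of the segment-level pyramid match construction (following the well-known spatial pyramid match recipe, with vector-quantized frame labels assumed as input) is sketched below; the authors' kernel may differ in weighting and matching details.

```python
# Hedged sketch of a segment-level pyramid match: split each utterance's
# frame-label sequence into 1, 2, 4, ... segments, histogram each segment,
# and sum histogram-intersection scores with level weights.
import numpy as np

def seg_hist(labels, n_codes, n_seg):
    """Per-segment codeword histograms for a 1-D label sequence."""
    bounds = np.linspace(0, len(labels), n_seg + 1).astype(int)
    return [np.bincount(labels[a:b], minlength=n_codes) / max(b - a, 1)
            for a, b in zip(bounds[:-1], bounds[1:])]

def slpmk(x, y, n_codes, levels=3):
    k = 0.0
    for l in range(levels):
        w = 1.0 / 2 ** (levels - 1 - l)      # finer levels weighted higher
        hx = seg_hist(x, n_codes, 2 ** l)
        hy = seg_hist(y, n_codes, 2 ** l)
        k += w * sum(np.minimum(a, b).sum() for a, b in zip(hx, hy))
    return k

# toy: two utterances as sequences of vector-quantized frame labels
rng = np.random.default_rng(1)
x, y = rng.integers(0, 16, 400), rng.integers(0, 16, 350)
print(slpmk(x, y, n_codes=16))
```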
@InProceedings{7760606,
  author = {Q. Xie and Z. Zhang and X. Pan and H. Zhu},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {fMRI time-series clustering using a mixture of mixtures of Student's-t and Rayleigh distributions},
  year = {2016},
  pages = {2035-2039},
  abstract = {In this paper, a new Markov random field-based mixture model, where each of its components is a mixture of Student's-t and Rayleigh distributions, is proposed for clustering fMRI time-series. By introducing the non-symmetric Rayleigh distribution, the proposed algorithm has flexibility to fit various types of observed time-series. Moreover, our method incorporates Markov random field so that the spatial relationships between neighboring voxels are considered, which makes the presented model more robust to noise, and that preserves more details of the clustering results compared with other symmetric distribution-based algorithms. Additionally, the expectation maximization algorithm is directly implemented to estimate the parameter set by maximizing the data log-likelihood function. The proposed framework is evaluated on real fMRI time-series, and the quantitatively compared results are demonstrated in terms of effectiveness and accuracy.},
  keywords = {expectation-maximisation algorithm;image processing;magnetic resonance imaging;Markov processes;mixture models;pattern clustering;statistical distributions;time series;fMRI time-series clustering;Student-t distribution mixture;Rayleigh distribution mixture;Markov random field-based mixture model;nonsymmetric Rayleigh distribution;data log-likelihood function maximization;parameter set estimation;expectation maximization algorithm;Mixture models;Signal processing algorithms;Clustering algorithms;Linear programming;Europe;Signal processing;Markov processes},
  doi = {10.1109/EUSIPCO.2016.7760606},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570246266.pdf},
}
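The emission density at the core of the model — a two-component mixture of a Student's-t and a (non-symmetric) Rayleigh — can be evaluated directly with scipy. The sketch below uses made-up parameters and omits the MRF prior and EM updates that are the paper's contribution.

```python
# Sketch of a two-component Student's-t / Rayleigh mixture density, the
# per-cluster building block the paper combines with an MRF spatial prior.
# Parameters below are made up for illustration.
import numpy as np
from scipy import stats

def mixture_pdf(x, pi_t, df, loc, scale_t, scale_r):
    """pi_t * Student-t(df, loc, scale_t) + (1 - pi_t) * Rayleigh(scale_r)."""
    return (pi_t * stats.t.pdf(x, df, loc=loc, scale=scale_t)
            + (1 - pi_t) * stats.rayleigh.pdf(x, scale=scale_r))

x = np.linspace(0, 6, 7)
print(np.round(mixture_pdf(x, pi_t=0.6, df=3, loc=2.0,
                           scale_t=0.5, scale_r=1.0), 4))
```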
@InProceedings{7760607,
  author = {D. F. G. Coelho and R. M. Rangayyan and V. S. Dimitrov},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Detection of neovascularization near the optic disk due to diabetic retinopathy},
  year = {2016},
  pages = {2040-2044},
  abstract = {We propose a technique for detection of neovascularization near the optic disk due to diabetic retinopathy. Images of the retinal fundus are analyzed using a measure of angular spread of the Fourier power spectrum of the gradient magnitude of the original images using the horizontal and vertical Prewitt operators. The entropy of the angular spread of the Fourier power spectrum and spatial variance are adopted to distinguish normal optic disks from those affected by neovascularization. The two-sided Kolmogorov-Smirnov nonparametric test is used to evaluate the significance of the difference of entropy between normal and abnormal optic disks. Based on the computed measures, we employ a linear classifier to discriminate normal from abnormal optic disks. The proposed method was able to classify a small set of five normal and five neovascularization cases with 100% accuracy.},
  keywords = {biomedical optical imaging;diseases;entropy;eye;Fourier analysis;neovascularization detection;optic disk;diabetic retinopathy;retinal fundus image analysis;angular spread;Fourier power spectrum;gradient magnitude;horizontal Prewitt operators;vertical Prewitt operators;entropy;two-sided Kolmogorov-Smirnov nonparametric test;Diabetes;Retinopathy;Optical imaging;Optical signal processing;Retina;Entropy;Biomedical optical imaging;Retinal Image Analysis;Fourier Spectral Analysis;Neovascularization},
  doi = {10.1109/EUSIPCO.2016.7760607},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570247926.pdf},
}
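A hedged sketch of the descriptor pipeline this abstract outlines follows: Prewitt gradient magnitude, 2-D Fourier power spectrum, power binned by spectral angle, and the Shannon entropy of that angular distribution. The bin count and normalization are assumptions, not the paper's stated choices.

```python
# Angular-spread-of-spectrum sketch: gradient magnitude -> FFT power
# spectrum -> power binned by spectral angle -> entropy of that histogram.
import numpy as np
from scipy.ndimage import prewitt

def angular_entropy(img, n_bins=180):
    gx, gy = prewitt(img, axis=1), prewitt(img, axis=0)   # horizontal/vertical Prewitt
    grad = np.hypot(gx, gy)
    P = np.abs(np.fft.fftshift(np.fft.fft2(grad))) ** 2   # power spectrum
    h, w = P.shape
    fy, fx = np.indices(P.shape)
    ang = np.arctan2(fy - h // 2, fx - w // 2) % np.pi    # fold angles to [0, pi)
    idx = (ang / np.pi * n_bins).astype(int).ravel() % n_bins
    spread = np.bincount(idx, weights=P.ravel(), minlength=n_bins)
    p = spread / spread.sum()
    return -np.sum(p[p > 0] * np.log2(p[p > 0]))          # Shannon entropy (bits)

print(angular_entropy(np.random.default_rng(2).random((128, 128))))
```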
@InProceedings{7760608,
  author = {M. Qaraqe and M. Ismail and E. Serpedin},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Combined matching pursuit and Wigner-Ville Distribution analysis for the discrimination of ictal heart rate variability},
  year = {2016},
  pages = {2045-2049},
  abstract = {This paper presents a novel method for the discrimination of ictal heart rate variability (HRV). Traditionally, the analysis of the non-linear and non-stationary electrocardiogram (ECG) signal is limited to the time-domain or frequency-domain. This severely limits the quality of features that can be extracted from the ECG signal. In this work, HRV extracted from ECG is analyzed by combining the Matching-Pursuit (MP) and Wigner-Ville Distribution (WVD) algorithms in order to obtain a high quality time-frequency distribution of the HRV signal and to effectively extract meaningful HRV features representative of seizure and non-seizure states. The proposed method is tested on clinical patients and the results demonstrate effective discrimination between ictal HRV features and non-ictal HRV features.},
  keywords = {electrocardiography;feature extraction;iterative methods;medical disorders;medical signal processing;patient diagnosis;time-frequency analysis;Wigner distribution;matching pursuit;Wigner-Ville distribution analysis;WVD algorithms;ictal heart rate variability;HRV signal;electrocardiogram signal;ECG signal;time-domain;frequency-domain;time-frequency distribution;nonseizure states;clinical patients;nonictal HRV features;Heart rate variability;Electrocardiography;Feature extraction;Time-frequency analysis;Epilepsy;Signal processing algorithms;Interference},
  doi = {10.1109/EUSIPCO.2016.7760608},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250900.pdf},
}
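For reference, a textbook discrete Wigner-Ville Distribution can be computed with an FFT over the instantaneous autocorrelation lag, as sketched below; the matching-pursuit decomposition and the cross-term handling it enables are not reproduced here.

```python
# Minimal discrete Wigner-Ville Distribution sketch (no matching pursuit,
# no cross-term suppression). A textbook construction, not the authors'
# combined MP-WVD pipeline.
import numpy as np
from scipy.signal import hilbert

def wvd(x):
    """Discrete WVD of a real signal; rows = frequency bins, cols = time."""
    z = hilbert(x)                      # analytic signal to limit aliasing
    N = len(z)
    W = np.empty((N, N))
    for n in range(N):
        tmax = min(n, N - 1 - n)
        tau = np.arange(-tmax, tmax + 1)
        kern = np.zeros(N, dtype=complex)
        kern[tau % N] = z[n + tau] * np.conj(z[n - tau])   # instantaneous ACF
        W[:, n] = np.fft.fft(kern).real
    return W                            # bin k maps to frequency k*fs/(2N)

fs, t = 256, np.arange(256) / 256
chirp = np.cos(2 * np.pi * (20 * t + 40 * t ** 2))          # 20 -> 100 Hz chirp
W = wvd(chirp)
print(W.shape, W[:, 128].argmax() * fs / (2 * len(t)))      # ~60 Hz mid-signal
```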
@InProceedings{7760609,
  author = {J. Wu and J. Shi and Y. Li and J. Suo and Q. Zhang},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Histopathological image classification using random binary hashing based PCANet and bilinear classifier},
  year = {2016},
  pages = {2050-2054},
  abstract = {The computer-aided histopathological image diagnosis has attracted considerable attention. Principal component analysis network (PCANet) is a novel deep learning algorithm with a simple network architecture and parameters. In this work, we propose a random binary hashing (RBH) based PCANet (RBH-PCANet), which can generate multiple randomly encoded binary codes to provide more information. Moreover, we rearrange the local features derived from PCANet to the matrix-form features in order to reduce feature dimensionality, and then we apply the low-rank bilinear classifier (LRBC) to perform effective classification for matrix features. The proposed classification framework using RBH-PCANet and LRBC (RBH-PCANet-LRBC) is adopted for histopathological image classification. The experimental results on both a hepatocellular carcinoma image dataset and a breast cancer image dataset show that the RBH-PCANet-LRBC algorithm achieves best performance compared with other unsupervised deep learning algorithms.},
  keywords = {cancer;feature extraction;image classification;image coding;learning (artificial intelligence);matrix algebra;medical image processing;principal component analysis;random processes;computer-aided histopathological image diagnosis;principal component analysis network;deep learning;random binary hashing based PCANet;multiple randomly encoded binary codes;local features;matrix-form features;feature dimensionality;low-rank bilinear classifier;matrix features;histopathological image classification;hepatocellular carcinoma image dataset;breast cancer image dataset;RBH-PCANet-LRBC;Classification algorithms;Signal processing algorithms;Principal component analysis;Histograms;Breast cancer;Europe;Signal processing;PCANet;Random Binary Hashing;Low Rank Bilinear Classifier;Histopathological Image},
  doi = {10.1109/EUSIPCO.2016.7760609},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250914.pdf},
}
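Random binary hashing in the generic, SimHash-style sense — sign-binarize random projections and pack the bits into integer codes for histogramming — can be sketched as follows; the paper's exact RBH encoding of PCANet filter responses may differ.

```python
# Generic random-hyperplane binary hashing sketch: real-valued features are
# binarized against random projections and packed into integer codes, which
# are then histogrammed.
import numpy as np

def random_binary_hash(F, n_bits, seed=0):
    """F: (n_samples, n_features) -> integer codes in [0, 2**n_bits)."""
    rng = np.random.default_rng(seed)
    H = rng.normal(size=(F.shape[1], n_bits))      # random hyperplanes
    bits = (F @ H > 0).astype(np.uint32)           # sign binarization
    return bits @ (1 << np.arange(n_bits, dtype=np.uint32))  # pack to ints

rng = np.random.default_rng(3)
F = rng.normal(size=(1000, 8))                     # e.g., PCA filter responses
codes = random_binary_hash(F, n_bits=8)
hist = np.bincount(codes, minlength=256) / len(codes)  # code histogram feature
print(hist.max())
```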
@InProceedings{7760610,
  author = {A. Diaz and S. Morales and V. Naranjo and P. Alcocer and A. Lanzagorta},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Glaucoma diagnosis by means of optic cup feature analysis in color fundus images},
  year = {2016},
  pages = {2055-2059},
  abstract = {Glaucoma is an asymptomatic eye disease and one of the major causes of irreversible blindness worldwide. For this reason, there have been significant advances in automatic screening tools for early detection. In this paper, an automatic glaucoma diagnosis algorithm based on retinal fundus image is presented. This algorithm uses anatomical characteristics such as the position of the vessels and the cup within the optic nerve. Using several color spaces and the Stochastic Watershed transformation, different characteristics of the optic nerve were analyzed in order to distinguish between a normal and a glaucomatous fundus. The proposed algorithm was evaluated on 53 images (24 normal and 29 glaucomatous images). The specificity and sensitivity obtained by the proposed algorithm are 0.81 and 0.87 using Luv color space, which means considerable performance in diagnosis systems.},
  keywords = {biomedical optical imaging;blood vessels;diseases;eye;feature extraction;image colour analysis;medical image processing;neurophysiology;stochastic processes;optic cup feature analysis;color fundus images;asymptomatic eye disease;irreversible blindness;automatic screening tools;automatic glaucoma diagnosis algorithm;retinal fundus image;anatomical characteristics;vessel position;optic nerve;color spaces;stochastic watershed transformation;glaucomatous fundus;Luv color space;Optical imaging;Image color analysis;Optical signal processing;Signal processing algorithms;Biomedical optical imaging;Image segmentation;Optical sensors},
  doi = {10.1109/EUSIPCO.2016.7760610},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251540.pdf},
}
@InProceedings{7760611,
  author = {M. Zhang and M. Haritopoulos and A. K. Nandi},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Fetal ECG subspace estimation based on cyclostationarity},
  year = {2016},
  pages = {2060-2064},
  abstract = {In this paper, we propose a strategy to estimate a Fetal Electrocardiogram (FECG) subspace from a set of mixed ECG recordings from the thoracic and abdominal electrodes attached on a pregnant woman. The ECGs from an expectant woman contain FECG that can provide valuable information for fetal health monitoring, such as the fetal heart rate (FHR). After applying blind source separation (BSS) methods to mixed ECG, independent components are obtained. The main purpose of this paper is to classify an FECG group from all of these components which can be classified as FECG, MECG and noise according to the features of signals. Inspired by the concept of multidimensional independent component analysis (MICA) and to automate the classification task, we propose a procedure based on cyclostationarity of FECGs, in particular, an integrated Cyclic Coherence. The method is validated on real world DaISy dataset and the results are promising.},
  keywords = {biomedical electrodes;blind source separation;electrocardiography;feature extraction;independent component analysis;medical signal processing;obstetrics;patient monitoring;signal classification;signal denoising;fetal ECG subspace estimation;cyclostationarity;fetal electrocardiogram subspace;mixed ECG recordings;thoracic electrodes;abdominal electrodes;pregnant woman;expectant woman;fetal health monitoring;fetal heart rate;blind source separation;FECG group classification;noise;signal features;multidimensional independent component analysis;classification task;integrated cyclic coherence;real world DaISy dataset;Electrocardiography;Coherence;Blind source separation;Electrodes;Integrated circuits;Wavelet transforms;Fetal Electrocardiogram;Fetal Heart Rate;Cyclostationarity;Cyclic Coherence;Blind Source Separation;Multidimensional Independent Component Analysis},
  doi = {10.1109/EUSIPCO.2016.7760611},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251594.pdf},
}
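The cyclostationarity feature underlying cyclic coherence can be probed with a standard cyclic autocorrelation estimator, sketched below on a toy amplitude-modulated signal; the paper's integrated Cyclic Coherence adds a normalization and an integration over spectral frequency that are not shown here.

```python
# Standard cyclic autocorrelation estimate at cycle frequency alpha:
# R_x^alpha(tau) ~ mean_n x[n+tau] * conj(x[n]) * exp(-j*2*pi*alpha*n/fs).
import numpy as np

def cyclic_autocorr(x, alpha, fs, max_lag):
    n = np.arange(len(x) - max_lag)
    rot = np.exp(-2j * np.pi * alpha * n / fs)     # demodulate at alpha
    return np.array([np.mean(x[n + tau] * np.conj(x[n]) * rot)
                     for tau in range(max_lag + 1)])

# toy "heartbeat-like" signal: an AM carrier is cyclostationary at the
# modulation rate (1.2 Hz here) but not at arbitrary frequencies
fs = 250
t = np.arange(0, 20, 1 / fs)
x = (1 + np.cos(2 * np.pi * 1.2 * t)) * np.cos(2 * np.pi * 25 * t)
for a in (0.0, 1.2, 2.0):                          # candidate cycle freqs (Hz)
    print(a, np.abs(cyclic_autocorr(x, a, fs, max_lag=50)).max())
```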
@InProceedings{7760612,
  author = {S. Wang and S. Huang and Z. He},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {MIMO radar waveform design for transmit beampattern synthesis},
  year = {2016},
  pages = {2065-2069},
  abstract = {In this paper, the waveform design for transmit beampattern synthesis is studied in MIMO radar systems with colocated antennas. The waveform transmitted at each antenna is defined as a weighted sum of a set of discrete prolate spheroidal (DPS) sequences which have good orthogonal and band-limited properties. Assume that different transmit antennas use the same set of DPS sequences, while the weighting factors are variable which allows the correlation between different waveforms to be flexible and varied with the weighting factors. Optimum waveforms are designed to achieve a desired transmit beampattern under the constraint of a fixed total transmit energy. Unlike a traditional process, in which the waveform covariance matrix is designed in the first step and then the optimal waveforms are synthesised in the second step based on the designed waveform covariance matrix, it is shown in this work that the waveforms constructed by the DPS sequences need only one step to design the optimum waveforms. In this paper, we propose a new waveform design method for transmit beampattern synthesis which can efficiently match a desired transmit beampattern and control the power distributed at each beam in beampattern simultaneously. The choice of the number of the DPS sequences is also analyzed. Numerical simulations are provided to compare the performance of proposed method with that of the traditional methods.},
  keywords = {antenna radiation patterns;covariance matrices;MIMO radar;radar antennas;radar signal processing;transmitting antennas;MIMO radar waveform design;transmit beampattern synthesis;colocated antennas;discrete prolate spheroidal sequence;DPS sequence;orthogonal properties;band-limited properties;transmit antennas;fixed total transmit energy constraint;waveform covariance matrix;numerical simulation;Radar antennas;MIMO radar;Optimization;Covariance matrices;Transmitting antennas;Design methodology;Multiple-input multiple-output (MIMO) radar;waveform design;DPS sequences;transmit beampattern},
  doi = {10.1109/EUSIPCO.2016.7760612},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250796.pdf},
}
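scipy ships the discrete prolate spheroidal (Slepian) sequences the waveforms are built from, so the weighted-sum construction is easy to demonstrate. The weights below are random placeholders, whereas the paper optimizes them against a desired beampattern.

```python
# Sketch of the waveform construction: each antenna's waveform is a weighted
# sum of a common set of DPS (Slepian) sequences; the waveform correlations
# are then fully determined by the weight matrix.
import numpy as np
from scipy.signal.windows import dpss

N, NW, K = 256, 4, 6                      # samples, time-bandwidth, # sequences
S = dpss(N, NW, Kmax=K)                   # (K, N) orthonormal DPS sequences
print(np.round(S @ S.T, 3).diagonal())    # orthonormality check

rng = np.random.default_rng(9)
W = rng.normal(size=(4, K))               # per-antenna weights (4 antennas); placeholders
waveforms = W @ S                         # (4, N) transmit waveforms
cross = waveforms @ waveforms.T           # waveform correlations, set by W alone
print(np.round(cross, 2))
```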
@InProceedings{7760613,
  author = {S. Xu and K. Doğançay and H. Hmam},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Distributed PLKF using delayed information sharing for 3D AOA target tracking},
  year = {2016},
  pages = {2070-2074},
  abstract = {This paper investigates the problem of distributed angle-of-arrival (AOA) target tracking in 3D space using unmanned aerial vehicles (UAVs). Because of communication constraints arising from distance and bandwidth constraints in a distributed UAV system, some UAVs may not be able to share their information with all other UAVs. This will lead to reduced tracking performance. In order to improve the estimation performance, a 3D distributed pseudolinear Kalman filter (DPLKF) using delayed information through intermediate UAVs is proposed. To track a moving target, a new estimation method using 1-step delayed information is developed which has low computational complexity. The communication topology with delayed information sharing is analyzed. In order to reduce communication traffic, a direct neighbors selection strategy is proposed. The effectiveness of the proposed estimation strategy is demonstrated with simulation examples.},
  keywords = {autonomous aerial vehicles;computational complexity;direction-of-arrival estimation;Kalman filters;target tracking;distributed angle-of-arrival;unmanned aerial vehicles;distributed UAV system;3D distributed pseudolinear Kalman filter;computational complexity;3D AOA target tracking;delayed information sharing;distributed PLKF;Estimation;Target tracking;Three-dimensional displays;Topology;Information processing;Australia;Optimization},
  doi = {10.1109/EUSIPCO.2016.7760613},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251856.pdf},
}
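The pseudolinear Kalman filter builds on the standard linear KF predict/update recursion, sketched below for a generic constant-velocity model; the AOA pseudolinear measurement construction and the delayed-information fusion across UAVs are not reproduced here.

```python
# Generic linear Kalman filter step (predict + update), the backbone of a
# pseudolinear KF once AOA measurements are recast in linear form.
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    x_pred = F @ x                         # state prediction
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)  # measurement update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy 2-D constant-velocity model with a position measurement
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
x, P = np.zeros(4), 10 * np.eye(4)
x, P = kf_step(x, P, np.array([3.0, 4.0]), F, 0.01 * np.eye(4), H, 0.5 * np.eye(2))
print(x)
```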
@InProceedings{7760614,
  author = {B. Erol and M. G. Amin},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Fall motion detection using combined range and Doppler features},
  year = {2016},
  pages = {2075-2080},
  abstract = {Feature selection based on combined Doppler and range information improves fall detection and enables better discrimination against similar high Doppler non-rhythmic motions, such as sitting. A fall is typically characterized by an extension in range beyond that associated with sitting, which is determined by the seat horizontal depth. In this paper, we demonstrate, using time-frequency (TF) spectrograms, that range-Doppler radar plays a fundamental and important role in motion classification for assisted living applications. It reduces false alarms along with the associated cost in the unnecessary deployment of the first responders. This reduction is considered vital for the development of in-home radar monitoring and for casting it as a viable technology for aging-in-place.},
  keywords = {assisted living;Doppler radar;feature extraction;geriatrics;motion compensation;time-frequency analysis;fall motion detection;Doppler features;feature selection;Doppler nonrhythmic motions;seat horizontal depth;time-frequency spectrograms;range-Doppler radar;motion classification;assisted living applications;in-home radar monitoring;aging-in-place;Doppler effect;Doppler radar;Feature extraction;Time-frequency analysis;Radar detection;Radar cross-sections},
  doi = {10.1109/EUSIPCO.2016.7760614},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252332.pdf},
}
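A micro-Doppler spectrogram and a crude Doppler-extent feature can be computed with scipy.signal.spectrogram, as in the following sketch on a simulated fall-like Doppler burst; real radar returns and the paper's combined range features are not reproduced.

```python
# Minimal TF spectrogram sketch: simulate a Doppler burst, compute its
# spectrogram, and read off the dominant Doppler per time slice.
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0, 4, 1 / fs)
fd = 180 * np.exp(-((t - 2) / 0.3) ** 2)            # fall-like Doppler burst (Hz)
phase = 2 * np.pi * np.cumsum(fd) / fs              # integrate instantaneous freq
x = np.cos(phase) + 0.1 * np.random.default_rng(4).normal(size=t.size)

f, tt, S = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
env = f[S.argmax(axis=0)]                           # dominant Doppler per slice
print(f"peak Doppler ~ {env.max():.0f} Hz at t = {tt[env.argmax()]:.2f} s")
```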
@InProceedings{7760615,
  author = {F. M. Vieira and F. Vincent and J. Tourneret and D. Bonacci and M. Spigai and M. Ansart and J. Richard},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Ship detection using SAR and AIS raw data for maritime surveillance},
  year = {2016},
  pages = {2081-2085},
  abstract = {This paper studies a maritime vessel detection method based on the fusion of data obtained from two different sensors, namely a synthetic aperture radar (SAR) and an automatic identification system (AIS) embedded in a satellite. Contrary to most methods widely used in the literature, the present work proposes to jointly exploit information from SAR and AIS raw data in order to detect the absence or presence of a ship using a binary hypothesis testing problem. This detection problem is handled by a generalized likelihood ratio detector whose test statistics has a simple closed form expression. The distribution of the test statistics is derived under both hypotheses, allowing the corresponding receiver operational characteristics (ROCs) to be computed. The ROCs are then used to compare the detection performance obtained with different sensors showing the interest of combining information from AIS and radar.},
  keywords = {marine radar;radar detection;search radar;sensor fusion;ships;synthetic aperture radar;ship detection;SAR raw data;AIS raw data;maritime surveillance;maritime vessel detection method;data fusion;synthetic aperture radar;automatic identification system;binary hypothesis testing problem;generalized likelihood ratio detector;test statistics;receiver operational characteristics;Artificial intelligence;Marine vehicles;Synthetic aperture radar;Satellites;Detectors;multi-sensor fusion;detection;automatic identification system (AIS);synthetic aperture radar (SAR);maritime surveillance},
  doi = {10.1109/EUSIPCO.2016.7760615},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252342.pdf},
}
@InProceedings{7760616,
  author = {K. Doğançay and N. H. Nguyen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Low-complexity weighted pseudolinear estimator for TDOA localization with systematic error correction},
  year = {2016},
  pages = {2086-2090},
  abstract = {A closed-form pseudolinear estimation algorithm for time-difference-of-arrival (TDOA) emitter localization was previously proposed by replacing TDOA hyperbolae with their asymptotes, effectively transforming the TDOA localization problem into a bearings-only localization problem. Despite having a good mean-square error (MSE) performance and low computational complexity, this algorithm was observed to suffer from systematic errors arising from the large misalignment between TDOA hyperbolae and asymptotes particularly in the near field and for poor geometries, resulting in increased estimation bias. To address the bias problem, this paper presents a systematic error correction technique based on a two-stage estimation process in which the estimation errors due to asymptote misalignment are computed from an initial location estimate and then subsequently corrected. The superior performance of the new algorithm compared with the maximum likelihood estimator is demonstrated with simulation examples.},
  keywords = {error correction;estimation theory;mean square error methods;time-of-arrival estimation;asymptote misalignment;two-stage estimation process;low computational complexity;mean-square error performance;closed-form pseudolinear estimation algorithm;systematic error correction technique;time-difference-of-arrival emitter localization;TDOA localization;low-complexity weighted pseudolinear estimator;Systematics;Maximum likelihood estimation;Error correction;Europe;Geometry},
  doi = {10.1109/EUSIPCO.2016.7760616},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252363.pdf},
}
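The bearings-only pseudolinear estimator that the asymptote substitution reduces TDOA localization to is a plain linear least-squares problem; a minimal sketch follows, without the weighting or the paper's two-stage bias correction.

```python
# Classic bearings-only pseudolinear least-squares estimator: each bearing
# theta_k from sensor s_k gives the linear equation a_k^T p = a_k^T s_k with
# a_k = [sin(theta_k), -cos(theta_k)]; stack all K equations and solve.
import numpy as np

def pseudolinear_ls(sensors, bearings):
    """sensors: (K, 2) positions; bearings: (K,) angles (rad) to the emitter."""
    A = np.stack([np.sin(bearings), -np.cos(bearings)], axis=1)
    b = np.sum(A * sensors, axis=1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

rng = np.random.default_rng(5)
p_true = np.array([400.0, 300.0])
sensors = rng.uniform(0, 100, size=(6, 2))
bearings = np.arctan2(p_true[1] - sensors[:, 1], p_true[0] - sensors[:, 0])
bearings += np.deg2rad(0.5) * rng.normal(size=6)    # 0.5 deg bearing noise
print(pseudolinear_ls(sensors, bearings))           # ~[400, 300], biased for far targets
```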
@InProceedings{7760617,
  author = {J. Liu and H. Li and B. Himed},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Exploiting persymmetry for adaptive detection in distributed MIMO radar},
  year = {2016},
  pages = {2091-2095},
  abstract = {We consider the adaptive detection problem in colored Gaussian noise with unknown persymmetric covariance matrix in a multiple-input-multiple-output (MIMO) radar with spatially dispersed antennas. To this end, a set of secondary data for each transmit-receive pair is assumed to be available. MIMO versions of the persymmetric generalized likelihood ratio test (MIMO-PGLRT) detector and the persymmetric sampler matrix inversion (MIMO-PSMI) detector are proposed. Compared to the MIMO-PGLRT detector, the MIMO-PSMI detector has a simple form and is computationally more efficient. Numerical examples are provided to demonstrate that the proposed two detection algorithms can significantly alleviate the requirement of the amount of secondary data, and allow for a noticeable improvement in detection performance.},
  keywords = {antenna arrays;covariance matrices;Gaussian noise;MIMO radar;multiple-input-multiple-output radar;distributed MIMO radar;adaptive detection problem;colored Gaussian noise;unknown persymmetric covariance matrix;spatially dispersed antennas;transmit-receive pair;persymmetric generalized likelihood ratio test detector;persymmetric sampler matrix inversion detector;Detectors;Covariance matrices;Radar antennas;MIMO radar;MIMO;Training data;Receiving antennas;Adaptive detection;multiple-input-multiple-output (MIMO) radar;persymmetry},
  doi = {10.1109/EUSIPCO.2016.7760617},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252381.pdf},
}
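The persymmetry being exploited is the structural identity R = JR*J (J the exchange matrix), which lets the sample covariance be averaged with its flipped conjugate, roughly doubling the effective training data. A minimal sketch of that standard trick, not of the paper's detectors themselves:

```python
# Persymmetric sample covariance: enforce R = J R* J by averaging the SCM
# with its exchanged conjugate; the constrained estimate is never farther
# from a truly persymmetric R in Frobenius norm.
import numpy as np

def persymmetric_scm(X):
    """X: (N, K) -- K training snapshots of dimension N."""
    N, K = X.shape
    R = X @ X.conj().T / K                  # sample covariance matrix
    J = np.eye(N)[::-1]                     # exchange (anti-identity) matrix
    return 0.5 * (R + J @ R.conj() @ J)     # project onto persymmetric set

rng = np.random.default_rng(6)
N, K = 8, 12
R_true = np.array([[0.9 ** abs(i - j) for j in range(N)] for i in range(N)])
L = np.linalg.cholesky(R_true)              # Toeplitz R_true is persymmetric
X = L @ rng.normal(size=(N, K))
for est in (X @ X.T / K, persymmetric_scm(X)):
    print(np.linalg.norm(est - R_true))     # persymmetric estimate is closer
```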
@InProceedings{7760618,
  author = {S. Adalbjörnsson and J. Swärd and M. Ö. Berg and S. {Vang Andersen} and A. Jakobsson},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Conjugate priors for Gaussian emission PLSA recommender systems},
  year = {2016},
  pages = {2096-2100},
  abstract = {Collaborative filtering for recommender systems seeks to learn and predict user preferences for a collection of items by identifying similarities between users on the basis of their past interest or interaction with the items in question. In this work, we present a conjugate prior regularized extension of Hofmann's Gaussian emission probabilistic latent semantic analysis model, able to overcome the over-fitting problem restricting the performance of the earlier formulation. Furthermore, in experiments using the EachMovie and MovieLens data sets, it is shown that the proposed regularized model achieves significantly improved prediction accuracy of user preferences as compared to the latent semantic analysis model without priors.},
  keywords = {collaborative filtering;recommender systems;Gaussian emission PLSA recommender system;collaborative filtering;user preference prediction;prior conjugation;Hofmann Gaussian emission probabilistic latent semantic analysis model;EachMovie dataset;MovieLens dataset;Signal processing algorithms;Recommender systems;Signal processing;Data models;Europe;Collaboration;Probabilistic logic;Recommender systems;collaborative filtering;probabilistic matrix factorization},
  doi = {10.1109/EUSIPCO.2016.7760618},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252102.pdf},
}
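Hofmann's Gaussian-emission pLSA models a rating as a user's topic mixture over per-item Gaussian emissions, so the predicted rating is a mixture mean; the trivial sketch below shows only that prediction rule with made-up parameters, not the conjugate-prior regularized EM that is the paper's contribution.

```python
# Prediction rule sketch for Gaussian-emission pLSA:
# p(r | u, i) = sum_z p(z | u) * N(r; mu_{i,z}, sigma_{i,z}^2),
# so E[r | u, i] = sum_z p(z | u) * mu_{i,z}. Parameters are hypothetical.
import numpy as np

def predict_rating(p_z_given_u, mu_iz):
    """p_z_given_u: (Z,) user topic mixture; mu_iz: (Z,) item means per topic."""
    return float(p_z_given_u @ mu_iz)

p_z_given_u = np.array([0.7, 0.2, 0.1])        # hypothetical user profile
mu_iz = np.array([4.5, 2.0, 3.0])              # hypothetical item means per topic
print(predict_rating(p_z_given_u, mu_iz))      # 3.85
```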
@InProceedings{7760619,
  author = {S. L. Tapia and R. Molina and N. P. {de la Blanca}},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Detection and localization of objects in Passive Millimeter Wave Images},
  year = {2016},
  pages = {2101-2105},
  abstract = {Passive Millimeter Wave Images (PMMWI) can be used to detect and localize objects concealed under clothing. Unfortunately, the quality of the acquired images and the unknown position, shape, and size of the hidden objects render difficult this task. In this paper we propose a method that combines image processing and statistical machine learning techniques to solve this localization/detection problem. The proposed approach is used on an image database containing a broad variety of sizes, types, and localizations of hidden objects. Experiments are presented in terms of the true positive and false positive detection rates. Due to its high performance and low computational cost, the proposed method can be used in real time applications.},
  keywords = {learning (artificial intelligence);millimetre wave imaging;object detection;image database;statistical machine learning techniques;image processing;PMMWI;passive millimeter wave images;object localization;object detection;Feature extraction;Training;Support vector machines;Europe;Databases;Radio frequency;Signal processing;Millimeter wave imaging;object detection;machine learning;image processing;security},
  doi = {10.1109/EUSIPCO.2016.7760619},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252180.pdf},
}
\n
\n\n\n
\n Passive Millimeter Wave Images (PMMWI) can be used to detect and localize objects concealed under clothing. Unfortunately, the quality of the acquired images and the unknown position, shape, and size of the hidden objects render difficult this task. In this paper we propose a method that combines image processing and statistical machine learning techniques to solve this localization/detection problem. The proposed approach is used on an image database containing a broad variety of sizes, types, and localizations of hidden objects. Experiments are presented in terms of the true positive and false positive detection rates. Due to its high performance and low computational cost, the proposed method can be used in real time applications.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A fixed-point algorithm for estimating power means of positive definite matrices.\n \n \n \n \n\n\n \n Congedo, M.; Phlypo, R.; and Barachant, A.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2106-2110, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760620,\n  author = {M. Congedo and R. Phlypo and A. Barachant},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {A fixed-point algorithm for estimating power means of positive definite matrices},\n  year = {2016},\n  pages = {2106-2110},\n  abstract = {The estimation of means of data points lying on the Riemannian manifold of symmetric positive-definite (SPD) matrices is of great utility in classification problems and is currently heavily studied. The power means of SPD matrices with exponent p in the interval [-1, 1] interpolate in between the Harmonic (p = -1) and the Arithmetic mean (p = 1), while the Geometric (Karcher) mean corresponds to their limit evaluated at 0. In this article we present a simple fixed point algorithm for estimating means along this whole continuum. The convergence rate of the proposed algorithm for p = ±0.5 deteriorates very little with the number and dimension of points given as input. Along the whole continuum it is also robust with respect to the dispersion of the points on the manifold. Thus, the proposed algorithm allows the efficient estimation of the whole family of power means, including the geometric mean.},\n  keywords = {matrix algebra;signal classification;Karcher mean;geometric mean;harmonic mean;arithmetic mean;classification problem;SPD matrices;symmetric positive-definite matrices;Riemannian manifold;data point mean estimation;power mean estimation;fixed-point algorithm;Manifolds;Convergence;Signal processing algorithms;Symmetric matrices;Signal processing;Covariance matrices;Europe;Power Mean;Geometric Mean;High Dimension;Riemannian Manifold;Symmetric Positive-Definite Matrix},\n  doi = {10.1109/EUSIPCO.2016.7760620},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252276.pdf},\n}\n\n
\n
\n\n\n
\n The estimation of means of data points lying on the Riemannian manifold of symmetric positive-definite (SPD) matrices is of great utility in classification problems and is currently heavily studied. The power means of SPD matrices with exponent p in the interval [-1, 1] interpolate between the harmonic mean (p = -1) and the arithmetic mean (p = 1), while the geometric (Karcher) mean corresponds to their limit as p tends to 0. In this article we present a simple fixed-point algorithm for estimating means along this whole continuum. The convergence rate of the proposed algorithm for p = ±0.5 deteriorates very little with the number and dimension of the input points. Along the whole continuum it is also robust with respect to the dispersion of the points on the manifold. Thus, the proposed algorithm allows the efficient estimation of the whole family of power means, including the geometric mean.\n
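As a rough illustration of the kind of iteration involved (not the authors' exact algorithm, which adds further normalizations for speed and stability), here is a minimal numpy sketch of the naive fixed-point iteration for the defining equation X = sum_i w_i (X #_p C_i) of the power mean; all function and parameter names are ours.

```python
import numpy as np

def spd_power(S, p):
    """Fractional power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * w ** p) @ V.T

def geodesic(X, A, p):
    """Geodesic X #_p A = X^(1/2) (X^(-1/2) A X^(-1/2))^p X^(1/2) on the SPD manifold."""
    Xh, Xmh = spd_power(X, 0.5), spd_power(X, -0.5)
    return Xh @ spd_power(Xmh @ A @ Xmh, p) @ Xh

def power_mean(mats, p, n_iter=100, tol=1e-10):
    """Naive fixed-point iteration X <- sum_i w_i (X #_p C_i), valid for p in (0, 1]."""
    w = 1.0 / len(mats)
    X = w * sum(mats)  # arithmetic mean as the starting point
    for _ in range(n_iter):
        X_new = w * sum(geodesic(X, C, p) for C in mats)
        if np.linalg.norm(X_new - X) < tol:
            break
        X = X_new
    return X
```

For p < 0 one can use the duality P_{-p}({C_i}) = P_p({C_i^{-1}})^{-1}, and the geometric mean is approached as p tends to 0.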
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic pigment identification on Roman Egyptian paintings by using sparse modeling of hyperspectral images.\n \n \n \n \n\n\n \n Rohani, N.; Salvant, J.; Bahaadini, S.; Cossairt, O.; Walton, M.; and Katsaggelos, A.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2111-2115, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760621,\n  author = {N. Rohani and J. Salvant and S. Bahaadini and O. Cossairt and M. Walton and A. Katsaggelos},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic pigment identification on roman Egyptian paintings by using sparse modeling of hyperspectral images},\n  year = {2016},\n  pages = {2111-2115},\n  abstract = {In this paper, we study the problem of automatic identification of pigments applied to paintings using hyperspectral reflectance data. Here, we cast the problem of pigment identification in a novel way by decomposing the spectrum into pure pigments. The pure pigment exemplars, chosen and prepared in our laboratory based on historic sources and archaeological examples, closely resemble the materials used to make ancient paintings. To validate our algorithm, we created a set of mock-up paintings in our laboratory consisting of a broad palette of mixtures of pure pigments. Our results clearly demonstrate more accurate estimation of pigment composition than purely distance-based methods such as spectral angle mapping (SAM) and spectral correlation mapping (SCM). In addition, we studied hyperspectral imagery acquired of a Roman-Egyptian portrait, excavated from the site of Tebtunis in the Fayum region of Egypt, and dated to about the 2nd century CE. Using ground truth information obtained using Raman spectroscopy, we show qualitatively that our method accurately detects pigment composition for the specific pigments hematite and indigo.},\n  keywords = {archaeology;art;correlation methods;history;hyperspectral imaging;image processing;painting;pigments;Raman spectroscopy;reflectivity;spectral analysis;automatic pigment identification;Roman Egyptian paintings;sparse modeling;hyperspectral images;hyperspectral reflectance data;spectrum decomposition;pure pigment exemplars;historic sources;archaeological examples;ancient paintings;pigment composition estimation;spectral angle mapping;SAM;spectral correlation mapping;SCM;Roman-Egyptian portrait;Tebtunis;Fayum region;Raman spectroscopy;hematite pigment;indigo pigment;Pigments;Dictionaries;Painting;Libraries;Hyperspectral imaging;Image color analysis;Signal processing algorithms},\n  doi = {10.1109/EUSIPCO.2016.7760621},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252378.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we study the problem of automatic identification of pigments applied to paintings using hyperspectral reflectance data. Here, we cast the problem of pigment identification in a novel way by decomposing the spectrum into pure pigments. The pure pigment exemplars, chosen and prepared in our laboratory based on historic sources and archaeological examples, closely resemble the materials used to make ancient paintings. To validate our algorithm, we created a set of mock-up paintings in our laboratory consisting of a broad palette of mixtures of pure pigments. Our results clearly demonstrate more accurate estimation of pigment composition than purely distance-based methods such as spectral angle mapping (SAM) and spectral correlation mapping (SCM). In addition, we studied hyperspectral imagery of a Roman-Egyptian portrait, excavated from the site of Tebtunis in the Fayum region of Egypt and dated to about the 2nd century CE. Using ground-truth information obtained with Raman spectroscopy, we show qualitatively that our method accurately detects pigment composition for the specific pigments hematite and indigo.\n
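For reference, the two distance-based baselines mentioned above are simple to state. A minimal sketch, assuming spectra are 1-D numpy arrays; the [0, 1] rescaling inside SCM is one common convention, not necessarily the exact variant used by the authors.

```python
import numpy as np

def spectral_angle(x, r):
    """Spectral Angle Mapper (SAM): angle between a pixel spectrum x and a reference r."""
    cos = np.dot(x, r) / (np.linalg.norm(x) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def spectral_correlation(x, r):
    """Spectral Correlation Mapper (SCM): SAM computed on mean-centred spectra."""
    xc, rc = x - x.mean(), r - r.mean()
    cos = np.dot(xc, rc) / (np.linalg.norm(xc) * np.linalg.norm(rc))
    # rescale correlation from [-1, 1] to [0, 1] before the angle (a common convention)
    return np.arccos(np.clip((cos + 1.0) / 2.0, 0.0, 1.0))
```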
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Secure matching of Dutch car license plates.\n \n \n \n \n\n\n \n Sunil, A. B.; Erkin, Z.; and Veugen, T.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2116-2120, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760622,\n  author = {A. B. Sunil and Z. Erkin and T. Veugen},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Secure matching of dutch car license plates},\n  year = {2016},\n  pages = {2116-2120},\n  abstract = {License plate matching plays an important role in applications like law enforcement, traffic management and road pricing, where the plate is first recognized and then compared to a database of authorized vehicle registration plates. Unfortunately, there are several privacy related issues that should be taken care of before deploying plate recognition systems. As a scientific solution to privacy concerns, we propose a simple and accurate character recognition scheme combined with an integer matching scheme that is designed to work with encrypted license plates. Our analysis and experimental results show that the deployment of such a system can be deemed possible.},\n  keywords = {automobiles;cryptography;data privacy;image matching;optical character recognition;secure dutch car license plate matching;law enforcement;traffic management;road pricing;authorized vehicle registration plates;privacy related issues;plate recognition systems;character recognition;integer matching;encrypted license plates;Licenses;Databases;Encryption;Protocols;Vehicles;License plate matching;character recognition;secure signal processing;cryptography;homomorphic encryption},\n  doi = {10.1109/EUSIPCO.2016.7760622},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256236.pdf},\n}\n\n
\n
\n\n\n
\n License plate matching plays an important role in applications such as law enforcement, traffic management, and road pricing, where the plate is first recognized and then compared to a database of authorized vehicle registration plates. Unfortunately, several privacy-related issues must be addressed before deploying plate recognition systems. As a scientific solution to these privacy concerns, we propose a simple and accurate character recognition scheme combined with an integer matching scheme designed to work with encrypted license plates. Our analysis and experimental results show that deploying such a system is feasible.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n LMS estimation of signals defined over graphs.\n \n \n \n \n\n\n \n Di Lorenzo, P.; Barbarossa, S.; Banelli, P.; and Sardellitti, S.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2121-2125, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760623,\n  author = {P. {Di Lorenzo} and S. Barbarossa and P. Banelli and S. Sardellitti},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {LMS estimation of signals defined over graphs},\n  year = {2016},\n  pages = {2121-2125},\n  abstract = {The aim of this paper is to propose a least mean squares (LMS) strategy for adaptive estimation of signals defined over graphs. Assuming the graph signal to be band-limited, over a known bandwidth, the method enables reconstruction, with guaranteed performance in terms of mean-square error, and tracking from a limited number of observations sampled over a subset of vertices. A detailed mean square analysis provides the performance of the proposed method, and leads to several insights for designing useful sampling strategies for graph signals. Numerical results validate our theoretical findings, and illustrate the advantages achieved by the proposed strategy for online estimation of band-limited graph signals.},\n  keywords = {adaptive estimation;graph theory;least mean squares methods;signal reconstruction;signal sampling;signal LMS estimation;least mean square strategy;signals adaptive estimation;signal reconstruction;mean square error;vertice subset;graph signal sampling strategy;band-limited graph signal online estimation;Signal processing;Estimation;Signal processing algorithms;Laplace equations;Symmetric matrices;Europe;Stability analysis;Least mean squares estimation;graph signal processing;sampling on graphs},\n  doi = {10.1109/EUSIPCO.2016.7760623},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256205.pdf},\n}\n\n
\n
\n\n\n
\n The aim of this paper is to propose a least mean squares (LMS) strategy for adaptive estimation of signals defined over graphs. Assuming the graph signal is band-limited with a known bandwidth, the method enables reconstruction and tracking, with guaranteed mean-square-error performance, from a limited number of observations sampled over a subset of vertices. A detailed mean-square analysis quantifies the performance of the proposed method and leads to several insights for designing useful sampling strategies for graph signals. Numerical results validate our theoretical findings and illustrate the advantages achieved by the proposed strategy for online estimation of band-limited graph signals.\n
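A minimal sketch of an LMS update consistent with the strategy described, where B is the projector onto the band-limited subspace spanned by the lowest-frequency eigenvectors of the graph Laplacian and D masks the sampled vertices; the names and the step size are ours.

```python
import numpy as np

def graph_lms(L, y_stream, sample_mask, bandwidth, mu=0.5):
    """Online LMS estimation of a band-limited graph signal.

    L: graph Laplacian (N x N); y_stream: iterable of noisy observations (N,);
    sample_mask: boolean (N,) marking observed vertices; bandwidth: number of
    retained graph frequencies."""
    _, U = np.linalg.eigh(L)                       # eigenvectors, ascending frequency
    B = U[:, :bandwidth] @ U[:, :bandwidth].T      # projector onto band-limited subspace
    D = np.diag(sample_mask.astype(float))         # vertex sampling operator
    x_hat = np.zeros(L.shape[0])
    for y in y_stream:
        x_hat = x_hat + mu * B @ D @ (y - x_hat)   # LMS correction on sampled vertices
    return x_hat
```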
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Discrete Bessel functions for representing the class of finite duration decaying sequences.\n \n \n \n \n\n\n \n Biagetti, G.; Crippa, P.; Falaschetti, L.; and Turchetti, C.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2126-2130, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760624,\n  author = {G. Biagetti and P. Crippa and L. Falaschetti and C. Turchetti},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Discrete bessel functions for representing the class of finite duration decaying sequences},\n  year = {2016},\n  pages = {2126-2130},\n  abstract = {Bessel functions have shown to be particularly suitable for representing certain classes of signals, since using these basis functions may results in fewer components than using sinusoids. However, as there are no closed form expressions available for such functions, approximations and numerical methods have been adopted for their computation. In this paper the functions called discrete Bessel functions that are expressed as a finite expansion are defined. It is shown that in a finite interval a finite number of such functions that perfectly match Bessel functions of integer order exist. For finite duration sequences it is proven that the subspace spanned by a set of these functions is able to represent the class of finite duration decaying sequences.},\n  keywords = {Bessel functions;numerical analysis;signal classification;discrete Bessel functions;finite duration decaying sequences;Europe;Signal processing;Indexes;Matrix decomposition;Closed-form solutions;Discrete Fourier transforms},\n  doi = {10.1109/EUSIPCO.2016.7760624},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256206.pdf},\n}\n\n
\n
\n\n\n
\n Bessel functions have been shown to be particularly suitable for representing certain classes of signals, since using these basis functions may result in fewer components than using sinusoids. However, as no closed-form expressions are available for such functions, approximations and numerical methods have been adopted for their computation. In this paper, discrete Bessel functions, expressed as finite expansions, are defined. It is shown that, on a finite interval, there exists a finite number of such functions that perfectly match the Bessel functions of integer order. For finite-duration sequences it is proven that the subspace spanned by a set of these functions is able to represent the class of finite-duration decaying sequences.\n
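The discrete Bessel functions defined in the paper are not reproduced here; as a toy illustration of representing a decaying sequence in a Bessel-type dictionary, one can least-squares fit sampled integer-order Bessel functions of the first kind. The order set and the sampling scale below are illustrative choices.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_v

def bessel_fit(x, orders, scale=1.0):
    """Least-squares fit of a length-N sequence x onto sampled Bessel functions J_k."""
    n = np.arange(len(x))
    basis = np.column_stack([jv(k, scale * n) for k in orders])  # N x len(orders)
    coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return coef, basis @ coef  # expansion coefficients and reconstruction
```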
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Signal and system spaces with non-convergent sampling representation.\n \n \n \n \n\n\n \n Boche, H.; and Mönich, U. J.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2131-2135, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760625,\n  author = {H. Boche and U. J. Mönich},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Signal and system spaces with non-convergent sampling representation},\n  year = {2016},\n  pages = {2131-2135},\n  abstract = {The approximation of linear time-invariant (LTI) systems by sampling series is a central task of signal processing. For the Paley-Wiener space PW1π of bandlimited signals with absolutely integrable Fourier transform, it is known that there exist signals and stable LTI systems such that the canonical approximation process diverges. In this paper we analyze the structure of the sets of signals and systems creating divergence and show that both sets are jointly spaceable, i.e., contain subsets such that every linear combination of signals and systems from these subsets, which is not the zero element, leads to divergence. In signal processing applications the linear structure of the involved signal spaces is essential. Here, we show that the same linear structure also holds for the sets of signals and systems creating divergence.},\n  keywords = {Fourier transforms;signal representation;signal sampling;nonconvergent sampling representation;linear time-invariant systems;sampling series;signal processing;Paley-Wiener space;bandlimited signals;absolutely integrable Fourier transform;signal spaces;Linear systems;Signal processing;Convergence;Europe;Fourier transforms;Linearity;Fourier series},\n  doi = {10.1109/EUSIPCO.2016.7760625},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256261.pdf},\n}\n\n
\n
\n\n\n
\n The approximation of linear time-invariant (LTI) systems by sampling series is a central task of signal processing. For the Paley-Wiener space PW1π of bandlimited signals with absolutely integrable Fourier transform, it is known that there exist signals and stable LTI systems such that the canonical approximation process diverges. In this paper we analyze the structure of the sets of signals and systems creating divergence and show that both sets are jointly spaceable, i.e., contain subsets such that every linear combination of signals and systems from these subsets, which is not the zero element, leads to divergence. In signal processing applications the linear structure of the involved signal spaces is essential. Here, we show that the same linear structure also holds for the sets of signals and systems creating divergence.\n
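The canonical approximation process referred to above is the Shannon sampling series. A minimal sketch of its symmetric partial sums, assuming unit sample spacing; it is the convergence or divergence of these partial sums that the paper studies.

```python
import numpy as np

def sampling_series(f_samples, t, N):
    """Symmetric partial sum of the Shannon sampling series.

    f_samples(k) returns the sample f(k) at integer k; evaluates
    sum_{k=-N..N} f(k) sinc(t - k) at times t (np.sinc(x) = sin(pi x)/(pi x))."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for k in range(-N, N + 1):
        out += f_samples(k) * np.sinc(t - k)
    return out
```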
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Error characterization of duty cycle estimation for sampled non-band-limited pulse signals with finite observation period.\n \n \n \n \n\n\n \n Bernhard, H.; Etzlinger, B.; and Springer, A.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2136-2140, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760626,\n  author = {H. Bernhard and B. Etzlinger and A. Springer},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Error characterization of duty cycle estimation for sampled non-band-limited pulse signals with finite observation period},\n  year = {2016},\n  pages = {2136-2140},\n  abstract = {In many applications the pulse duration of a periodic pulse signal is the parameter of interest. Thereby, the non-band-limited pulse signal is sampled during a finite observation period yielding to aliasing and windowing effects, respectively. In this work, the pulse duration estimation based on the mean value of the samples is considered, and an exact expression of the mean squared estimation error (averaged over all possible time shifts) is derived. The resulting mean squared error expression depends on the observation period, the pulse period and the pulse duration. Analyzing the effect of these parameters shows that the mean squared error can be reduced (i) if the observation period is a multiple of the pulse period, (ii) if the pulse period is not a multiple of the sampling period, and (iii) if the total number of samples is a prime number. All results were validated with simulation results.},\n  keywords = {estimation theory;mean square error methods;signal processing;error characterization;duty cycle estimation;sampled nonband-limited pulse signals;finite observation period;periodic pulse signal;aliasing effect;windowing effect;pulse duration estimation;mean squared estimation error;mean squared error expression;Frequency-domain analysis;Estimation error;Signal processing;Europe;Ultrasonic variables measurement;Pulse measurements;sampling process;band width;signal reconstruction;sampling error;wireless sensor networks (WSN);synchronization;localization;ultrasonic},\n  doi = {10.1109/EUSIPCO.2016.7760626},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256349.pdf},\n}\n\n
\n
\n\n\n
\n In many applications the pulse duration of a periodic pulse signal is the parameter of interest. The non-band-limited pulse signal is sampled during a finite observation period, leading to aliasing and windowing effects, respectively. In this work, pulse duration estimation based on the mean value of the samples is considered, and an exact expression for the mean squared estimation error (averaged over all possible time shifts) is derived. The resulting mean-squared-error expression depends on the observation period, the pulse period, and the pulse duration. Analyzing the effect of these parameters shows that the mean squared error can be reduced (i) if the observation period is a multiple of the pulse period, (ii) if the pulse period is not a multiple of the sampling period, and (iii) if the total number of samples is a prime number. All results were validated by simulation.\n
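A small simulation sketch of the estimator analyzed: the duty cycle of a unit-amplitude rectangular pulse train estimated as the mean of its samples, with the squared error averaged over random time shifts. All parameter names and values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def duty_cycle_mse(duty, pulse_period, t_obs, fs, n_trials=2000):
    """Empirical MSE of the sample-mean duty-cycle estimator over random time shifts."""
    n = np.arange(int(t_obs * fs)) / fs              # sampling instants
    errs = np.empty(n_trials)
    for i in range(n_trials):
        shift = rng.uniform(0, pulse_period)         # random time shift
        x = ((n + shift) % pulse_period) < duty * pulse_period  # unit pulse train
        errs[i] = (x.mean() - duty) ** 2             # estimator: mean of the samples
    return errs.mean()

# e.g. compare t_obs equal to a multiple of the pulse period against a non-multiple:
# duty_cycle_mse(0.3, pulse_period=1.0, t_obs=10.0, fs=7.0)
```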
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Backtesting expected shortfall with a skewed exponential power distribution in electricity markets.\n \n \n \n \n\n\n \n Thibault, A.; and Bondon, P.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2141-2145, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760627,\n  author = {A. Thibault and P. Bondon},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Backtesting expected shortfall with a skewed exponential power distribution in electricity markets},\n  year = {2016},\n  pages = {2141-2145},\n  abstract = {Interest in risk measurement for spot price has increased since the worldwide deregulation and liberalization of electricity started in the early 90's. This paper is focused on quantifying risk for the Swedish spot price. Our analysis is based on a generalized autoregressive conditional heteroskedastic (GARCH) process with skewed exponential power innovations to model the stochastic component of the price. A Expected Shortfall (ES) backtesting procedure is presented and our model performance is compared to commonly used distributions in risk measurement. We show that the skewed exponential power distribution outperforms the competitors for the upside risk, which is of high interest as electricity spot prices are positively skewed.},\n  keywords = {autoregressive processes;power distribution economics;power markets;risk management;skewed exponential power distribution;electricity markets;risk measurement;Swedish spot price;generalized autoregressive conditional heteroskedastic process;GARCH process;stochastic component;expected shortfall backtesting procedure;ES backtesting procedure;electricity spot prices;Reactive power;Autoregressive processes;Computational modeling;Technological innovation;Market research;Correlation;Signal processing;Electricity markets;Tail dependence;Asymmetric distributions;Expected shortfall},\n  doi = {10.1109/EUSIPCO.2016.7760627},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256356.pdf},\n}\n\n
\n
\n\n\n
\n Interest in risk measurement for spot prices has increased since the worldwide deregulation and liberalization of electricity markets began in the early 1990s. This paper focuses on quantifying risk for the Swedish spot price. Our analysis is based on a generalized autoregressive conditional heteroskedastic (GARCH) process with skewed exponential power innovations to model the stochastic component of the price. An Expected Shortfall (ES) backtesting procedure is presented, and our model's performance is compared to distributions commonly used in risk measurement. We show that the skewed exponential power distribution outperforms the competitors for the upside risk, which is of high interest as electricity spot prices are positively skewed.\n
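The paper's GARCH model with skewed exponential power innovations is not reproduced here; purely to fix ideas about the quantities being backtested, a generic empirical VaR/ES computation and a naive exceedance-count backtest might look as follows.

```python
import numpy as np

def var_es(returns, alpha=0.05):
    """Empirical Value-at-Risk and Expected Shortfall at level alpha.

    Losses are negative returns; VaR is the loss threshold exceeded with
    probability alpha, ES the average loss beyond it."""
    var = -np.quantile(returns, alpha)
    tail = returns[returns <= -var]
    return var, -tail.mean()

def exceedance_rate(returns, var_forecasts):
    """Fraction of days on which the realised loss exceeded the VaR forecast;
    a well-calibrated model keeps this close to the nominal level alpha."""
    return float(np.mean(returns < -np.asarray(var_forecasts)))
```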
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Nonlinear blind source separation for chemical sensor arrays based on a polynomial representation.\n \n \n \n \n\n\n \n Ando, R. A.; Jutten, C.; Rivet, B.; Attux, R.; and Duarte, L. T.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2146-2150, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760628,\n  author = {R. A. Ando and C. Jutten and B. Rivet and R. Attux and L. T. Duarte},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Nonlinear blind source separation for chemical sensor arrays based on a polynomial representation},\n  year = {2016},\n  pages = {2146-2150},\n  abstract = {In this paper we propose an extension of a blind source separation algorithm that can be used to process the data obtained by an array of ion-selective electrodes to measure the ionic activity of different ions in an aqueous solution. While a previous algorithm used a polynomial approximation of the mixing model and mutual information as means of estimating the mixture coefficients, it only worked for a constrained configuration of two sources with the same ionic valence. Our proposed method is able to generalize it to any number of sources and any type of ions, and is therefore able to solve the problem for any configuration. Simulations show good results for the analyzed application.},\n  keywords = {array signal processing;blind source separation;chemical sensors;polynomial approximation;nonlinear blind source separation;chemical sensor arrays;polynomial representation;ion selective electrodes;ionic activity measurement;aqueous solution;mixture coefficient;Ions;Mathematical model;Blind source separation;Signal processing algorithms;Correlation;Sensor arrays},\n  doi = {10.1109/EUSIPCO.2016.7760628},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256487.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we propose an extension of a blind source separation algorithm that can be used to process the data obtained by an array of ion-selective electrodes in order to measure the ionic activity of different ions in an aqueous solution. While a previous algorithm used a polynomial approximation of the mixing model and mutual information as a means of estimating the mixture coefficients, it only worked for a constrained configuration of two sources with the same ionic valence. Our proposed method generalizes it to any number of sources and any type of ions, and is therefore able to solve the problem for any configuration. Simulations show good results for the analyzed application.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n New robust algorithms for sparse non-negative three-way tensor decompositions.\n \n \n \n \n\n\n \n Nguyen, V.; Abed-Meraim, K.; and Linh-Trung, N.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2151-2155, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760629,\n  author = {V. Nguyen and K. Abed-Meraim and N. Linh-Trung},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {New robust algorithms for sparse non-negative three-way tensor decompositions},\n  year = {2016},\n  pages = {2151-2155},\n  abstract = {Tensor decomposition is an important tool for many applications in diverse disciplines such as signal processing, chemo-metrics, numerical linear algebra and data mining. In this work, we focus on PARAFAC and Tucker decompositions of three-way tensors with non-negativity and/or sparseness constraints. By using an all-at-once optimization approach, we propose two decomposition algorithms which are robust to tensor order over-estimation errors, a desired practical property when the tensor rank is unknown. Different algorithm versions are proposed depending on the desired constraint (or property) of the tensor factors or the core tensor. Finally, the performance of the algorithms is assessed via insightful simulation experiments on both simulated and real-life data.},\n  keywords = {matrix decomposition;optimisation;signal processing;tensors;robust algorithm;sparse nonnegative three-way tensor tucker decomposition;PARAFAC;all-at-once optimization approach;Europe;Signal processing;Conferences;CANDECOMP/PARAFAC;Tucker decomposition;all-at-once approach;sparsity;non-negativity},\n  doi = {10.1109/EUSIPCO.2016.7760629},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570258676.pdf},\n}\n\n
\n
\n\n\n
\n Tensor decomposition is an important tool for many applications in diverse disciplines such as signal processing, chemometrics, numerical linear algebra, and data mining. In this work, we focus on PARAFAC and Tucker decompositions of three-way tensors with non-negativity and/or sparseness constraints. Using an all-at-once optimization approach, we propose two decomposition algorithms which are robust to tensor-order over-estimation errors, a desirable practical property when the tensor rank is unknown. Different algorithm versions are proposed depending on the desired constraint (or property) of the tensor factors or the core tensor. Finally, the performance of the algorithms is assessed via insightful simulation experiments on both simulated and real-life data.\n
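The authors' all-at-once algorithms are not reproduced here. As a conventional baseline they improve upon, here is a sketch of alternating least squares for a non-negative three-way CP decomposition, with non-negativity enforced by simple projection (a common heuristic, not the paper's method).

```python
import numpy as np

def unfold(T, mode):
    """Mode-m unfolding of a 3-way array (C-order flattening of the other modes)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of A (p x r) and B (q x r) -> (p*q x r)."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def nonneg_cp_als(T, rank, n_iter=200, seed=0):
    """Non-negative CP of a 3-way tensor by projected alternating least squares."""
    rng = np.random.default_rng(seed)
    F = [rng.random((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for m in range(3):
            A, B = [F[i] for i in range(3) if i != m]  # other factors, in mode order
            KR = khatri_rao(A, B)                      # matches the unfolding above
            G = (A.T @ A) * (B.T @ B)                  # Gram matrix of KR (Hadamard form)
            F[m] = np.clip(unfold(T, m) @ KR @ np.linalg.pinv(G), 0.0, None)
    return F
```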
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Combining the glottal mixture model (GLOMM) with UBM for speaker recognition.\n \n \n \n \n\n\n \n Baggenstoss, P. M.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2156-2160, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760630,\n  author = {P. M. Baggenstoss},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Combining the glottal mixture model (GLOMM) with UBM for speaker recognition},\n  year = {2016},\n  pages = {2156-2160},\n  abstract = {We present an iterative algorithm to extract the voice source waveform from recordings of speech for speaker identification. The method detects glottal closings, then constructs a speaker-dependent library of glottal pulse waveforms by clustering data windows centered on the linear prediction error time-series at the glottal closures. With the voice source modeled as scaled and shifted glottal pulses, the algorithm iteratively determines the vocal tract parameters in each frame. In experiments, we combine the extracted voice source information with a universal background model (UBM). Using the TIMIT data corpus and a 200-speaker population size, we demonstrate a factor of three speaker recognition error reduction.},\n  keywords = {iterative methods;mixture models;pattern clustering;speaker recognition;time series;glottal mixture model;GLOMM;UBM;iterative algorithm;speaker identification;glottal closings;speaker-dependent library;glottal pulse waveforms;data windows clustering;linear prediction error time-series;shifted glottal pulses;scaled glottal pulses;vocal tract parameters;voice source information;universal background model;TIMIT data corpus;speaker recognition error reduction;Europe;Signal processing;Conferences},\n  doi = {10.1109/EUSIPCO.2016.7760630},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251530.pdf},\n}\n\n
\n
\n\n\n
\n We present an iterative algorithm to extract the voice source waveform from recordings of speech for speaker identification. The method detects glottal closings, then constructs a speaker-dependent library of glottal pulse waveforms by clustering data windows centered on the linear prediction error time-series at the glottal closures. With the voice source modeled as scaled and shifted glottal pulses, the algorithm iteratively determines the vocal tract parameters in each frame. In experiments, we combine the extracted voice source information with a universal background model (UBM). Using the TIMIT data corpus and a 200-speaker population, we demonstrate a factor-of-three reduction in speaker recognition error.\n
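A minimal sketch of the linear-prediction-error time series on which the glottal-pulse clustering operates (autocorrelation-method LPC; frame handling, windowing, and the clustering itself are omitted, and the order is an illustrative choice).

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_residual(x, order=16):
    """Linear-prediction error of a speech frame (autocorrelation method).

    Glottal closures show up as strong pulses in the returned residual."""
    x = np.asarray(x, dtype=float)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]            # autocorrelation lags
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])  # Yule-Walker equations
    e = x.copy()
    for k in range(1, order + 1):
        e[k:] -= a[k - 1] * x[:-k]          # e[n] = x[n] - sum_k a_k x[n-k]
    return e
```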
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Detecting pedestrians in surveillance videos based on convolutional neural network and motion.\n \n \n \n \n\n\n \n Varga, D.; and Szirányi, T.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2161-2165, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760631,\n  author = {D. Varga and T. Szirányi},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Detecting pedestrians in surveillance videos based on convolutional neural network and motion},\n  year = {2016},\n  pages = {2161-2165},\n  abstract = {Pedestrian detection is a fundamental computer vision task with many practical applications in robotics, video surveillance, autonomous driving, and automotive safety. However, it is still a challenging problem due to the tremendous variations in illumination, clothing, color, scale, and pose. The aim of this paper to present our dynamic pedestrian detector. In this paper, we propose a pedestrian detection approach that uses convolutional neural network (CNN) to differentiate pedestrian and non-pedestrian motion patterns. Although the CNN has good generalization performance, the CNN classifier is time-consuming. Therefore, we propose a novel architecture to reduce the time of feature extraction and training. Occlusion handling is one of the most important problem in pedestrian detection. For occlusion handling, we propose a method, which consists of extensive part detectors. The main advantage of our algorithm is that it can be trained on weakly labeled data, i.e. it does not require part annotations in the pedestrian bounding boxes.},\n  keywords = {computer vision;feature extraction;neural nets;pedestrians;video surveillance;video surveillance;convolutional neural network;pedestrian detection;computer vision;feature extraction;occlusion handling;Detectors;Training;Feature extraction;Convolution;Neurons;Videos;Computer architecture},\n  doi = {10.1109/EUSIPCO.2016.7760631},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256220.pdf},\n}\n\n
\n
\n\n\n
\n Pedestrian detection is a fundamental computer vision task with many practical applications in robotics, video surveillance, autonomous driving, and automotive safety. However, it is still a challenging problem due to the tremendous variations in illumination, clothing, color, scale, and pose. The aim of this paper is to present our dynamic pedestrian detector. We propose a pedestrian detection approach that uses a convolutional neural network (CNN) to differentiate pedestrian and non-pedestrian motion patterns. Although the CNN has good generalization performance, the CNN classifier is time-consuming; we therefore propose a novel architecture to reduce the time of feature extraction and training. Occlusion handling is one of the most important problems in pedestrian detection. For occlusion handling, we propose a method based on extensive part detectors. The main advantage of our algorithm is that it can be trained on weakly labeled data, i.e., it does not require part annotations in the pedestrian bounding boxes.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust visual tracking using dynamic feature weighting based on multiple dictionary learning.\n \n \n \n \n\n\n \n Liu, R.; Lan, X.; Yuen, P. C.; and Feng, G. C.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2166-2170, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760632,\n  author = {R. Liu and X. Lan and P. C. Yuen and G. C. Feng},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Robust visual tracking using dynamic feature weighting based on multiple dictionary learning},\n  year = {2016},\n  pages = {2166-2170},\n  abstract = {Using multiple features in appearance modeling has shown to be effective for visual tracking. In this paper, we dynamically measured the importance of different features and proposed a robust tracker with the weighted features. By doing this, the dictionaries are improved in both reconstructive and discriminative way. We extracted multiple features of the target, and obtained multiple sparse representations, which plays an essential role in the classification issue. After learning independent dictionaries for each feature, we then implement weights to each feature dynamically, with which we select the best candidate by a weighted joint decision measure. Experiments have shown that our method outperforms several recently proposed trackers.},\n  keywords = {signal processing;robust visual tracking;dynamic feature weighting;multiple dictionary learning;appearance modeling;robust tracker;multiple sparse representations;independent dictionaries;weighted joint decision measure;Dictionaries;Target tracking;Feature extraction;Mathematical model;Robustness;Weight measurement;Signal processing algorithms;visual tracking;feature weighting;sparse coding;dictionary learning},\n  doi = {10.1109/EUSIPCO.2016.7760632},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252444.pdf},\n}\n\n
\n
\n\n\n
\n Using multiple features in appearance modeling has been shown to be effective for visual tracking. In this paper, we dynamically measure the importance of different features and propose a robust tracker based on the weighted features. By doing this, the dictionaries are improved in both a reconstructive and a discriminative way. We extract multiple features of the target and obtain multiple sparse representations, which play an essential role in classification. After learning independent dictionaries for each feature, we dynamically assign a weight to each feature, with which we select the best candidate by a weighted joint decision measure. Experiments show that our method outperforms several recently proposed trackers.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Image denoising via group sparse eigenvectors of Graph Laplacian.\n \n \n \n \n\n\n \n Tang, Y.; Chen, Y.; Xu, N.; Jiang, A.; and Zhou, L.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2171-2175, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760633,\n  author = {Y. Tang and Y. Chen and N. Xu and A. Jiang and L. Zhou},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Image denoising via group sparse eigenvectors of Graph Laplacian},\n  year = {2016},\n  pages = {2171-2175},\n  abstract = {In this paper, a group sparse model using Eigenvectors of the Graph Laplacian (EGL) is proposed for image denoising. Unlike the heuristic setting for each image and for each noise deviation in the traditional denoising method via the EGL, in our group-sparse-based method, the used eigenvectors are adaptively selected with the error control. Sequentially, a modified group orthogonal matching pursuit algorithm is developed to efficiently solve the optimal problem in this group sparse model. The experiments show that our method can achieve a better performance than some well-developed denoising methods, especially in the noise of large deviations and in the SSIM measure.},\n  keywords = {image denoising;iterative methods;time-frequency analysis;image denoising;group sparse Eigenvectors;eigenvectors of the graph Laplacian;heuristic setting;noise deviation;denoising method;error control;modified group orthogonal matching pursuit algorithm;group sparse model;denoising methods;Sparse matrices;Noise reduction;Noise measurement;Laplace equations;Image denoising;Matching pursuit algorithms;Signal processing algorithms},\n  doi = {10.1109/EUSIPCO.2016.7760633},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255675.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a group sparse model using Eigenvectors of the Graph Laplacian (EGL) is proposed for image denoising. Unlike the traditional EGL denoising method, which requires a heuristic setting for each image and each noise deviation, our group-sparse-based method selects the eigenvectors adaptively under error control. Subsequently, a modified group orthogonal matching pursuit algorithm is developed to efficiently solve the optimization problem in this group sparse model. The experiments show that our method achieves better performance than some well-developed denoising methods, especially for noise of large deviation and in terms of the SSIM measure.\n
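A minimal sketch of the basic, non-group EGL idea: project the noisy signal onto the lowest-frequency eigenvectors of a graph Laplacian built from a similarity weight matrix W. The adaptive, group-sparse eigenvector selection and the modified group OMP of the paper are not reproduced here.

```python
import numpy as np

def laplacian_from_weights(W):
    """Combinatorial graph Laplacian L = D - W from a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def egl_denoise(y, W, k):
    """Denoise a graph signal y by keeping its k lowest-frequency Laplacian components."""
    L = laplacian_from_weights(W)
    _, U = np.linalg.eigh(L)          # eigenvectors sorted by increasing eigenvalue
    Uk = U[:, :k]
    return Uk @ (Uk.T @ y)            # orthogonal projection onto the smooth subspace
```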
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Canonical polyadic decomposition of hyperspectral patch tensors.\n \n \n \n \n\n\n \n Veganzones, M. A.; Cohen, J. E.; Farias, R. C.; Usevich, K.; Drumetz, L.; Chanussot, J.; and Comon, P.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2176-2180, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760634,\n  author = {M. A. Veganzones and J. E. Cohen and R. C. Farias and K. Usevich and L. Drumetz and J. Chanussot and P. Comon},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Canonical polyadic decomposition of hyperspectral patch tensors},\n  year = {2016},\n  pages = {2176-2180},\n  abstract = {Spectral unmixing (SU) is one of the most important and studied topics in hyperspectral image analysis. By means of spectral unmixing it is possible to decompose a hyperspectral image in its spectral components, the so-called endmembers, and their respective fractional spatial distributions, so-called abundance maps. The Canonical Polyadic (CP) tensor decomposition has proved to be a powerful tool to decompose a tensor data onto a few rank-one terms in a multilinear fashion. Here, we establish the connection between the CP decomposition and the SU problem when the tensor data is built by stacking small patches of the hyperspectral image. It turns out that the CP decomposition of this hyperspectral patch-tensor is equivalent to solving a blind regularized Extended Linear Mixing Model (ELMM).},\n  keywords = {hyperspectral imaging;image representation;matrix algebra;ELMM;blind regularized extended linear mixing model;canonical polyadic tensor decomposition;fractional spatial distributions;hyperspectral image analysis;spectral unmixing;hyperspectral patch tensors;Tensile stress;Hyperspectral imaging;Matrix decomposition;Europe;Signal processing algorithms;Signal processing;Stacking;Spectral unmixing;extended linear mixing model;Canonical Polyadic;nonnegative tensor decomposition;patch tensor},\n  doi = {10.1109/EUSIPCO.2016.7760634},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255787.pdf},\n}\n\n
\n
\n\n\n
\n Spectral unmixing (SU) is one of the most important and studied topics in hyperspectral image analysis. By means of spectral unmixing it is possible to decompose a hyperspectral image into its spectral components, the so-called endmembers, and their respective fractional spatial distributions, the so-called abundance maps. The Canonical Polyadic (CP) tensor decomposition has proved to be a powerful tool to decompose tensor data into a few rank-one terms in a multilinear fashion. Here, we establish the connection between the CP decomposition and the SU problem when the tensor data is built by stacking small patches of the hyperspectral image. It turns out that the CP decomposition of this hyperspectral patch tensor is equivalent to solving a blind regularized Extended Linear Mixing Model (ELMM).\n
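One plausible way to build the patch tensor mentioned above: stack non-overlapping spatial patches of a hyperspectral cube along a third mode, giving a (pixels-per-patch) x (bands) x (patches) array. The patch geometry and mode ordering used by the authors may differ.

```python
import numpy as np

def patch_tensor(img, patch=8, step=8):
    """Stack spatial patches of a hyperspectral cube (H, W, B) into a 3-way tensor."""
    H, W, B = img.shape
    patches = []
    for i in range(0, H - patch + 1, step):
        for j in range(0, W - patch + 1, step):
            patches.append(img[i:i + patch, j:j + patch, :].reshape(patch * patch, B))
    return np.stack(patches, axis=2)   # (patch*patch, B, n_patches)
```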
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On scatter matrix estimation in the presence of unknown extra parameters: Mismatched scenario.\n \n \n \n \n\n\n \n Fortunati, S.; Gini, F.; and Greco, M. S.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2181-2185, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760635,\n  author = {S. Fortunati and F. Gini and M. S. Greco},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {On scatter matrix estimation in the presence of unknown extra parameters: Mismatched scenario},\n  year = {2016},\n  pages = {2181-2185},\n  abstract = {In this paper, a Constrained Mismatched Maximum Likelihood (CMML) estimator for the joint estimation of the scatter matrix and the power of Complex Elliptically Symmetric (CES) distributed vectors is derived under misspecified data models. Specifically, this estimator is obtained by assuming a Normal model while the data are sampled from a complex t-distribution. The convergence point of such CMML estimator is investigated and its Mean Square Error (MSE) compared with the Constrained Misspecified Cramér-Rao Bound (CMCRB).},\n  keywords = {maximum likelihood estimation;radar theory;S-matrix theory;statistical distributions;vectors;scatter matrix estimation;constrained mismatched maximum likelihood estimator;complex elliptically symmetric distributed vectors;CES distributed vectors;data models;normal model;complex t-distribution;CMML estimator;mean square error;MSE;constrained misspecified Cramer-Rao bound;CMCRB;Data models;Covariance matrices;Maximum likelihood estimation;Generators;Convergence;Shape;Misspecified model;Covariance estimation;Constrained Maximum likelihood;Cramér-Rao Bound},\n  doi = {10.1109/EUSIPCO.2016.7760635},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570253105.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a Constrained Mismatched Maximum Likelihood (CMML) estimator for the joint estimation of the scatter matrix and the power of Complex Elliptically Symmetric (CES) distributed vectors is derived under misspecified data models. Specifically, this estimator is obtained by assuming a Normal model while the data are sampled from a complex t-distribution. The convergence point of such CMML estimator is investigated and its Mean Square Error (MSE) compared with the Constrained Misspecified Cramér-Rao Bound (CMCRB).\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Block majorization-minimization algorithms for low-rank clutter subspace estimation.\n \n \n \n \n\n\n \n Breloy, A.; Sun, Y.; Babu, P.; and Palomar, D. P.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2186-2190, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760636,\n  author = {A. Breloy and Y. Sun and P. Babu and D. P. Palomar},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Block majorization-minimization algorithms for low-rank clutter subspace estimation},\n  year = {2016},\n  pages = {2186-2190},\n  abstract = {This paper addresses the problem of the clutter subspace projector estimation in the context of a disturbance composed of a low rank heterogeneous (Compound Gaussian) clutter and white Gaussian noise. We derive two algorithms based on the block majorization-minimization framework to reach the maximum likelihood estimator of the considered model. These algorithms are shown to be computationally faster than the state of the art, with guaranteed convergence. Finally, the performance of the related estimators is illustrated in terms of estimation accuracy and computation speed.},\n  keywords = {AWGN;maximum likelihood estimation;minimax techniques;radar clutter;radar signal processing;maximum likelihood estimator;low rank heterogeneous clutter;white Gaussian noise;block majorization-minimization algorithm;low-rank clutter subspace estimation;Clutter;Linear programming;Signal processing algorithms;Maximum likelihood estimation;Partitioning algorithms;Europe;Subspace estimation;Maximum Likelihood Estimator;Low Rank;Compound Gaussian;majorization-minimization},\n  doi = {10.1109/EUSIPCO.2016.7760636},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255505.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of the clutter subspace projector estimation in the context of a disturbance composed of a low rank heterogeneous (Compound Gaussian) clutter and white Gaussian noise. We derive two algorithms based on the block majorization-minimization framework to reach the maximum likelihood estimator of the considered model. These algorithms are shown to be computationally faster than the state of the art, with guaranteed convergence. Finally, the performance of the related estimators is illustrated in terms of estimation accuracy and computation speed.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive LASSO based on joint M-estimation of regression and scale.\n \n \n \n \n\n\n \n Ollila, E.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2191-2195, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760637,\n  author = {E. Ollila},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive LASSO based on joint M-estimation of regression and scale},\n  year = {2016},\n  pages = {2191-2195},\n  abstract = {The adaptive Lasso (Least Absolute Shrinkage and Selection Operator) obtains oracle variable selection property by using cleverly chosen adaptive weights for regression coefficients in the ℓ1-penalty. In this paper, in the spirit of M-estimation of regression, we propose a class of adaptive M-Lasso estimates of regression and scale as solutions to generalized zero subgradient equations. The defining estimating equations depend on a differentiable convex loss function and choosing the LS-loss function yields the standard adaptive Lasso estimate and the associated scale statistic. An efficient algorithm, a generalization of the cyclic coordinate descent algorithm, is developed for computing the proposed M-Lasso estimates. We also propose adaptive M-Lasso estimate of regression with preliminary scale estimate that uses a highly-robust bounded loss function. A unique feature of the paper is that we consider complex-valued measurements and regression parameter. Consistent variable selection property of the adaptive M-Lasso estimates are illustrated with a simulation study.},\n  keywords = {gradient methods;least squares approximations;regression analysis;adaptive LASSO;Least Absolute Shrinkage and Selection Operator;oracle variable selection property;regression coefficients;ℓ1-penalty;adaptive M-Lasso estimate;generalized zero subgradient equation;differentiable convex loss function;LS-loss function;scale statistic;cyclic coordinate descent algorithm;complex-valued measurement;regression parameter;Robustness;Standards;Signal processing algorithms;Mathematical model;Signal processing;Input variables;Europe;Adaptive Lasso;M-estimation;penalized regression;sparsity;variable selection},\n  doi = {10.1109/EUSIPCO.2016.7760637},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255568.pdf},\n}\n\n
\n
\n\n\n
\n The adaptive Lasso (Least Absolute Shrinkage and Selection Operator) attains the oracle variable selection property by using cleverly chosen adaptive weights for the regression coefficients in the ℓ1-penalty. In this paper, in the spirit of M-estimation of regression, we propose a class of adaptive M-Lasso estimates of regression and scale as solutions to generalized zero subgradient equations. The defining estimating equations depend on a differentiable convex loss function, and choosing the LS-loss function yields the standard adaptive Lasso estimate and the associated scale statistic. An efficient algorithm, a generalization of the cyclic coordinate descent algorithm, is developed for computing the proposed M-Lasso estimates. We also propose an adaptive M-Lasso estimate of regression with a preliminary scale estimate that uses a highly robust bounded loss function. A unique feature of the paper is that we consider complex-valued measurements and regression parameters. The consistent variable selection property of the adaptive M-Lasso estimates is illustrated with a simulation study.\n
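A minimal real-valued sketch of cyclic coordinate descent for the weighted (adaptive) Lasso objective; the paper's complex-valued setting and joint scale estimation are omitted. Adaptive weights are typically taken as w_j = 1/|b̃_j|^γ for some preliminary estimate b̃.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the absolute value."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_lasso_cd(X, y, lam, weights, n_iter=100):
    """Cyclic coordinate descent for min_b 0.5*||y - Xb||^2 + lam * sum_j w_j |b_j|."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)      # per-coordinate curvature
    r = y - X @ b                      # running residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]        # remove coordinate j from the fit
            rho = X[:, j] @ r
            b[j] = soft_threshold(rho, lam * weights[j]) / col_sq[j]
            r -= X[:, j] * b[j]        # put it back
    return b
```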
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An M-estimator for robust centroid estimation on the manifold of covariance matrices: Performance analysis and application to image classification.\n \n \n \n \n\n\n \n Ilea, I.; Hajri, H.; Said, S.; Bombrun, L.; Germain, C.; and Berthoumieu, Y.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2196-2200, Aug 2016. \n \n\n\n\n
\n
@InProceedings{7760638,\n  author = {I. Ilea and H. Hajri and S. Said and L. Bombrun and C. Germain and Y. Berthoumieu},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {An M-estimator for robust centroid estimation on the manifold of covariance matrices: Performance analysis and application to image classification},\n  year = {2016},\n  pages = {2196-2200},\n  abstract = {Many signal and image processing applications, including texture analysis, radar detection or EEG signal classification, require the computation of a centroid from a set of covariance matrices. The most popular approach consists in considering the center of mass. While efficient, this estimator is not robust to outliers arising from the inherent variability of the data or from faulty measurements. To overcome this, some authors have proposed to use the median as a more robust estimator. Here, we propose an estimator which takes advantage of both efficiency and robustness by combining the concepts of Riemannian center of mass and median. Based on the theory of M-estimators, this robust centroid estimator is issued from the so-called Huber's function. We present a gradient descent algorithm to estimate it. In addition, an experiment on both simulated and real data is carried out to evaluate the influence of outliers on the estimation and classification performances.},\n  keywords = {covariance matrices;estimation theory;gradient methods;image classification;image classification;M-estimator;robust centroid estimation;covariance matrices;performance analysis;texture analysis;radar detection;EEG signal classification;mass center;Riemannian center;robust centroid estimator;Huber function;gradient descent algorithm;Covariance matrices;Estimation;Robustness;Cost function;Europe;Signal processing;Signal processing algorithms},\n  doi = {10.1109/EUSIPCO.2016.7760638},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255712.pdf},\n}\n\n
Many signal and image processing applications, including texture analysis, radar detection and EEG signal classification, require the computation of a centroid from a set of covariance matrices. The most popular approach is to take the center of mass. While efficient, this estimator is not robust to outliers arising from the inherent variability of the data or from faulty measurements. To overcome this, some authors have proposed the median as a more robust estimator. Here, we propose an estimator that combines the efficiency of the Riemannian center of mass with the robustness of the median. Based on the theory of M-estimators, this robust centroid estimator is derived from the so-called Huber function. We present a gradient descent algorithm to compute it. In addition, experiments on both simulated and real data evaluate the influence of outliers on estimation and classification performance.
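A minimal sketch of this kind of estimator, assuming the affine-invariant metric on SPD matrices; the step size, Huber threshold c and stopping rule are illustrative choices, not the paper's:

# Huber-weighted Riemannian centroid of SPD matrices via gradient descent.
import numpy as np
from scipy.linalg import expm, logm, sqrtm

def huber_centroid(mats, c=1.0, step=1.0, n_iter=50, tol=1e-8):
    X = np.mean(mats, axis=0)                          # start at arithmetic mean
    for _ in range(n_iter):
        Xh = np.real(sqrtm(X))
        Xh_inv = np.linalg.inv(Xh)
        # log-maps of the samples into the tangent space at X
        S = [np.real(logm(Xh_inv @ C @ Xh_inv)) for C in mats]
        d = [np.linalg.norm(Si, 'fro') for Si in S]    # geodesic distances
        w = [1.0 if di <= c else c / di for di in d]   # Huber M-estimator weights
        T = sum(wi * Si for wi, Si in zip(w, S)) / len(mats)
        X = Xh @ expm(step * T) @ Xh                   # geodesic step towards centroid
        if np.linalg.norm(T, 'fro') < tol:
            break
    return X

rng = np.random.default_rng(0)
covs = [a @ a.T + 5 * np.eye(5) for a in rng.standard_normal((40, 5, 5))]
print(np.round(huber_centroid(covs), 2))

With c set very large all weights equal one and the iteration reduces to the usual Karcher-mean gradient descent.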
An approach to joint sequential detection and estimation with distributional uncertainties.
Reinhard, D.; Fauß, M.; and Zoubir, A. M.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2201-2205, Aug 2016.
@InProceedings{7760639,\n  author = {D. Reinhard and M. Fauß and A. M. Zoubir},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {An approach to joint sequential detection and estimation with distributional uncertainties},\n  year = {2016},\n  pages = {2201-2205},\n  abstract = {Joint detection and estimation is an important yet little-studied problem that arises in many signal processing applications. In this paper, a sequential and robust solution approach is presented. To design the test fulfilling constraints on the error probabilities and the quality of the estimate, the problem is converted into an unconstrained form and subsequently solved using Linear Programming. To handle model uncertainties, a band model for both hypotheses is used and a concept for determining the pair of least favorable distributions is adopted to devise a robust detection scheme. For the robust estimation, an upper bound of the estimation cost, based on maximizing a Kullback-Leibler divergence, is derived. The resulting test meets the specifications on the error probabilities and the quality of the estimate for every feasible pair of distributions. Numerical results are provided for the pair of least favorable distributions and for a pair of randomly selected distributions.},\n  keywords = {error statistics;linear programming;sequential estimation;signal detection;joint sequential detection;joint sequential estimation;distributional uncertainties;signal processing;error probabilities;linear programming;Kullback-Leibler divergence;robust solution approach;least favorable distributions;Estimation;Robustness;Error probability;Signal processing;Uncertainty;Upper bound;Europe;band model;distributional uncertainties;joint detection and estimation;linear programming;optimization;robustness;sequential analysis},\n  doi = {10.1109/EUSIPCO.2016.7760639},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256210.pdf},\n}\n\n
Joint detection and estimation is an important yet little-studied problem that arises in many signal processing applications. In this paper, a sequential and robust solution approach is presented. To design a test that fulfills constraints on the error probabilities and on the quality of the estimate, the problem is converted into an unconstrained form and subsequently solved using linear programming. To handle model uncertainties, a band model is used for both hypotheses, and a concept for determining the pair of least favorable distributions is adopted to devise a robust detection scheme. For the robust estimation, an upper bound on the estimation cost, based on maximizing a Kullback-Leibler divergence, is derived. The resulting test meets the specifications on the error probabilities and the quality of the estimate for every feasible pair of distributions. Numerical results are provided for the pair of least favorable distributions and for a pair of randomly selected distributions.
Fast and robust detection of a known pattern in an image.
Denis, L.; Ferrari, A.; Mary, D.; Mugnier, L.; and Thiébaut, E.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2206-2210, Aug 2016.
@InProceedings{7760640,\n  author = {L. Denis and A. Ferrari and D. Mary and L. Mugnier and E. Thiébaut},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Fast and robust detection of a known pattern in an image},\n  year = {2016},\n  pages = {2206-2210},\n  abstract = {Many image processing applications require to detect a known pattern buried under noise. While maximum correlation can be implemented efficiently using fast Fourier transforms, detection criteria that are robust to the presence of outliers are typically slower by several orders of magnitude. We derive the general expression of a robust detection criterion based on the theory of locally optimal detectors. The expression of the criterion is attractive because it offers a fast implementation based on correlations. Application of this criterion to Cauchy likelihood gives good detection performance in the presence of outliers, as shown in our numerical experiments. Special attention is given to proper normalization of the criterion in order to account for truncation at the image borders and noise with a non-stationary dispersion.},\n  keywords = {fast Fourier transforms;image processing;maximum likelihood detection;object detection;image processing applications;robust known pattern detection;maximum correlation;fast Fourier transform;Cauchy likelihood;nonstationary dispersion;image noise;image border;image truncation;Robustness;Detectors;Correlation;Mathematical model;Europe;Dispersion;robust detection;locally most powerful test (LMP);Cauchy distribution},\n  doi = {10.1109/EUSIPCO.2016.7760640},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256301.pdf},\n}\n\n
Many image processing applications require detecting a known pattern buried under noise. While maximum correlation can be implemented efficiently using fast Fourier transforms, detection criteria that are robust to the presence of outliers are typically slower by several orders of magnitude. We derive the general expression of a robust detection criterion based on the theory of locally optimal detectors. The criterion is attractive because it admits a fast implementation based on correlations. Applying it to the Cauchy likelihood gives good detection performance in the presence of outliers, as shown in our numerical experiments. Special attention is given to properly normalizing the criterion in order to account for truncation at the image borders and for noise with non-stationary dispersion.
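The fast structure is easy to sketch: apply the noise score function pixel-wise, then correlate with the template via FFTs. A minimal sketch for the Cauchy case, omitting the border and normalization refinements the paper discusses (gamma is an assumed scale parameter):

# Locally optimal detector for a known pattern in Cauchy noise.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
pattern = np.zeros((15, 15)); pattern[5:10, 5:10] = 1.0   # toy template
image = rng.standard_cauchy((256, 256))                    # heavy-tailed noise
image[100:115, 100:115] += 3 * pattern                     # buried pattern

gamma = 1.0                                  # assumed Cauchy scale parameter
score = 2 * image / (gamma**2 + image**2)    # score function g = -f'/f

# correlation = convolution with the flipped template, computed with FFTs
stat = fftconvolve(score, pattern[::-1, ::-1], mode='same')
print(np.unravel_index(np.argmax(stat), stat.shape))  # detected location

Replacing the Cauchy score with the identity recovers plain matched-filter correlation, which is what makes the robust criterion a drop-in replacement.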
Cognitive tracking in IEEE 802.22 Symbiotic Radars.
Stinco, P.; Greco, M. S.; Gini, F.; and Himed, B.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2211-2214, Aug 2016.
@InProceedings{7760641,\n  author = {P. Stinco and M. S. Greco and F. Gini and B. Himed},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Cognitive tracking in IEEE 802.22 Symbiotic Radars},\n  year = {2016},\n  pages = {2211-2214},\n  abstract = {This paper focuses on a Symbiotic Radar, that is a Passive Radar which is an integral part of a communication network. The Symbiotic Radar exploits the signals of opportunity emitted by the Base Station (BS) and the Customer Premise Equipments (CPE) of an IEEE 802.22 WRAN. The radar is linked to the BS and suggests the best CPEs that must be scheduled to transmit. This selection is performed by a cognitive passive tracking algorithm that exploits the feedback information contained in the target state prediction to improve the tracking performance. The proposed algorithm has been designed with the consideration that the communication capabilities of the whole network must be preserved.},\n  keywords = {cognitive radio;passive radar;radar tracking;wireless regional area networks;IEEE 802.22 symbiotic radar;passive radar;base station;BS;customer premise equipment;CPE;IEEE 802.22 WRAN;cognitive passive tracking algorithm;feedback information;target state prediction;wireless regional area network;Radar tracking;Target tracking;Passive radar;Transmitters;Symbiosis;Signal processing algorithms;Cognitive Tracking;Cognitive Radio;Passive Radar;Symbiotic Radar;IEEE 802.22;ComRadE system},\n  doi = {10.1109/EUSIPCO.2016.7760641},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570248319.pdf},\n}\n\n
This paper focuses on a Symbiotic Radar, that is, a passive radar that is an integral part of a communication network. The Symbiotic Radar exploits the signals of opportunity emitted by the Base Station (BS) and the Customer Premise Equipment (CPE) of an IEEE 802.22 WRAN. The radar is linked to the BS and suggests which CPEs should be scheduled to transmit. This selection is performed by a cognitive passive tracking algorithm that exploits the feedback information contained in the target state prediction to improve tracking performance. The proposed algorithm is designed under the constraint that the communication capabilities of the whole network must be preserved.
Cognitive MIMO radars: An information theoretic constrained code design method.
Naghsh, M. M.; Alian, E. H. M.; Hashemi, M. M.; and Nayebi, M. M.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2215-2219, Aug 2016.
@InProceedings{7760642,\n  author = {M. M. Naghsh and E. H. M. Alian and M. M. Hashemi and M. M. Nayebi},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Cognitive MIMO radars: An information theoretic constrained code design method},\n  year = {2016},\n  pages = {2215-2219},\n  abstract = {A novel information theoretic code design approach for cognitive multiple-input multiple-output (MIMO) radars is proposed in the current paper. In the suggested solution, we consider an agile multi-antenna radar with spectral cognition and design optimized transmission codes considering practical limitations such as peak-to-average power ratio (PAR) and spectral coexistence in a spectrally crowded environment. As exact performance expressions for the detector is not analytically tractable in this case, an information theoretic approach is taken into account to design the transmission codes. Simulation results illustrate the efficiency of the proposed method.},\n  keywords = {antenna arrays;cognitive radio;encoding;MIMO radar;radar antennas;cognitive MIMO radar;information theoretic code design approach;cognitive multiple-input multiple-output radar;agile multiantenna radar;spectral cognition;MIMO radar;Clutter;Peak to average power ratio;Measurement;MIMO},\n  doi = {10.1109/EUSIPCO.2016.7760642},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255871.pdf},\n}\n\n
A novel information theoretic code design approach for cognitive multiple-input multiple-output (MIMO) radars is proposed. We consider an agile multi-antenna radar with spectral cognition and design optimized transmission codes under practical limitations such as the peak-to-average power ratio (PAR) and spectral coexistence in a spectrally crowded environment. As exact performance expressions for the detector are not analytically tractable in this case, an information theoretic approach is adopted to design the transmission codes. Simulation results illustrate the efficiency of the proposed method.
Worst-case jamming signal design and avoidance for MIMO radars.
Aittomäki, T.; and Koivunen, V.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2220-2224, Aug 2016.
@InProceedings{7760643,\n  author = {T. Aittomäki and V. Koivunen},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Worst-case jamming signal design and avoidance for MIMO radars},\n  year = {2016},\n  pages = {2220-2224},\n  abstract = {We optimize the jamming signal for disrupting the operation of a MIMO radar system in order to understand the threat jamming poses to such systems. The jamming signal optimization is formulated as a minimax problem minimizing the maximum SINR that the receivers can achieve, resulting in a semidefinite program for a Toeplitz jamming covariance matrix or a second-order cone program for a circulant approximation. In the simplest case of optimizing the average SINR of a single receiver, a waterfilling-type solution is obtained. Numerical studies suggest that distributed radar systems with waveform agility and mismatched filtering capabilities are resilient against jamming.},\n  keywords = {covariance matrices;filtering theory;jamming;MIMO radar;minimax techniques;radar signal processing;Toeplitz matrices;worst-case jamming signal design and avoidance;MIMO radar system;jamming signal optimization;minimax problem;semidefinite program;Toeplitz jamming covariance matrix;second-order cone program;circulant approximation;waterfilling-type solution;distributed radar system;Jamming;Receivers;Covariance matrices;Interference;Signal to noise ratio;Radar;Signal design;MIMO radar;jamming;convex optimization;waterfilling;anti-jamming;interference mitigation;mismatched filtering},\n  doi = {10.1109/EUSIPCO.2016.7760643},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255925.pdf},\n}\n\n
\n We optimize the jamming signal for disrupting the operation of a MIMO radar system in order to understand the threat jamming poses to such systems. The jamming signal optimization is formulated as a minimax problem minimizing the maximum SINR that the receivers can achieve, resulting in a semidefinite program for a Toeplitz jamming covariance matrix or a second-order cone program for a circulant approximation. In the simplest case of optimizing the average SINR of a single receiver, a waterfilling-type solution is obtained. Numerical studies suggest that distributed radar systems with waveform agility and mismatched filtering capabilities are resilient against jamming.\n
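For intuition, here is a minimal sketch of a generic waterfilling allocation under a total power budget, the type of solution mentioned for the single-receiver case; the per-channel quantities are illustrative and the paper's exact jamming objective is not reproduced:

# Classic waterfilling power allocation, water level found by bisection.
import numpy as np

def waterfill(inv_gains, budget, iters=100):
    """Allocate p_k = max(0, mu - inv_gains[k]) with sum(p) = budget."""
    lo, hi = 0.0, np.max(inv_gains) + budget
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - inv_gains)
        lo, hi = (mu, hi) if p.sum() < budget else (lo, mu)
    return p

inv_gains = np.array([0.2, 0.5, 1.0, 2.5])   # noise-to-gain ratios per channel
p = waterfill(inv_gains, budget=3.0)
print(np.round(p, 3), p.sum())               # more power where channels are good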
Reliability problems and Pareto-optimality in cognitive radar (Invited paper).
Soltanalian, M.; Mysore, R. B. S.; and Ottersten, B.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2225-2229, Aug 2016.
@InProceedings{7760644,\n  author = {M. Soltanalian and R. B. S. Mysore and B. Ottersten},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Reliability problems and Pareto-optimality in cognitive radar (Invited paper)},\n  year = {2016},\n  pages = {2225-2229},\n  abstract = {Cognitive radar refers to an adaptive sensing system exhibiting high degree of waveform adaptivity and diversity enabled by intelligent processing and exploitation of information from the environment. The next generation of radar systems are characterized by their application to scenarios exhibiting non-stationary scenes as well as interference caused by use of shared spectrum. Cognitive radar systems, by their inherent adaptivity, seem to be the natural choice for such applications. However, adaptivity opens up reliability issues due to uncertainties induced in the information gathering and processing. This paper lists some of the reliability aspects foreseen for cognitive radar systems and motivates the need for waveform designs satisfying different metrics simultaneously towards enhancing the reliability. An iterative framework based on multi-objective optimization is proposed to provide Pareto-optimal waveform designs.},\n  keywords = {Pareto optimisation;radar signal processing;telecommunication network reliability;adaptive sensing system;waveform adaptivity;non-stationary scenes;shared spectrum;cognitive radar systems;reliability aspects;multi-objective optimization;Pareto-optimal waveform designs;Measurement;Interference;Reliability engineering;Signal to noise ratio;Optimization;Cognitive radar;reliability;waveform diversity;waveform optimization;multi-objective optimization;Pareto-optimal design},\n  doi = {10.1109/EUSIPCO.2016.7760644},\n  issn = {2076-1465},\n  month = {Aug},\n}\n\n
Cognitive radar refers to an adaptive sensing system exhibiting a high degree of waveform adaptivity and diversity, enabled by intelligent processing and exploitation of information from the environment. The next generation of radar systems is characterized by application to scenarios exhibiting non-stationary scenes as well as interference caused by the use of shared spectrum. Cognitive radar systems, by their inherent adaptivity, seem to be the natural choice for such applications. However, adaptivity opens up reliability issues due to uncertainties induced in the information gathering and processing. This paper lists some of the reliability aspects foreseen for cognitive radar systems and motivates the need for waveform designs that satisfy several metrics simultaneously in order to enhance reliability. An iterative framework based on multi-objective optimization is proposed to provide Pareto-optimal waveform designs.
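As a toy illustration of how scalarization traces Pareto-optimal designs, the sketch below sweeps a weighted sum of two stand-in quadratic objectives (placeholders for real waveform metrics such as SINR and spectral containment; this is not the paper's iterative framework):

# Weighted-sum scalarization tracing a Pareto front for two toy objectives.
import numpy as np

a = np.array([1.0, 0.0])          # minimizer of objective f1(x) = ||x - a||^2
b = np.array([0.0, 1.0])          # minimizer of objective f2(x) = ||x - b||^2

front = []
for lam in np.linspace(0.0, 1.0, 11):
    x = lam * a + (1 - lam) * b   # closed-form minimizer of lam*f1 + (1-lam)*f2
    f1 = np.sum((x - a) ** 2)
    f2 = np.sum((x - b) ** 2)
    front.append((round(f1, 3), round(f2, 3)))
print(front)                      # f1 trades off against f2 along the front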
A novel narrowband interference suppression method for OFDM radar.
Hakobyan, G.; and Yang, B.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2230-2234, Aug 2016.
@InProceedings{7760645,\n  author = {G. Hakobyan and B. Yang},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {A novel narrowband interference suppression method for OFDM radar},\n  year = {2016},\n  pages = {2230-2234},\n  abstract = {The interference between automotive radar sensors becomes a major issue with the increasing number of radars integrated in vehicles for comfort and safety functions. The state-of-the-art radars, typically operating with frequency modulated continuous wave (FMCW) modulation, can be regarded as narrowband interferers for a high bandwidth orthogonal frequency division multiplexing (OFDM) radar with comparably short duration of OFDM symbols. In this paper, we analyze the impact of such interferers on OFDM radar, give a signal model of OFDM radar in presence of interference and discuss the effect of signal processing steps on the latter. Additionally, we present an interference suppression algorithm suitable for any type of narrowband interference. We show in simulations that a considerable interference suppression can be achieved and verify the presented algorithm with measurements.},\n  keywords = {CW radar;FM radar;interference suppression;OFDM modulation;radar interference;radar signal processing;road vehicle radar;narrowband interference suppression method;OFDM radar;automotive radar sensors;safety functions;comfort functions;frequency modulated continuous wave modulation;FMCW modulation;orthogonal frequency division multiplexing radar;OFDM symbols;signal model;signal processing;OFDM;Radar;Interference suppression;Signal processing;Signal processing algorithms;Narrowband;OFDM radar;interference;linear prediction},\n  doi = {10.1109/EUSIPCO.2016.7760645},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256357.pdf},\n}\n\n
Interference between automotive radar sensors is becoming a major issue with the increasing number of radars integrated in vehicles for comfort and safety functions. State-of-the-art radars, typically operating with frequency modulated continuous wave (FMCW) modulation, can be regarded as narrowband interferers for a high-bandwidth orthogonal frequency division multiplexing (OFDM) radar with comparably short OFDM symbols. In this paper, we analyze the impact of such interferers on OFDM radar, give a signal model of OFDM radar in the presence of interference, and discuss how the signal processing steps affect it. Additionally, we present an interference suppression algorithm suitable for any type of narrowband interference. We show in simulations that considerable interference suppression can be achieved, and we verify the presented algorithm with measurements.
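As a generic stand-in (not the paper's linear-prediction-based algorithm), the sketch below suppresses a narrowband interferer by nulling spectral outlier bins; the on-bin tone and the threshold factor are illustrative:

# Narrowband interference suppression by nulling spectral outlier bins.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
desired = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
interf = 10 * np.exp(2j * np.pi * (128 / n) * t)   # strong on-bin tone
x = desired + interf

X = np.fft.fft(x)
mag = np.abs(X)
thr = 5.0 * np.median(mag)            # robust threshold for outlier bins
X[mag > thr] = 0.0                    # null the interfered bins
x_clean = np.fft.ifft(X)

err = np.mean(np.abs(x_clean - desired) ** 2)
print(f"residual power after suppression: {err:.4f}")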
Design of multiple unimodular waveforms with low auto- and cross-correlations for radar via majorization-minimization.
Li, Y.; Vorobyov, S. A.; and He, Z.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2235-2239, Aug 2016.
@InProceedings{7760646,\n  author = {Y. Li and S. A. Vorobyov and Z. He},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Design of multiple unimodular waveforms with low auto- and cross-correlations for radar via majorization-minimization},\n  year = {2016},\n  pages = {2235-2239},\n  abstract = {We develop a new efficient method for designing unimodular waveforms with good auto- and cross-correlation properties for multiple-input multiple-output (MIMO) radar. Our waveform design scheme is conducted based on minimization of the integrated sidelobe level of designed waveforms, which is formulated as a quartic non-convex optimization problem. We start from simplifying the quartic optimization problem and then transform it into a quadratic form. By means of the majorization-minimization technique that seeks to find the solution of a corresponding quadratic optimization problem, we resolve the design of waveforms for MIMO radar. Corresponding algorithms that enable good correlations of the designed waveforms and meanwhile show faster convergence as compared to their counterparts are proposed and then tested.},\n  keywords = {concave programming;correlation methods;MIMO radar;minimisation;quadratic programming;waveform analysis;multiple unimodular waveform design;auto-correlations;cross-correlations;multiple-input multiple-output radar;MIMO radar;integrated sidelobe level minimization;quartic nonconvex optimization;majorization-minimization technique;quadratic optimization problem;Optimization;Minimization;MIMO radar;Correlation;Linear programming;Signal processing},\n  doi = {10.1109/EUSIPCO.2016.7760646},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256563.pdf},\n}\n\n
We develop a new efficient method for designing unimodular waveforms with good auto- and cross-correlation properties for multiple-input multiple-output (MIMO) radar. Our design scheme is based on minimizing the integrated sidelobe level of the designed waveforms, which is formulated as a quartic non-convex optimization problem. We first simplify the quartic problem and transform it into a quadratic form. By means of the majorization-minimization technique, which seeks the solution of a corresponding quadratic optimization problem, we solve the waveform design problem for MIMO radar. The resulting algorithms yield waveforms with good correlation properties and converge faster than their counterparts, as our tests show.
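For a single sequence, integrated-sidelobe-level reduction can be sketched with the well-known FFT-based CAN-style alternation, a close relative of the majorization-minimization design used here (the multi-waveform, cross-correlation case is the paper's contribution and is not reproduced):

# CAN-style alternating projections reducing the ISL of one unimodular sequence.
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = np.exp(2j * np.pi * rng.random(N))        # random unimodular start

def isl(x):
    r = np.correlate(x, x, mode='full')       # aperiodic autocorrelation
    r[len(x) - 1] = 0                          # drop the zero-lag term
    return np.sum(np.abs(r) ** 2)

print(f"ISL before: {isl(x):.1f}")
for _ in range(500):
    f = np.fft.fft(x, 2 * N)                  # zero-padded spectrum
    g = np.fft.ifft(f / (np.abs(f) + 1e-12))  # project onto flat magnitude
    x = np.exp(1j * np.angle(g[:N]))          # project back onto unimodular set
print(f"ISL after:  {isl(x):.1f}")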
Face photo-sketch recognition using local and global texture descriptors.
Galea, C.; and Farrugia, R. A.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2240-2244, Aug 2016.
@InProceedings{7760647,\n  author = {C. Galea and R. A. Farrugia},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Face photo-sketch recognition using local and global texture descriptors},\n  year = {2016},\n  pages = {2240-2244},\n  abstract = {The automated matching of mug-shot photographs with sketches drawn using eyewitness descriptions of criminals is a problem that has received much attention in recent years. However, most algorithms have been evaluated either on small datasets or using sketches that closely resemble the corresponding photos. In this paper, a method which extracts Multi-scale Local Binary Pattern (MLBP) descriptors from overlapping patches of log-Gabor-filtered images is used to obtain cross-modality templates for each photo and sketch. The Spearman Rank-Order Correlation Coefficient (SROCC) is then used for template matching. Log-Gabor filtering and MLBP provide global and local texture information, respectively, whose combination is shown to be beneficial for face photo-sketch recognition. Experimental results with a large database show that the proposed approach outperforms state-of-the-art methods, with a Rank-1 retrieval rate of 81.4%. Fusion with the intra-modality approach Eigenpatches improves the Rank-1 rate to 85.5%.},\n  keywords = {face recognition;Gabor filters;image filtering;image matching;image texture;face photo-sketch recognition;global texture descriptors;local texture descriptors;Eigenpatches;template matching;SROCC;spearman rank-order correlation coefficient;cross-modality templates;log-Gabor-filtered images;MLBP descriptors;extracts multiscale local binary pattern;Face;Feature extraction;Face recognition;Data mining;Probes;Signal processing algorithms;Correlation;face recognition;inter-modality;log-Gabor filter;Spearman correlation;hand-drawn sketches},\n  doi = {10.1109/EUSIPCO.2016.7760647},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251235.pdf},\n}\n\n
\n The automated matching of mug-shot photographs with sketches drawn using eyewitness descriptions of criminals is a problem that has received much attention in recent years. However, most algorithms have been evaluated either on small datasets or using sketches that closely resemble the corresponding photos. In this paper, a method which extracts Multi-scale Local Binary Pattern (MLBP) descriptors from overlapping patches of log-Gabor-filtered images is used to obtain cross-modality templates for each photo and sketch. The Spearman Rank-Order Correlation Coefficient (SROCC) is then used for template matching. Log-Gabor filtering and MLBP provide global and local texture information, respectively, whose combination is shown to be beneficial for face photo-sketch recognition. Experimental results with a large database show that the proposed approach outperforms state-of-the-art methods, with a Rank-1 retrieval rate of 81.4%. Fusion with the intra-modality approach Eigenpatches improves the Rank-1 rate to 85.5%.\n
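A minimal sketch of the template pipeline: multi-scale LBP histograms over patches, compared with the Spearman rank-order correlation. The log-Gabor pre-filtering is omitted, and the patch size and radii are illustrative choices:

# Multi-scale LBP patch histograms matched with Spearman correlation.
import numpy as np
from scipy.stats import spearmanr
from skimage.feature import local_binary_pattern

def mlbp_template(img, patch=32, radii=(1, 2, 3)):
    feats = []
    for r in radii:                                   # multi-scale LBP maps
        lbp = local_binary_pattern(img, P=8 * r, R=r, method='uniform')
        n_bins = 8 * r + 2                            # 'uniform' label count
        for i in range(0, img.shape[0] - patch + 1, patch):
            for j in range(0, img.shape[1] - patch + 1, patch):
                h, _ = np.histogram(lbp[i:i + patch, j:j + patch],
                                    bins=n_bins, range=(0, n_bins))
                feats.append(h / h.sum())             # normalized patch histogram
    return np.concatenate(feats)

rng = np.random.default_rng(0)
photo = rng.random((128, 128))
sketch = photo + 0.1 * rng.random((128, 128))        # stand-in "sketch"
score, _ = spearmanr(mlbp_template(photo), mlbp_template(sketch))
print(f"Spearman matching score: {score:.3f}")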
Detection of double AVC/HEVC encoding.
Costanzo, A.; and Barni, M.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2245-2249, Aug 2016.
@InProceedings{7760648,\n  author = {A. Costanzo and M. Barni},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Detection of double AVC/HEVC encoding},\n  year = {2016},\n  pages = {2245-2249},\n  abstract = {New generation video codecs are designed to improve coding efficiency with respect to previous standards and to support the latest hardware and applications. High Efficiency Video Coding (HEVC) is the successor of Advanced Video Coding (AVC), which is by far the most adopted standard worldwide. To promote the new standard, producers are re-releasing recent movies in HEVC format. In such a scenario, a fraudulent provider that does not own the original uncompressed data could sell old, lower quality AVC content re-encoded as if it were natively HEVC. Furthermore, with several hundred hours of video content uploaded every minute, it is not unlikely that re-edited low quality clips are labelled as HEVC to increase popularity and revenues from advertising. We tackle with these and similar issues by proposing a forensic technique to detect whether a HEVC sequence was obtained from an uncompressed sequence or by re-encoding an existing AVC sequence.},\n  keywords = {image sequences;video codecs;video coding;video codecs;high efficiency video coding;advanced video coding;double AVC-HEVC encoding;video content;forensic technique;HEVC sequence;AVC sequence;Encoding;Copper;Standards;Video sequences;Quantization (signal);Multimedia Forensics;Video Forensics;Video re-encoding;HEVC;AVC},\n  doi = {10.1109/EUSIPCO.2016.7760648},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251620.pdf},\n}\n\n
New generation video codecs are designed to improve coding efficiency with respect to previous standards and to support the latest hardware and applications. High Efficiency Video Coding (HEVC) is the successor of Advanced Video Coding (AVC), which is by far the most widely adopted standard worldwide. To promote the new standard, producers are re-releasing recent movies in HEVC format. In such a scenario, a fraudulent provider that does not own the original uncompressed data could sell old, lower quality AVC content re-encoded as if it were natively HEVC. Furthermore, with several hundred hours of video content uploaded every minute, it is not unlikely that re-edited low quality clips are labelled as HEVC to increase popularity and advertising revenue. We tackle these and similar issues by proposing a forensic technique to detect whether an HEVC sequence was obtained from an uncompressed sequence or by re-encoding an existing AVC sequence.
A non-speech audio CAPTCHA based on acoustic event detection and classification.
Meutzner, H.; and Kolossa, D.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2250-2254, Aug 2016.
@InProceedings{7760649,\n  author = {H. Meutzner and D. Kolossa},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {A non-speech audio CAPTCHA based on acoustic event detection and classification},\n  year = {2016},\n  pages = {2250-2254},\n  abstract = {The completely automated public Turing test to tell computers and humans apart (CAPTCHA) represents an established method to prevent automated abuse of web services. Most websites provide an audio CAPTCHA - in addition to a conventional visual scheme - to facilitate access for a wider range of users. These audio CAPTCHAs are generally based on distorted speech, rendering the task difficult for untrained or non-native listeners, while still being vulnerable against attacks that make use of automatic speech recognition techniques. In this work, we propose a novel and universally usable type of audio CAPTCHA that is solely based on the classification of acoustic sound events. We show that the proposed CAPTCHA leads to satisfactorily high human success rates, while being robust against recently proposed attacks, more than currently available speech-based CAPTCHAs.},\n  keywords = {computer network security;speech recognition;Web services;Web sites;human success rates;acoustic sound events;automatic speech recognition;nonnative listeners;untrained listeners;distorted speech;visual scheme;Web sites;Web services;automated public Turing test;acoustic event classification;acoustic event detection;nonspeech audio CAPTCHA;CAPTCHAs;Acoustics;Acoustic distortion;Databases;Nonlinear distortion;Speech},\n  doi = {10.1109/EUSIPCO.2016.7760649},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251627.pdf},\n}\n\n
The completely automated public Turing test to tell computers and humans apart (CAPTCHA) is an established method to prevent automated abuse of web services. Most websites provide an audio CAPTCHA, in addition to a conventional visual scheme, to facilitate access for a wider range of users. These audio CAPTCHAs are generally based on distorted speech, rendering the task difficult for untrained or non-native listeners while still being vulnerable to attacks that use automatic speech recognition techniques. In this work, we propose a novel and universally usable type of audio CAPTCHA that is based solely on the classification of acoustic sound events. We show that the proposed CAPTCHA achieves satisfactorily high human success rates while being more robust against recently proposed attacks than currently available speech-based CAPTCHAs.
Video alignment for phylogenetic analysis.
Lameri, S.; Bestagini, P.; and Tubaro, S.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2255-2259, Aug 2016.
@InProceedings{7760650,\n  author = {S. Lameri and P. Bestagini and S. Tubaro},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Video alignment for phylogenetic analysis},\n  year = {2016},\n  pages = {2255-2259},\n  abstract = {The possibility of studying multiple objects at once for forensic analysis has paved the way to the development of multimedia phylogeny algorithms. Concerning video phylogeny, a fundamental step at the base of many applications is multiple video alignment. This is, given a pool of near-duplicate video sequences partially overlapping in the temporal domain, find the relative time delay between all of them. As phylogeny methods typically takes into account huge quantities of data, the used alignment algorithms must be computationally efficient. In this paper, we propose a solution for multiple video alignment based on the minimisation of a least-square cost function. The proposed solution can be computed in closed form with reduced computational complexity. Moreover, we propose two possible solutions for refining the estimated alignment based on the removal of outlier measurements.},\n  keywords = {computational complexity;delays;digital forensics;image sequences;least squares approximations;video signal processing;outlier measurement;computational complexity;least-square cost function;time delay;temporal domain;near-duplicate video sequence;video phylogeny;multimedia phylogeny algorithm;forensic analysis;phylogenetic analysis;video alignment;Phylogeny;Video sequences;Estimation;Signal processing algorithms;Delays;Forensics;Algorithm design and analysis},\n  doi = {10.1109/EUSIPCO.2016.7760650},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251973.pdf},\n}\n\n
The possibility of studying multiple objects at once for forensic analysis has paved the way for the development of multimedia phylogeny algorithms. In video phylogeny, a fundamental step underlying many applications is multiple video alignment: given a pool of near-duplicate video sequences partially overlapping in the temporal domain, find the relative time delays between all of them. As phylogeny methods typically take into account huge quantities of data, the alignment algorithms used must be computationally efficient. In this paper, we propose a solution for multiple video alignment based on minimizing a least-squares cost function. The proposed solution can be computed in closed form with reduced computational complexity. Moreover, we propose two possible refinements of the estimated alignment based on the removal of outlier measurements.
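The least-squares core is easy to sketch: with one video fixed as the time reference, pairwise delay measurements define a linear system whose solution gives all offsets in closed form (the pairwise delays and noise below are synthetic, and the paper's outlier-removal refinements are not reproduced):

# Recover per-video offsets from noisy pairwise delays by least squares.
import numpy as np

true_t = np.array([0.0, 12.0, -5.0, 30.0])          # ground-truth offsets
pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]    # measured video pairs
rng = np.random.default_rng(0)
d = np.array([true_t[j] - true_t[i] + 0.2 * rng.standard_normal()
              for i, j in pairs])                   # noisy delays d_ij = t_j - t_i

# Incidence system A t = d over the free offsets t_1..t_3 (t_0 = 0 is the reference)
n = len(true_t)
A = np.zeros((len(pairs), n - 1))
for row, (i, j) in enumerate(pairs):
    if i > 0: A[row, i - 1] -= 1.0
    if j > 0: A[row, j - 1] += 1.0
t_hat, *_ = np.linalg.lstsq(A, d, rcond=None)
print(np.round(t_hat, 2))                           # close to [12, -5, 30]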
Botnet identification in randomized DDoS attacks.
Matta, V.; Di Mauro, M.; and Longo, M.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2260-2264, Aug 2016.
@InProceedings{7760651,\n  author = {V. Matta and M. {Di Mauro} and M. Longo},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Botnet identification in randomized DDoS attacks},\n  year = {2016},\n  pages = {2260-2264},\n  abstract = {Recent variants of Distributed Denial-of-Service (DDoS) attacks leverage the flexibility of application-layer protocols to disguise malicious activities as normal traffic patterns, while concurrently overwhelming the target destination with a large request rate. New countermeasures are necessary, aimed at guaranteeing an early and reliable identification of the compromised network nodes (the botnet). In this work we introduce a formal model for the aforementioned class of attacks, and we devise an inference algorithm that estimates the botnet hidden in the network, converging to the true solution as time progresses. Notably, the analysis is validated over real network traces.},\n  keywords = {computer network security;inference mechanisms;invasive software;protocols;botnet identification;randomized DDoS attack;distributed Denial-of-Service attack;application-layer protocol;traffic pattern;formal model;inference algorithm;Dictionaries;Computer crime;Emulation;Technological innovation;Signal processing;Signal processing algorithms;Aggregates;Distributed Denial-of-Service;DDoS;Cyber-Security;Signal Processing for Network Security},\n  doi = {10.1109/EUSIPCO.2016.7760651},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252164.pdf},\n}\n\n
\n Recent variants of Distributed Denial-of-Service (DDoS) attacks leverage the flexibility of application-layer protocols to disguise malicious activities as normal traffic patterns, while concurrently overwhelming the target destination with a large request rate. New countermeasures are necessary, aimed at guaranteeing an early and reliable identification of the compromised network nodes (the botnet). In this work we introduce a formal model for the aforementioned class of attacks, and we devise an inference algorithm that estimates the botnet hidden in the network, converging to the true solution as time progresses. Notably, the analysis is validated over real network traces.\n
Unpredictability assessment of biometric hashing under naive and advanced threat conditions.
Topcu, B.; Karabat, C.; and Erdogan, H.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2265-2269, Aug 2016.
@InProceedings{7760652,\n  author = {B. Topcu and C. Karabat and H. Erdogan},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Unpredictability assessment of biometric hashing under naive and advanced threat conditions},\n  year = {2016},\n  pages = {2265-2269},\n  abstract = {Recent years have witnessed the use of biometric recognition systems in increasing number of applications with the number of users growing at a steady pace. However, security and privacy problems have arisen from this upsurge of interest to biometric systems. Template protection methods solve such security and privacy problems where unpredictability is a crucial goal. Here, we study the unpredictability of biohashing (a transformation-based template protection method) using entropy as a measure. Our novel work outlines a systematic approach for theoretical evaluation of biohashes using estimated entropy which is based on degree of freedom of Binomial distribution. Our experiments demonstrate that biohash unpredictability varies in different threat models where the entropy of a biohash is almost equal to its bit length under the naive scenario and is significantly low in the advanced scenario, implying that the amount of information kept hidden in a biohash is more likely to be predicted.},\n  keywords = {binomial distribution;biometrics (access control);cryptography;data protection;entropy;estimation theory;unpredictability assessment;biometric hashing;threat condition;template protection;security problem;privacy problem;entropy estimation;binomial distribution;Entropy;Face;Iris recognition;Authentication;Databases;Principal component analysis},\n  doi = {10.1109/EUSIPCO.2016.7760652},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252133.pdf},\n}\n\n
Recent years have witnessed the use of biometric recognition systems in an increasing number of applications, with the number of users growing at a steady pace. However, this upsurge of interest in biometric systems raises security and privacy problems. Template protection methods address such problems, and unpredictability is a crucial goal there. Here, we study the unpredictability of biohashing (a transformation-based template protection method) using entropy as a measure. We outline a systematic approach for the theoretical evaluation of biohashes using an entropy estimate based on the degrees of freedom of a binomial distribution. Our experiments demonstrate that biohash unpredictability varies across threat models: the entropy of a biohash is almost equal to its bit length under the naive scenario but significantly lower under the advanced scenario, implying that the information hidden in a biohash becomes much easier to predict.
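A minimal sketch of a degrees-of-freedom entropy estimate of this flavor: fit a binomial to the normalized Hamming distances between biohashes of different users and read off N = p(1-p)/sigma^2 as the effective number of independent bits (the random biohashes below are illustrative):

# Degrees-of-freedom entropy estimate from pairwise Hamming distances.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
bits = 256
hashes = rng.integers(0, 2, size=(50, bits))        # stand-in biohashes

dists = np.array([np.mean(a != b)                   # normalized Hamming distance
                  for a, b in combinations(hashes, 2)])
p, var = dists.mean(), dists.var()
dof = p * (1 - p) / var                             # binomial degrees of freedom
print(f"p = {p:.3f}, effective entropy ~ {dof:.0f} of {bits} bits")

For truly independent bits the estimate approaches the full bit length, mirroring the naive scenario described in the abstract.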
Maximum margin binary classifiers using intrinsic and penalty graphs.
Kicanaoglu, B.; Iosifidis, A.; and Gabbouj, M.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2270-2274, Aug 2016.
@InProceedings{7760653,\n  author = {B. Kicanaoglu and A. Iosifidis and M. Gabbouj},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Maximum margin binary classifiers using intrinsic and penalty graphs},\n  year = {2016},\n  pages = {2270-2274},\n  abstract = {In this paper a variant of the binary Support Vector Machine classifier that exploits intrinsic and penalty graphs in its optimization problem is proposed. We show that the proposed approach is equivalent to a two-step process where the data is firstly mapped to an optimal discriminant space of the input space and, subsequently, the original SVM classifier is applied. Our approach exploits the underlying data distribution in a discriminant space in order to enhance SVMs generalization ability. We also extend this idea to the Least Squares SVM classifier, where the adoption of the intrinsic and penalty graphs acts as a regularizer incorporating discriminant information in the overall solution. Experiments on standard and recently introduced datasets verify our analysis since, in the cases where the classes forming the problem are not well discriminated in the original feature space, the exploitation of both intrinsic and penalty graphs enhances performance.},\n  keywords = {feature extraction;graph theory;least squares approximations;optimisation;support vector machines;maximum margin binary classifiers;feature space;least square SVM classifier;SVM generalization ability;SVM classifier;optimal discriminant space;two-step process;optimization problem;binary support vector machine classifier;penalty graph;intrinsic graph;Support vector machines;Optimization;Matrix decomposition;Standards;Eigenvalues and eigenfunctions;Signal processing;Laplace equations},\n  doi = {10.1109/EUSIPCO.2016.7760653},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570244295.pdf},\n}\n\n
In this paper, a variant of the binary Support Vector Machine classifier that exploits intrinsic and penalty graphs in its optimization problem is proposed. We show that the proposed approach is equivalent to a two-step process in which the data is first mapped to an optimal discriminant space of the input space and, subsequently, the original SVM classifier is applied. Our approach exploits the underlying data distribution in a discriminant space in order to enhance the SVM's generalization ability. We also extend this idea to the Least Squares SVM classifier, where the adoption of the intrinsic and penalty graphs acts as a regularizer incorporating discriminant information into the overall solution. Experiments on standard and recently introduced datasets verify our analysis: in cases where the classes are not well discriminated in the original feature space, exploiting both intrinsic and penalty graphs enhances performance.
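A minimal sketch of the equivalent two-step view, assuming fully connected within-class (intrinsic) and between-class (penalty) graphs; the graph weights, regularizer and subspace dimension are illustrative choices:

# Two-step sketch: graph-based discriminant projection, then a standard SVM.
import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

same = (y[:, None] == y[None, :]).astype(float)      # label-based adjacency
Wi, Wp = same - np.eye(len(y)), 1.0 - same           # intrinsic / penalty graphs
Li = np.diag(Wi.sum(1)) - Wi                         # graph Laplacians
Lp = np.diag(Wp.sum(1)) - Wp

Si = X.T @ Li @ X + 1e-6 * np.eye(X.shape[1])        # intrinsic scatter (regularized)
Sp = X.T @ Lp @ X                                    # penalty scatter
vals, vecs = eigh(Sp, Si)                            # generalized eigenproblem
V = vecs[:, ::-1][:, :3]                             # top discriminant directions

clf = SVC(kernel='linear').fit(X @ V, y)             # SVM in the discriminant space
print(f"train accuracy: {clf.score(X @ V, y):.3f}")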
Online churn detection on high dimensional cellular data using adaptive hierarchical trees.
Khan, F.; Delibalta, I.; and Kozat, S. S.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2275-2279, Aug 2016.
@InProceedings{7760654,\n  author = {F. Khan and I. Delibalta and S. S. Kozat},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Online churn detection on high dimensional cellular data using adaptive hierarchical trees},\n  year = {2016},\n  pages = {2275-2279},\n  abstract = {We study online sequential logistic regression for churn detection in cellular networks when the feature vectors lie in a high dimensional space on a time varying manifold. We escape the curse of dimensionality by tracking the subspace of the underlying manifold using a hierarchical tree structure. We use the projections of the original high dimensional feature space onto the underlying manifold as the modified feature vectors. By using the proposed algorithm, we provide significant classification performance with significantly reduced computational complexity as well as memory requirement. We reduce the computational complexity to the order of the depth of the tree and the memory requirement to only linear in the intrinsic dimension of the manifold. We provide several results with real life cellular network data for churn detection.},\n  keywords = {cellular radio;computational complexity;pattern classification;regression analysis;trees (mathematics);memory requirement reduction;computational complexity reduction;classification performance;adaptive hierarchical tree structure;feature vector;cellular network;online sequential logistic regression;high dimensional cellular data online churn detection;Signal processing algorithms;Manifolds;Logistics;Partitioning algorithms;Regression tree analysis;Context;Cellular networks;Churn;big data;online learning;classification on high dimensional manifolds;tree based method},\n  doi = {10.1109/EUSIPCO.2016.7760654},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570245682.pdf},\n}\n\n
\n We study online sequential logistic regression for churn detection in cellular networks when the feature vectors lie in a high dimensional space on a time varying manifold. We escape the curse of dimensionality by tracking the subspace of the underlying manifold using a hierarchical tree structure. We use the projections of the original high dimensional feature space onto the underlying manifold as the modified feature vectors. By using the proposed algorithm, we provide significant classification performance with significantly reduced computational complexity as well as memory requirement. We reduce the computational complexity to the order of the depth of the tree and the memory requirement to only linear in the intrinsic dimension of the manifold. We provide several results with real life cellular network data for churn detection.\n
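As a simpler stand-in for the hierarchical-tree subspace tracking (not the paper's method), the sketch below tracks the manifold with incremental PCA and runs streaming logistic regression on the projections; all dimensions and the synthetic churn labels are illustrative:

# Streaming subspace tracking plus online logistic regression.
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
d, k, batch = 500, 10, 100
W = rng.standard_normal((k, d))                      # hidden low-dim manifold

ipca = IncrementalPCA(n_components=k)
clf = SGDClassifier(loss='log_loss')
for step in range(50):                               # streaming mini-batches
    Z = rng.standard_normal((batch, k))
    Xb = Z @ W + 0.01 * rng.standard_normal((batch, d))
    yb = (Z[:, 0] > 0).astype(int)                   # churn label lives on the manifold
    ipca.partial_fit(Xb)                             # track the subspace
    Pb = ipca.transform(Xb)                          # project to low dimension
    clf.partial_fit(Pb, yb, classes=[0, 1])
print(f"last-batch accuracy: {clf.score(Pb, yb):.2f}")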
A unified Bayesian model of time-frequency clustering and low-rank approximation for multi-channel source separation.
Itakura, K.; Bando, Y.; Nakamura, E.; Itoyama, K.; and Yoshii, K.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2280-2284, Aug 2016.
@InProceedings{7760655,\n  author = {K. Itakura and Y. Bando and E. Nakamura and K. Itoyama and K. Yoshii},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {A unified Bayesian model of time-frequency clustering and low-rank approximation for multi-channel source separation},\n  year = {2016},\n  pages = {2280-2284},\n  abstract = {This paper presents a statistical method of multichannel source separation, called NMF-LDA, that unifies nonnegative matrix factorization (NMF) and latent Dirichlet allocation (LDA) in a hierarchical Bayesian manner. If the frequency components of sources are sparsely distributed, the source spectrograms can be considered to be disjoint with each other in most time-frequency bins. Under this assumption, LDA has been used for clustering time-frequency bins into individual sources using spatial information. A way to improve LDA-based source separation is to consider the empirical fact that source spectrograms tend to have low-rank structure. To leverage both the sparseness and low-rankness of source spectrograms, our method iterates an LDA-step (hard clustering of time-frequency bins) that gives deficient source spectrograms and an NMF-step (low-rank matrix approximation) that completes the deficient bins of those spectrograms. Experimental results showed the proposed method outperformed conventional methods.},\n  keywords = {approximation theory;Bayes methods;matrix decomposition;source separation;low-rank matrix approximation;hard clustering;low-rank structure;spatial information;time-frequency bins;source spectrograms;hierarchical Bayesian manner;latent Dirichlet allocation;nonnegative matrix factorization;statistical method;low-rank approximation;time-frequency clustering;unified Bayesian model;multichannel source separation;Spectrogram;Time-frequency analysis;Source separation;Correlation;Bayes methods;Matrix decomposition},\n  doi = {10.1109/EUSIPCO.2016.7760655},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250749.pdf},\n}\n\n
This paper presents a statistical method of multichannel source separation, called NMF-LDA, that unifies nonnegative matrix factorization (NMF) and latent Dirichlet allocation (LDA) in a hierarchical Bayesian manner. If the frequency components of the sources are sparsely distributed, the source spectrograms can be considered mutually disjoint in most time-frequency bins. Under this assumption, LDA has been used for clustering time-frequency bins into individual sources using spatial information. A way to improve LDA-based source separation is to exploit the empirical fact that source spectrograms tend to have low-rank structure. To leverage both the sparseness and low-rankness of source spectrograms, our method iterates an LDA-step (hard clustering of time-frequency bins) that gives deficient source spectrograms and an NMF-step (low-rank matrix approximation) that completes the deficient bins of those spectrograms. Experimental results show that the proposed method outperforms conventional methods.
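The NMF-step alone is easy to sketch: treat the bins assigned to other sources as missing and complete them with a low-rank model via masked multiplicative updates (the mask, rank and iteration count are illustrative; the LDA clustering step is not reproduced):

# Masked NMF completing "deficient" time-frequency bins of a spectrogram.
import numpy as np

rng = np.random.default_rng(0)
F, T, K = 64, 100, 4
V = rng.random((F, K)) @ rng.random((K, T))          # low-rank "spectrogram"
M = (rng.random((F, T)) < 0.7).astype(float)         # 1 = bin kept by clustering

W, H = rng.random((F, K)) + 0.1, rng.random((K, T)) + 0.1
eps = 1e-9
for _ in range(200):                                 # masked multiplicative updates
    WH = W @ H
    W *= ((M * V) @ H.T) / ((M * WH) @ H.T + eps)
    WH = W @ H
    H *= (W.T @ (M * V)) / (W.T @ (M * WH) + eps)

V_filled = np.where(M > 0, V, W @ H)                 # complete the deficient bins
err = np.linalg.norm((1 - M) * (W @ H - V)) / np.linalg.norm((1 - M) * V)
print(f"relative error on completed bins: {err:.3f}")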
Joint tensor compression for coupled canonical polyadic decompositions.
Cohen, J. E.; Farias, R. C.; and Comon, P.
In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2285-2289, Aug 2016.
@InProceedings{7760656,\n  author = {J. E. Cohen and R. C. Farias and P. Comon},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Joint tensor compression for coupled canonical polyadic decompositions},\n  year = {2016},\n  pages = {2285-2289},\n  abstract = {To deal with large multimodal datasets, coupled canonical polyadic decompositions are used as an approximation model. In this paper, a joint compression scheme is introduced to reduce the dimensions of the dataset. Joint compression allows to solve the approximation problem in a compressed domain using standard coupled decomposition algorithms. Computational complexity required to obtain the coupled decomposition is therefore reduced. Also, we propose to approximate the update of the coupled factor by a simple weighted average of the independent updates of the coupled factors. The proposed approach and its simplified version are tested with synthetic data and we show that both do not incur substantial loss in approximation performance.},\n  keywords = {approximation theory;computational complexity;data compression;tensors;joint tensor compression;multimodal datasets;coupled canonical polyadic decompositions;approximation model;dimension reduction;standard coupled decomposition algorithms;computational complexity;Tensile stress;Couplings;Matrices;Data models;Standards;Noise measurement;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760656},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251605.pdf},\n}\n\n
@InProceedings{7760657,
  author = {O. F. Kilic and N. {Denizcan Vanli} and H. Ozkan and I. Delibalta and S. S. Kozat},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Adaptive hierarchical space partitioning for online classification},
  year = {2016},
  pages = {2290-2294},
  abstract = {We propose an online algorithm for supervised learning with strong performance guarantees under the empirical zero-one loss. The proposed method adaptively partitions the feature space in a hierarchical manner and generates a powerful finite combination of basic models. This provides the algorithm with a strong classification method, enabling it to create a piecewise linear classifier model that works well on highly nonlinear, complex data. The introduced algorithm also has computational complexity that scales linearly with the dimension of the feature space, the depth of the partitioning and the number of processed data points. Through experiments we show that the introduced algorithm outperforms state-of-the-art ensemble techniques on various well-known machine learning data sets.},
  keywords = {computational complexity;feature extraction;image classification;learning (artificial intelligence);piecewise linear techniques;feature space dimension;computational complexity;nonlinear complex data;linear piecewise classifier model;feature space partition;empirical zero-one loss;supervised learning;online classification;adaptive hierarchical space partitioning;Particle separators;Partitioning algorithms;Signal processing algorithms;Computational modeling;Europe;Signal processing;Adaptation models},
  doi = {10.1109/EUSIPCO.2016.7760657},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251889.pdf},
}
@InProceedings{7760658,
  author = {N. Zalmai and C. Luneau and C. Stritt and H. Loeliger},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Tomographic reconstruction using a new voxel-domain prior and Gaussian message passing},
  year = {2016},
  pages = {2295-2299},
  abstract = {The paper proposes a new prior model for gray-scale images in 2D and 3D, and a pertinent algorithm for tomographic image reconstruction. Using ideas from sparse Bayesian learning, the proposed prior is a Markov random field with individual unknown variances on each edge, which allows for sharp edges. Such a prior model remarkably captures and preserves both the edge structures and continuous regions of natural images while being computationally attractive. The proposed reconstruction algorithm is an efficient EM (expectation maximization) algorithm where the actual computations essentially reduce to scalar Gaussian message passing. Simulation results show that the proposed approach works well even with few projections, and it yields (slightly) better results than a state-of-the-art method.},
  keywords = {Bayes methods;expectation-maximisation algorithm;Gaussian processes;image reconstruction;learning (artificial intelligence);Markov processes;tomography;gray-scale images;Gaussian message passing;voxel-domain prior;tomographic image reconstruction;2D images;3D images;sparse Bayesian learning;Markov random field;edge structures;natural images;EM algorithm;expectation maximization algorithm;Three-dimensional displays;Two dimensional displays;Message passing;Signal processing algorithms;Image reconstruction;Image edge detection;Solid modeling},
  doi = {10.1109/EUSIPCO.2016.7760658},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251946.pdf},
}
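A minimal 1-D analogue of the prior described above: a chain MRF with an unknown variance on every edge, fit by alternating a Gaussian solve for the signal and a crude per-edge variance re-estimate. The paper's EM uses full scalar Gaussian message passing (including posterior variances) and a tomographic forward operator; here the "measurement" is just the noisy signal itself, so this is an illustration of the edge-preserving mechanism only.

```python
import numpy as np

def edge_variance_denoise(y, sigma2=0.01, iters=50, eps=1e-6):
    """y: noisy 1-D signal. Returns an edge-preserving smooth estimate."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n difference operator
    gamma = np.ones(n - 1)                    # per-edge prior variances
    x = y.copy()
    for _ in range(iters):
        # MAP estimate of x given the current edge variances gamma
        A = np.eye(n) / sigma2 + D.T @ (D / gamma[:, None])
        x = np.linalg.solve(A, y / sigma2)
        # Crude variance re-estimate (the paper's EM also adds the
        # posterior variance of each edge difference)
        gamma = (D @ x) ** 2 + eps
    return x

# Edges across a jump get a large gamma (weak smoothing), so sharp edges
# survive while flat regions are strongly smoothed.
```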
@InProceedings{7760659,
  author = {T. Goehring and X. Yang and J. J. M. Monaghan and S. Bleeck},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Speech enhancement for hearing-impaired listeners using deep neural networks with auditory-model based features},
  year = {2016},
  pages = {2300-2304},
  abstract = {Speech understanding in adverse acoustic environments is still a major problem for users of hearing-instruments. Recent studies on supervised speech segregation show good promise to alleviate this problem by separating speech-dominated from noise-dominated spectro-temporal regions with estimated time-frequency masks. The current study compared a previously proposed feature set to a novel auditory-model based feature set using a common deep neural network based speech enhancement framework. The performance of both feature extraction methods was evaluated with objective measurements and a subjective listening test to measure speech perception scores in terms of intelligibility and quality with 17 hearing-impaired listeners. Significant improvements in speech intelligibility and quality ratings were found for both feature extraction systems. However, the auditory-model based feature set showed superior performance compared to the comparison feature set indicating that auditory-model based processing could provide further improvements for supervised speech segregation systems and their potential applications in hearing instruments.},
  keywords = {feature extraction;hearing;hearing aids;neural nets;speech enhancement;speech intelligibility;time-frequency analysis;hearing instrument;supervised speech segregation system;speech quality;speech intelligibility;speech perception score measurement;subjective listening test;feature extraction method;speech enhancement framework;time-frequency mask estimation;speech-dominated spectro-temporal region;noise-dominated spectro-temporal region;adverse acoustic environment;auditory-model-based feature set;deep neural network;hearing-impaired listener;Speech;Speech enhancement;Signal to noise ratio;Noise measurement;Feature extraction;Testing;Signal processing algorithms;hearing aids;speech enhancement;deep neural networks;auditory models},
  doi = {10.1109/EUSIPCO.2016.7760659},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252244.pdf},
}
@InProceedings{7760660,
  author = {K. Mahkonen and A. Hurmalainen and T. Virtanen and J. Kämäräinen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Cascade processing for speeding up sliding window sparse classification},
  year = {2016},
  pages = {2305-2309},
  abstract = {Sparse representations have been found to provide high classification accuracy in many fields. Their drawback is the high computational load. In this work, we propose a novel cascaded classifier structure to speed up the decision process while utilizing sparse signal representation. In particular, we apply the cascaded decision process to a noise-robust automatic speech recognition task. The cascaded decision process is implemented using a feedforward neural network (NN) and time-sparse versions of the non-negative matrix factorization (NMF) based sparse classification method of [1]. The recognition accuracy of our cascade is among the three best in the recent CHiME2013 benchmark, and it reaches the accuracy of the NMF-only method of [1] six times faster.},
  keywords = {feedforward neural nets;matrix decomposition;signal classification;signal representation;speech recognition;sliding window sparse classification;cascaded classifier structure;sparse signal representation;cascaded decision process;noise robust automatic speech recognition;feedforward neural network;nonnegative matrix factorization based sparse classification;NMF;cascade processing;Artificial neural networks;Signal processing;Grammar;Europe;Automatic speech recognition;Sparse matrices;Speech;Automatic speech recognition;non-negative matrix factorization;cascade classification;cascade processing},
  doi = {10.1109/EUSIPCO.2016.7760660},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252399.pdf},
}
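The cascade idea above is simple to sketch: a cheap first stage handles the frames it is confident about, and only uncertain frames are passed to the expensive sparse second stage. In the paper the two stages are an NN and time-sparse NMF decoding; here both are placeholder callables supplied by the caller, so this shows the control flow only.

```python
import numpy as np

def cascade_predict(X, fast_proba, slow_predict, threshold=0.9):
    """X: (n_frames, n_feats) feature matrix.
    fast_proba(X) -> (n_frames, n_classes) class probabilities (cheap stage);
    slow_predict(X) -> labels (expensive stage).
    Returns labels and the fraction of frames sent to the slow stage."""
    P = fast_proba(X)
    labels = P.argmax(axis=1)
    unsure = P.max(axis=1) < threshold        # low-confidence frames only
    if unsure.any():
        labels[unsure] = slow_predict(X[unsure])
    return labels, unsure.mean()
```

The speed-up comes entirely from how rarely the slow stage fires, which the confidence threshold trades off against accuracy.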
@InProceedings{7760661,
  author = {A. Diment and M. Parviainen and T. Virtanen and R. Zelov and A. Glasman},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Noise-robust detection of whispering in telephone calls using deep neural networks},
  year = {2016},
  pages = {2310-2314},
  abstract = {Detection of whispered speech in the presence of high levels of background noise has applications in fraudulent behaviour recognition. For instance, it can serve as an indicator of possible insider trading. We propose a deep neural network (DNN)-based whispering detection system, which operates on both magnitude and phase features, including the group delay feature from all-pole models (APGD). We show that the APGD feature outperforms the conventional ones. Trained and evaluated on the collected diverse dataset of whispered and normal speech with emulated phone line distortions and significant amounts of added background noise, the proposed system performs with accuracies as high as 91.8%.},
  keywords = {delays;distortion;neural nets;telecommunication computing;telephone sets;phone line distortions;group delay feature;APGD;all-pole models;DNN-based whispering detection system;background noise;whispered speech detection;deep neural networks;telephone calls;noise-robust detection;Speech;Feature extraction;Noise measurement;Training;Signal processing;Neural networks;Speech recognition},
  doi = {10.1109/EUSIPCO.2016.7760661},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570253952.pdf},
}
@InProceedings{7760662,
  author = {M. H. Soni and H. A. Patil},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Novel deep autoencoder features for non-intrusive speech quality assessment},
  year = {2016},
  pages = {2315-2319},
  abstract = {To emulate the human perception in quality assessment, an objective metric or assessment method is required, which is a challenging task. Moreover, assessing the quality of speech without any reference or the ground truth is altogether more difficult. In this paper, we propose a new non-intrusive speech quality assessment metric for objective evaluation of speech quality. The originality of the proposed scheme lies in using a deep autoencoder to extract low-dimensional features from the spectrum of the speech signal and in finding a mapping between features and subjective scores using an artificial neural network (ANN). We have shown that autoencoder features capture noise information in a better way than state-of-the-art Filterbank Energies (FBEs). Quantification of our experimental results suggests that the proposed metric gives more accurate and better-correlated scores than the existing benchmark for objective non-intrusive quality assessment, the ITU-T P.563 standard.},
  keywords = {feature extraction;neural nets;speech coding;deep autoencoder features;nonintrusive speech quality assessment;human perception;feature extraction;speech signal;artificial neural network;Speech;Quality assessment;Feature extraction;Noise measurement;Signal processing algorithms;Speech processing;Neural networks},
  doi = {10.1109/EUSIPCO.2016.7760662},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256007.pdf},
}
@InProceedings{7760663,
  author = {I. Jauk and A. Bonafonte and S. Pascual},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Acoustic feature prediction from semantic features for expressive speech using deep neural networks},
  year = {2016},
  pages = {2320-2324},
  abstract = {The goal of the study is to predict acoustic features of expressive speech from semantic vector space representations. Though a lot of successful work was invested in expressiveness analysis and prediction, the results often involve manual labeling, or indirect prediction evaluation such as speech synthesis. The proposed analysis aims at direct acoustic feature prediction and comparison to original acoustic features from an audiobook. The audiobook is mapped in a semantic vector space. A set of acoustic features is extracted from the same utterances, involving iVectors trained on MFCC and F0 basis. Two regression models are trained with the semantic coordinates, DNNs and a baseline CART. Later, semantic and acoustic context features are combined for the prediction. The prediction is achieved successfully using the DNNs. A closer analysis shows that the prediction works best for larger utterances or utterances with specific contexts, and worst for general short utterances and proper names.},
  keywords = {feature extraction;neural nets;speech processing;DNN;F0 basis;MFCC basis;iVectors;semantic vector space;audiobook;direct acoustic feature prediction;indirect prediction evaluation;manual labeling;expressiveness analysis;semantic vector space representation;expressive speech;semantic feature;deep neural network;Acoustics;Semantics;Feature extraction;Context;Pragmatics;Speech;Databases},
  doi = {10.1109/EUSIPCO.2016.7760663},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256345.pdf},
}
@InProceedings{7760664,
  author = {S. Pascual and A. Bonafonte},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Multi-output RNN-LSTM for multiple speaker speech synthesis and adaptation},
  year = {2016},
  pages = {2325-2329},
  abstract = {Deep Learning has been applied successfully to speech processing. In this paper we propose an architecture for speech synthesis using multiple speakers. Some hidden layers are shared by all the speakers, while there is a specific output layer for each speaker. Objective and perceptual experiments prove that this scheme produces much better results in comparison with a single-speaker model. Moreover, we also tackle the problem of speaker adaptation by adding a new output branch to the model and successfully training it without modifying the base optimized model. This fine-tuning method achieves better results than training the new speaker from scratch with its own model.},
  keywords = {learning (artificial intelligence);recurrent neural nets;speaker recognition;speech synthesis;multiple speaker speech synthesis;multiple speaker speech adaptation;multioutput RNN-LSTM;deep learning;base optimized model;fine tuning method;long short term memory architecture;recurrent neural network;Training;Adaptation models;Acoustics;Computer architecture;Speech;Speech synthesis;Data models},
  doi = {10.1109/EUSIPCO.2016.7760664},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256358.pdf},
}
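A structural sketch of the multi-output idea: one shared trunk, one output head per speaker, and adaptation means fitting only a new head while the trunk stays frozen. To stay self-contained, the trunk here is a fixed random nonlinear map and the heads are ridge regressions; the paper instead trains an RNN-LSTM trunk with backpropagation, so this illustrates the parameter-sharing layout, not the training procedure.

```python
import numpy as np

class MultiSpeakerModel:
    def __init__(self, d_in, d_hidden=256, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 1.0 / np.sqrt(d_in), (d_in, d_hidden))
        self.heads = {}                        # speaker id -> output matrix

    def trunk(self, X):
        """Shared hidden representation (frozen stand-in for the LSTM)."""
        return np.tanh(X @ self.W)

    def fit_head(self, speaker, X, Y, lam=1e-2):
        """Fit (or later adapt) one speaker's head; the trunk is untouched."""
        H = self.trunk(X)
        A = H.T @ H + lam * np.eye(H.shape[1])
        self.heads[speaker] = np.linalg.solve(A, H.T @ Y)

    def synth_features(self, speaker, X):
        return self.trunk(X) @ self.heads[speaker]
```

Adapting to a new speaker is just another `fit_head` call, which mirrors the paper's "add a new output branch without modifying the base model" strategy.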
@InProceedings{7760665,
  author = {S. Allahyani and P. Date},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A minimum variance filter for continuous discrete systems with additive-multiplicative noise},
  year = {2016},
  pages = {2330-2334},
  abstract = {In this paper, we extend the minimum variance filter, which is proposed in the literature for discrete state space systems with multiplicative noise, to continuous-discrete systems with multiplicative noise. The differential equations that describe the process are discretised using the Euler scheme at a higher sampling frequency than the measurement frequency. We test the performance of our new filter, the continuous-discrete filter (CDF), on simulated numerical examples and compare the results with the discrete-discrete filter (DDF), which ignores the state behaviour in between the measurement samples. The results show that the CDF outperforms the DDF in all the cases examined.},
  keywords = {differential equations;discrete systems;filtering theory;minimum variance filter;continuous discrete systems;additive multiplicative noise;discrete state space systems;differential equations;Euler scheme;sampling frequency;continuous discrete filter;Mathematical model;Time measurement;Covariance matrices;Noise measurement;Bayes methods;Kalman filters;Numerical models},
  doi = {10.1109/EUSIPCO.2016.7760665},
  issn = {2076-1465},
  month = {Aug},
}
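A minimal sketch of the continuous-discrete scheme described above: the continuous dynamics are Euler-discretised on a finer grid than the measurements (K sub-steps per sample), then a standard Kalman update is applied at each measurement. For brevity the sketch is linear with additive noise only; the paper's minimum variance filter also carries the multiplicative-noise terms through the covariance propagation.

```python
import numpy as np

def cd_kalman_filter(ys, F, H, Qc, R, m0, P0, dt, K=10):
    """ys: (T, m) measurements taken every K*dt seconds.
    F: continuous-time dynamics matrix, Qc: diffusion covariance,
    H, R: measurement model. Returns filtered means."""
    m, P = m0.copy(), P0.copy()
    out = []
    for y in ys:
        for _ in range(K):                     # Euler sub-steps between samples
            m = m + F @ m * dt
            P = P + (F @ P + P @ F.T + Qc) * dt
        S = H @ P @ H.T + R                    # standard measurement update
        Kg = P @ H.T @ np.linalg.inv(S)
        m = m + Kg @ (y - H @ m)
        P = P - Kg @ H @ P
        out.append(m.copy())
    return np.array(out)
```

Setting K=1 recovers a coarse discrete-discrete filter, which is essentially the DDF baseline the abstract compares against.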
@InProceedings{7760666,
  author = {A. Carini and G. L. Sicuranza},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Recursive functional link polynomial filters: An introduction},
  year = {2016},
  pages = {2335-2339},
  abstract = {In this paper, we first introduce a novel sub-class of recursive linear-in-the-parameters nonlinear filters, called recursive functional link polynomial filters, which are derived by using the constructive rule of Volterra filters. These filters are universal approximators, according to the Stone-Weierstrass theorem, and offer a remedy to the main drawback of their finite memory counterparts, that is the curse of dimensionality. Since recursive nonlinear filters become, in general, unstable for large input signals, we then consider a simple stabilization procedure by slightly modifying the input-output relationship of recursive functional link polynomial filters. The resulting filters are always stable and, even though no longer universal approximators, still offer good modeling performance for nonlinear systems.},
  keywords = {nonlinear filters;polynomial approximation;recursive filters;stability;recursive functional link polynomial filters;recursive linear-in-the-parameters nonlinear filters;Volterra filters;universal approximators;Stone-Weierstrass theorem;stabilization;nonlinear systems;Legged locomotion;Nonlinear systems;Signal processing algorithms;Europe;Signal processing;Stability criteria;Linear-in-the-parameters nonlinear filters;recursive functional link polynomial filters;universal approximators;bounded-input bounded-output stability},
  doi = {10.1109/EUSIPCO.2016.7760666},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570250425.pdf},
}
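A minimal sketch of a recursive linear-in-the-parameters filter with a functional-link expansion of the delayed output. The stabilisation used here (squashing the fed-back output through tanh before re-expansion) is a stand-in for the paper's modification of the input-output relationship: it bounds the feedback path at the cost of exact universality, which mirrors the trade-off the abstract describes. The expansion basis and memory length are illustrative.

```python
import numpy as np

def flink(u):
    """Trigonometric/polynomial functional-link expansion of a scalar."""
    return np.array([u, np.sin(np.pi * u), np.cos(np.pi * u), u * u])

def recursive_fl_filter(x, w, stabilise=True):
    """y[n] = w . [flink(x[n]), flink(x[n-1]), flink(y[n-1])];
    w must have length 12 (three 4-element expansions)."""
    y = np.zeros(len(x))
    xprev, yprev = 0.0, 0.0
    for n, xn in enumerate(x):
        fb = np.tanh(yprev) if stabilise else yprev   # bounded feedback path
        phi = np.concatenate([flink(xn), flink(xprev), flink(fb)])
        y[n] = w @ phi
        xprev, yprev = xn, y[n]
    return y
```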
@InProceedings{7760667,
  author = {R. Lguensat and R. Fablet and P. Ailliot and P. Tandeo},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {An exemplar-based hidden Markov model framework for nonlinear state-space models},
  year = {2016},
  pages = {2340-2344},
  abstract = {In this work we present a data-driven method for the reconstruction of dynamical systems from noisy and incomplete observation sequences. The key idea is to benefit from the availability of representative datasets of trajectories of the system of interest. These datasets provide an implicit representation of the dynamics of this system, in contrast to the explicit knowledge of the dynamical model. This data-driven strategy is of particular interest in a large variety of situations, e.g., modeling uncertainties and inconsistencies, unknown explicit models, computationally demanding models, etc. We address this exemplar-based reconstruction issue using a Hidden Markov Model (HMM) and we illustrate the relevance of the method for missing data interpolation issues in multivariate time series. As such, our contribution opens new research avenues for a variety of application domains to exploit the wealth of archived observation and simulation data, aiming at a better analysis and reconstruction of dynamical systems using past and future observation sequences.},
  keywords = {hidden Markov models;interpolation;signal classification;signal reconstruction;time series;nonlinear state-space models;exemplar-based hidden Markov model framework;data-driven method;dynamical system reconstruction;noisy observation sequences;incomplete observation sequences;representative datasets;modeling uncertainties;modeling inconsistencies;unknown explicit models;computationally demanding models;exemplar-based reconstruction;data interpolation;multivariate time series;Hidden Markov models;Smoothing methods;Europe;Trajectory;Data models;Kalman filters;Exemplar-based model;Missing data estimation;Hidden Markov Models;Analog method},
  doi = {10.1109/EUSIPCO.2016.7760667},
  issn = {2076-1465},
  month = {Aug},
}
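A minimal sketch of the exemplar ("analog") idea: the unknown dynamical model is replaced by a catalog of observed transitions, and a forecast is the weighted average of the successors of the current state's nearest neighbours. The paper embeds this operator in an HMM smoother; here only the forecast operator and a naive forward fill of missing samples are shown, with illustrative names throughout.

```python
import numpy as np

def analog_forecast(x, catalog_t, catalog_tp1, k=5):
    """catalog_t, catalog_tp1: (N, d) states and their observed successors.
    Returns a one-step forecast from state x."""
    d2 = np.sum((catalog_t - x) ** 2, axis=1)
    idx = np.argsort(d2)[:k]                   # k nearest analogs
    w = np.exp(-d2[idx] / (np.median(d2[idx]) + 1e-12))
    return (w[:, None] * catalog_tp1[idx]).sum(0) / w.sum()

def fill_missing(series, catalog_t, catalog_tp1):
    """series: (T, d) with NaN rows marking missing observations.
    Naive forward pass; the paper instead smooths with past AND future data."""
    out = series.copy()
    for t in range(1, len(out)):
        if np.isnan(out[t]).any():
            out[t] = analog_forecast(out[t - 1], catalog_t, catalog_tp1)
    return out
```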
@InProceedings{7760668,
  author = {M. H. Bahari and J. Plata-Chaves and A. Bertrand and M. Moonen},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Distributed labelling of audio sources in wireless acoustic sensor networks using consensus and matching},
  year = {2016},
  pages = {2345-2349},
  abstract = {In this paper, we propose a new method for distributed labelling of audio sources in wireless acoustic sensor networks (WASNs). We consider WASNs comprising nodes equipped with multiple microphones observing signals transmitted by multiple sources. An important step toward cooperation between the nodes, e.g. for voice-activity-detection, is a network-wide consensus on the source labelling such that all nodes assign the same unique label to each source. In this paper, a hierarchical approach is applied such that first a network clustering algorithm is performed and then in each sub-network, the energy signatures of the sources are estimated using a non-negative independent component analysis over the energy patterns observed by the different nodes. Finally the source labels are obtained by an iterative consensus and matching algorithm, which compares and matches the energy signatures estimated in different sub-networks. The experimental results show the effectiveness of the proposed method.},
  keywords = {acoustic communication (telecommunication);audio signal processing;independent component analysis;iterative methods;voice activity detection;wireless sensor networks;distributed labelling;audio sources;wireless acoustic sensor networks;WASN;signal transmitted;voice activity detection;source labelling;network clustering algorithm;nonnegative independent component analysis;iterative consensus algorithm;matching algorithm;Signal processing algorithms;Labeling;Clustering algorithms;Wireless sensor networks;Wireless communication;Microphones;Estimation;Distributed labelling;consensus and matching;wireless acoustic sensor networks;energy signatures;non-negative independent component analysis},
  doi = {10.1109/EUSIPCO.2016.7760668},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252328.pdf},
}
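A minimal sketch of the matching step described above: each sub-network holds one energy signature per source, and a consistent labelling is obtained by maximising the total correlation between matched signatures, i.e., an assignment problem (solved here with SciPy's Hungarian-algorithm solver). The clustering and non-negative ICA stages that produce the signatures, and the iterative consensus across many sub-networks, are not shown.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_labels(E_a, E_b):
    """E_a, E_b: (n_src, T) energy signatures from two sub-networks.
    Returns perm such that source k in A gets the label of perm[k] in B."""
    A = (E_a - E_a.mean(1, keepdims=True)) / (E_a.std(1, keepdims=True) + 1e-12)
    B = (E_b - E_b.mean(1, keepdims=True)) / (E_b.std(1, keepdims=True) + 1e-12)
    corr = A @ B.T / E_a.shape[1]              # pairwise correlation matrix
    row, col = linear_sum_assignment(-corr)    # maximise total correlation
    return col
```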
@InProceedings{7760669,
  author = {W. Tabikh and Y. Yuan-Wu and D. Slock},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Decentralizing multi-cell maximum weighted sum rate precoding via large system analysis},
  year = {2016},
  pages = {2350-2354},
  abstract = {We propose a decentralized algorithm for weighted sum rate (WSR) maximization via large system analysis. The rate maximization problem is solved via weighted sum mean-squared error (WSMSE) minimization. Decentralized processing relies on the exchange via a backhaul link of a low amount of information. The inter-cell interference terms couple the maximization problems at the different base stations (BS)s. Large system approximations are used to replace the inter-cell interference terms and to decouple the problems. We demonstrate that the approximations depend only on the slow fading terms or second order statistics of the channels. Then, each BS computes the transmit precoders to serve its own user equipments (UE)s locally. No feedback channels from the UEs to the serving BSs will be required.},
  keywords = {cellular radio;mean square error methods;optimisation;precoding;multicell maximum weighted sum rate precoding algorithm;large-system analysis;weighted sum rate maximization;WSR maximization;weighted sum mean-squared error minimization;WSMSE minimization;intercell interference;base station;slow-fading terms;second-order statistics;user equipments;Interference;Optimization;System analysis and design;Array signal processing;Europe;Precoding;Large System analysis;coordinated beamforming;decentralization},
  doi = {10.1109/EUSIPCO.2016.7760669},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570252356.pdf},
}
@InProceedings{7760670,
  author = {H. Lu and S. Shao and Y. Tang},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {A low-cost digital self-interference cancellation structure for full-duplex communications},
  year = {2016},
  pages = {2355-2359},
  abstract = {In full-duplex communications, the existing digital self-interference cancellation (DSIC) scheme cancels strong self-interference (SI) in the digital domain. To sample the strong self-interference, a high-performance analog-to-digital converter (ADC) is adopted, which leads to a high cost. To reduce the cost of the DSIC stage, this paper proposes a novel DSIC scheme, in which the reconstructed SI is converted to analog form and subtracted from the incoming signal before the ADC stage. Then the ADC samples only the weak user signal and does not require high performance. This paper analyzes the signal-to-interference-plus-noise power ratio achieved by the proposed DSIC scheme and derives the closed-form expression. In addition, comparisons show that the proposed DSIC scheme is far more cost-effective than the existing DSIC scheme. Finally, various simulations verify the ability of the proposed DSIC scheme to reduce the cost.},
  keywords = {analogue-digital conversion;interference suppression;multiplexing;full-duplex communications;low-cost digital self-interference cancellation structure;strong self-interference;digital domain;high-performance analog-to-digital converter;DSIC scheme;signal-to-interference-plus-noise power ratio;Interference cancellation;Dynamic range;Receivers;Quantization (signal);Signal to noise ratio;Decoding;Full-duplex;digital self-interference cancellation;analog-to-digital converter;digital-to-analog converter;cost},
  doi = {10.1109/EUSIPCO.2016.7760670},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255017.pdf},
}
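A toy numerical illustration of why subtracting the reconstructed SI before the ADC helps: the quantiser range can then be matched to the weak desired signal instead of the strong SI. The bit width, power ratio and the assumption of perfect SI reconstruction are all illustrative, not taken from the paper.

```python
import numpy as np

def quantize(x, n_bits, full_scale):
    """Uniform mid-tread quantiser with clipping at +/- full_scale."""
    step = 2 * full_scale / 2 ** n_bits
    return np.clip(np.round(x / step) * step, -full_scale, full_scale)

rng = np.random.default_rng(0)
n = 10_000
user = 0.01 * rng.standard_normal(n)           # weak desired signal
si = 1.0 * rng.standard_normal(n)              # strong self-interference
si_hat = si                                    # assume perfect reconstruction

# Conventional DSIC: sample the strong composite, cancel in the digital domain
digital = quantize(user + si, n_bits=8, full_scale=4.0) - si_hat
# Proposed structure: cancel in analog, then sample only the weak residual
analog = quantize(user + si - si_hat, n_bits=8, full_scale=0.04)

for name, est in [("cancel after ADC", digital), ("cancel before ADC", analog)]:
    print(f"{name}: residual MSE = {np.mean((est - user) ** 2):.2e}")
```

With the same 8-bit converter, the quantisation step shrinks by the ratio of the two full-scale settings, which is exactly the dynamic-range argument the abstract makes.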
@InProceedings{7760671,
  author = {H. Araghi and M. A. Akhaee and A. Amini},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Approximate ML estimator for compensation of timing mismatch and jitter noise in TI-ADCs},
  year = {2016},
  pages = {2360-2364},
  abstract = {Time-interleaved analog to digital converters (TI-ADC) offer high sampling rates by passing the input signal through C parallel low-rate ADCs. We can achieve C-times the sampling rate of a single ADC if all the shifts between the channels are identical. In practice, however, it is not possible to avoid mismatch among shifts. Besides, the samples are also subject to jitter noise. In this paper, we propose a blind method to mitigate the joint effects of sampling jitter and shift mismatch in the TI-ADC structure. We assume the input signal to be bandlimited and incorporate the jitter via a stochastic model. Next, we derive an approximate model based on a first-order Taylor series and use an iterative maximum likelihood estimator to reconstruct the uniform samples of the input signal. The simulation results show that with a slight increase in the mean square-error, we obtain a fast blind compensation algorithm.},
  keywords = {analogue-digital conversion;iterative methods;maximum likelihood estimation;mean square error methods;stochastic processes;timing jitter;ML estimator approximation;timing mismatch compensation;jitter noise compensation;TI-ADCs;TI-ADC structure;sampling jitter effects;shift mismatch effects;stochastic model;first-order Taylor series;iterative maximum likelihood estimator;mean square error;fast blind compensation algorithm;time-interleaved analog to digital converter;Timing;Jitter;Maximum likelihood estimation;Bayes methods;Signal processing algorithms;Iterative methods;Europe;Bandlimited signals;jitter noise;maximum likelihood estimation;mismatch compensation;time-interleaved ADC},
  doi = {10.1109/EUSIPCO.2016.7760671},
  issn = {2076-1465},
  month = {Aug},
}
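A minimal sketch of the first-order Taylor model the abstract relies on: a sample taken at t_n + delta_n is mapped back to the nominal grid via x(t_n) ~ x(t_n + delta_n) - delta_n * x'(t_n), with the derivative approximated from the samples themselves. The paper estimates the offsets blindly by iterative ML; here they are assumed known, and the tone, rates and skew values are illustrative only.

```python
import numpy as np

def taylor_correct(samples, deltas, dt):
    """samples[n] = x(n*dt + deltas[n]); returns an estimate of x(n*dt)."""
    dx = np.gradient(samples, dt)              # crude derivative estimate
    return samples - deltas * dx

# Example: 4-channel TI-ADC with fixed per-channel timing skew (seconds)
dt = 1e-3
t = np.arange(1024) * dt
skew = np.tile([0.0, 2e-4, -1e-4, 3e-4], 256)
x = np.sin(2 * np.pi * 40 * (t + skew))        # skewed samples of a 40 Hz tone
x_hat = taylor_correct(x, skew, dt)
ref = np.sin(2 * np.pi * 40 * t)               # ideal uniform samples
print("RMSE before:", np.sqrt(np.mean((x - ref) ** 2)))
print("RMSE after :", np.sqrt(np.mean((x_hat - ref) ** 2)))
```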
@InProceedings{7760672,
  author = {D. Taleb and Y. Liu and M. Pesavento},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Full-rate general rank beamforming in single-group multicasting networks using non-orthogonal STBC},
  year = {2016},
  pages = {2365-2369},
  abstract = {In this paper, we propose a novel single-group multicast beamforming technique using non-orthogonal space-time block coding (STBC). In the system, a multi-antenna base station broadcasts its information to a large group of single-antenna users. We introduce a modified max-min fair beamforming optimization problem, which maximizes the worst user's modified Euclidean distance instead of the conventional worst user's signal-to-noise ratio. Our beamforming formulation extends the traditional max-min fair problem, by taking the pairwise error probability of the employed STBC into consideration, which in turn improves the overall system performance. For the resulting non-convex quadratically constrained quadratic program, an iterative inner convex approximation method is devised, in which the non-convex part of the problem is linearized and a sequence of inner convex optimization problems is solved. Simulation results demonstrate considerable improvement for networks with a large number of users.},
  keywords = {approximation theory;array signal processing;iterative methods;minimax techniques;multicast communication;space-time block codes;full-rate general rank beamforming;single-group multicasting networks;nonorthogonal STBC;single-group multicast beamforming technique;nonorthogonal space-time block coding;modified max-min fair beamforming optimization problem;modified Euclidean distance;beamforming formulation;nonconvex quadratically constrained quadratic program;inner convex optimization problems;Array signal processing;Optimization;Manganese;Multicast communication;Signal to noise ratio;Receivers;Pairwise error probability;Space-time block codes;general-rank transmit beamforming;pairwise error probability minimization;iterative convex optimization},
  doi = {10.1109/EUSIPCO.2016.7760672},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255715.pdf},
}
@InProceedings{7760673,
  author = {K. Batstone and M. Oskarsson and K. Åström},
  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},
  title = {Robust time-of-arrival self calibration with missing data and outliers},
  year = {2016},
  pages = {2370-2374},
  abstract = {The problem of estimating receiver-sender node positions from measured receiver-sender distances is a key issue in different applications such as microphone array calibration, radio antenna array calibration, mapping and positioning using ultra-wideband and mapping and positioning using round-trip-time measurements between mobile phones and Wi-Fi-units. Thanks to recent research in this area we have an increased understanding of the geometry of this problem. In this paper, we study the problem of missing information and the presence of outliers in the data. We propose a novel hypothesis and test framework that efficiently finds initial estimates of the unknown parameters and combine such methods with optimization techniques to obtain accurate and robust systems. The proposed systems are evaluated against current state-of-the-art methods on a large set of benchmark tests. This is evaluated further on Wi-Fi round-trip time and ultra-wideband measurements to give a realistic example of self calibration for indoor localization.},
  keywords = {calibration;indoor navigation;indoor radio;optimisation;position measurement;radionavigation;time-of-arrival estimation;robust time-of-arrival self calibration;missing data;receiver-sender node position estimation;missing information problem;optimization techniques;indoor localization;Receivers;Calibration;Optimization;Robustness;Microphones;Radio transmitters},
  doi = {10.1109/EUSIPCO.2016.7760673},
  issn = {2076-1465},
  month = {Aug},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255858.pdf},
}
@InProceedings{7760674,\n  author = {A. A. Okandeji and M. R. A. Khandaker and K. Wong},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Two-way beamforming optimization for full-duplex SWIPT systems},\n  year = {2016},\n  pages = {2375-2379},\n  abstract = {In this paper, we investigate the problem of two-way relay beamforming optimization to maximize the achievable sum-rate of a simultaneous wireless information and power transfer (SWIPT) system with a full-duplex (FD) multiple-input multiple-output (MIMO) amplify-and-forward (AF) relay. In particular, we address the optimal joint design of the receiver power splitting (PS) ratio and the beamforming matrix at the relay node given the channel state information (CSI). Our contribution is an iterative algorithm based on difference of convex (DC) programming and one-dimensional searching to achieve the joint optimal solution. Simulation results are provided to demonstrate the effectiveness of the proposed algorithm.},\n  keywords = {amplify and forward communication;array signal processing;matrix algebra;MIMO communication;optimisation;radiofrequency power transmission;relay networks (telecommunication);telecommunication power management;wireless channels;CSI;channel state information;beamforming matrix;receiver power splitting ratio;FD MIMO AF relay;full-duplex multiple-input multiple-output amplify-and-forward relay;two-way relay beamforming optimization;full-duplex SWIPT systems;simultaneous wireless information and power transfer system;Relays;Optimization;Array signal processing;Silicon;Delays;MIMO;Receiving antennas},\n  doi = {10.1109/EUSIPCO.2016.7760674},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570255931.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we investigate the problem of two-way relay beamforming optimization to maximize the achievable sum-rate of a simultaneous wireless information and power transfer (SWIPT) system with a full-duplex (FD) multiple-input multiple-output (MIMO) amplify-and-forward (AF) relay. In particular, we address the optimal joint design of the receiver power splitting (PS) ratio and the beamforming matrix at the relay node given the channel state information (CSI). Our contribution is an iterative algorithm based on difference-of-convex (DC) programming and a one-dimensional search to achieve the joint optimal solution. Simulation results are provided to demonstrate the effectiveness of the proposed algorithm.\n
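The one-dimensional search over the power-splitting ratio can be pictured with the toy sketch below. The rate and energy-harvesting models here are simplified stand-ins (a linear harvesting model and a single scalar SNR per direction), not the paper's DC-programming formulation; all names and constants are illustrative.

import numpy as np

def best_ps_ratio(snr_fwd, snr_rev, p_rx, e_min, eta=0.5, grid=1000):
    best = (None, -np.inf)
    for rho in np.linspace(0.0, 1.0, grid):
        harvested = eta * (1.0 - rho) * p_rx     # linear EH model (assumed)
        if harvested < e_min:                    # minimum-energy constraint
            continue
        # assumed model: the PS ratio scales the effective SNR of both links
        rate = np.log2(1 + rho * snr_fwd) + np.log2(1 + rho * snr_rev)
        if rate > best[1]:
            best = (rho, rate)
    return best   # (best rho on the grid, corresponding sum-rate)

print(best_ps_ratio(snr_fwd=10.0, snr_rev=8.0, p_rx=1.0, e_min=0.1))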
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparse channel estimation based on a reweighted least-mean mixed-norm adaptive filter algorithm.\n \n \n \n \n\n\n \n Li, Y.; Wang, Y.; and Albu, F.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2380-2384, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"SparsePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760675,\n  author = {Y. Li and Y. Wang and F. Albu},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse channel estimation based on a reweighted least-mean mixed-norm adaptive filter algorithm},\n  year = {2016},\n  pages = {2380-2384},\n  abstract = {A sparsity-aware least-mean mixed-norm (LMMN) adaptive filter algorithm is proposed for sparse channel estimation applications. The proposed algorithm is realized by incorporating a sum-log function constraint into the cost function of a LMMN which is a mixed norm controlled by a scalar-mixing parameter. As a result, a shrinkage is given to enhance the performance of the LMMN algorithm when the majority of the channel taps are zeros or near-zeros. The channel estimation behaviors of the proposed reweighted sparse LMMN algorithm is investigated and discussed in comparison with those of the standard LMS and the least-mean square/fourth (LMS/F) and previously sparse LMS/F algorithms. The simulation results show that the proposed reweighted sparse LMMN algorithm is superior to aforementioned algorithms with respect to the convergence speed and steady-state error floor.},\n  keywords = {adaptive filters;channel estimation;least mean squares methods;sparse channel estimation;reweighted least-mean adaptive filter;mixed-norm adaptive filter;sparsity-aware adaptive filter;sum-log function constraint;scalar-mixing parameter;channel taps;convergence speed;steady-state error floor;Signal processing algorithms;Channel estimation;Convergence;Steady-state;Estimation;Cost function;Europe;least-mean mixed-norm;LMS;LMS/F;sparse channel estimation;sparse adaptive filtering},\n  doi = {10.1109/EUSIPCO.2016.7760675},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570248327.pdf},\n}\n\n
\n
\n\n\n
\n A sparsity-aware least-mean mixed-norm (LMMN) adaptive filter algorithm is proposed for sparse channel estimation applications. The proposed algorithm is realized by incorporating a sum-log function constraint into the cost function of the LMMN algorithm, whose mixed norm is controlled by a scalar mixing parameter. As a result, a shrinkage term is introduced that enhances the performance of the LMMN algorithm when the majority of the channel taps are zero or near zero. The channel estimation behavior of the proposed reweighted sparse LMMN algorithm is investigated and compared with that of the standard LMS, the least-mean square/fourth (LMS/F), and previously proposed sparse LMS/F algorithms. The simulation results show that the proposed reweighted sparse LMMN algorithm is superior to the aforementioned algorithms with respect to convergence speed and steady-state error floor.\n
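A minimal sketch of such an update, assuming the usual form of these ingredients: the mixed-norm term blends the LMS (second-moment) and LMF (fourth-moment) gradients via a mixing parameter, and a reweighted log-sum zero attractor shrinks near-zero taps. Step sizes and constants are illustrative, not the paper's tuned values.

import numpy as np

def sparse_lmmn(x, d, L=16, mu=0.01, delta=0.5, rho=5e-4, eps=10.0):
    """x: input signal, d: desired signal, L: filter length (assumed)."""
    w = np.zeros(L)
    for n in range(L, len(x)):
        u = x[n - L:n][::-1]                # regressor, most recent first
        e = d[n] - w @ u
        # mixed-norm gradient: delta weights the e^2 part, (1-delta) the e^4 part
        w += mu * e * (delta + (1 - delta) * e ** 2) * u
        # reweighted zero attractor from the log-sum (sum-log) penalty
        w -= rho * np.sign(w) / (1.0 + eps * np.abs(w))
    return w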
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Filtering smooth altimetric signals using a Bayesian algorithm.\n \n \n \n \n\n\n \n Halimi, A.; Buller, G.; McLaughlin, S.; and Honeine, P.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2385-2389, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"FilteringPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760676,\n  author = {A. Halimi and G. Buller and S. McLaughlin and P. Honeine},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Filtering smooth altimetric signals using a Bayesian algorithm},\n  year = {2016},\n  pages = {2385-2389},\n  abstract = {This paper presents a new Bayesian strategy for the estimation of smooth signals corrupted by Gaussian noise. The method assumes a smooth evolution of a succession of continuous signals that can have a numerical or an analytical expression with respect to some parameters. The Bayesian model proposed takes into account the Gaussian properties of the noise and the smooth evolution of the successive signals. In addition, a gamma Markov random field prior is assigned to the signal energies and to the noise variances to account for their known properties. The resulting posterior distribution is maximized using a fast coordinate descent algorithm whose parameters are updated by analytical expressions. The proposed algorithm is tested on satellite altimetric data demonstrating good denoising results on both synthetic and real signals. The proposed algorithm is also shown to improve the quality of the altimetric parameters when combined with a parameter estimation strategy.},\n  keywords = {filtering theory;Gaussian noise;Markov processes;smooth altimetric signals filtering;Bayesian algorithm;Bayesian strategy;Bayesian model;Gaussian properties;gamma Markov random;posterior distribution;parameter estimation strategy;altimetric parameters;Signal processing algorithms;Bayes methods;Logic gates;Satellites;Estimation;Gaussian noise;Correlation;Altimetry;Bayesian algorithm;coordinate descent algorithm;gamma Markov random fields},\n  doi = {10.1109/EUSIPCO.2016.7760676},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251137.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a new Bayesian strategy for the estimation of smooth signals corrupted by Gaussian noise. The method assumes a smooth evolution of a succession of continuous signals that can have a numerical or an analytical expression with respect to some parameters. The Bayesian model proposed takes into account the Gaussian properties of the noise and the smooth evolution of the successive signals. In addition, a gamma Markov random field prior is assigned to the signal energies and to the noise variances to account for their known properties. The resulting posterior distribution is maximized using a fast coordinate descent algorithm whose parameters are updated by analytical expressions. The proposed algorithm is tested on satellite altimetric data demonstrating good denoising results on both synthetic and real signals. The proposed algorithm is also shown to improve the quality of the altimetric parameters when combined with a parameter estimation strategy.\n
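The fast coordinate descent with analytical updates can be illustrated by the toy denoiser below, where a simple quadratic smoothness prior stands in for the paper's gamma Markov random field model; each sample has a closed-form update given its neighbors, which is what makes the maximization cheap. All names and the prior itself are illustrative assumptions.

import numpy as np

def smooth_map(y, lam=5.0, iters=50):
    x = y.astype(float).copy()
    for _ in range(iters):
        for t in range(len(x)):
            nbrs = []
            if t > 0:
                nbrs.append(x[t - 1])
            if t < len(x) - 1:
                nbrs.append(x[t + 1])
            # closed-form coordinate minimizer of
            # (x_t - y_t)^2 / 2 + lam * sum_nbr (x_t - x_nbr)^2 / 2
            x[t] = (y[t] + lam * sum(nbrs)) / (1.0 + lam * len(nbrs))
    return x

noisy = np.sin(np.linspace(0, 2 * np.pi, 50)) + 0.3 * np.random.randn(50)
print(np.round(smooth_map(noisy)[:5], 2))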
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Near-end listening enhancement by noise-inverse speech shaping.\n \n \n \n \n\n\n \n Niermann, M.; Jax, P.; and Vary, P.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2390-2394, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Near-endPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760677,\n  author = {M. Niermann and P. Jax and P. Vary},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Near-end listening enhancement by noise-inverse speech shaping},\n  year = {2016},\n  pages = {2390-2394},\n  abstract = {In communication systems, clean speech is often reproduced by loudspeakers and disturbed by local acoustical noise. Near-end listening enhancement (NELE) is a technique to enhance the speech intelligibility in environmental noise by adaptively preprocessing the speech based on a noise estimate. Conventional NELE-algorithms adaptively filter the speech by applying spectral gains which are determined by maximizing intelligibility measures. Usually, this leads to speech amplifications at highly disturbed frequencies to overcome masking. In this paper, a new approach is presented which shapes the speech spectrum according to the inverse of the noise power spectrum. It is based on a simple gain rule. Its advantages are a predictable spectral behavior and a fixed computational complexity, since no optimization problem with an unknown number of iterations needs to be solved. Simulations have shown that it copes with a wide range of noise types and provides a similar performance compared to conventional algorithms.},\n  keywords = {noise (working environment);speech intelligibility;speech processing;noise power spectrum;speech spectrum;speech amplifications;noise estimate;environmental noise;speech intelligibility;noise-inverse speech shaping;near-end listening enhancement;Speech;Signal to noise ratio;Signal processing algorithms;Speech enhancement;Frequency-domain analysis;Shape},\n  doi = {10.1109/EUSIPCO.2016.7760677},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251568.pdf},\n}\n\n
\n
\n\n\n
\n In communication systems, clean speech is often reproduced by loudspeakers and disturbed by local acoustical noise. Near-end listening enhancement (NELE) is a technique to enhance the speech intelligibility in environmental noise by adaptively preprocessing the speech based on a noise estimate. Conventional NELE algorithms adaptively filter the speech by applying spectral gains determined by maximizing intelligibility measures. Usually, this leads to speech amplification at highly disturbed frequencies to overcome masking. In this paper, a new approach is presented which shapes the speech spectrum according to the inverse of the noise power spectrum. It is based on a simple gain rule. Its advantages are a predictable spectral behavior and a fixed computational complexity, since no optimization problem with an unknown number of iterations needs to be solved. Simulations have shown that it copes with a wide range of noise types and provides performance similar to that of conventional algorithms.\n
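A minimal sketch of a noise-inverse gain rule of this kind, assuming one plausible form: per-band gains follow the inverse of the noise power spectrum and are renormalized so the overall speech power is unchanged. The paper's exact gain rule and band layout may differ; this only illustrates the fixed-complexity idea.

import numpy as np

def noise_inverse_gains(speech_psd, noise_psd, floor=1e-8):
    g = 1.0 / np.sqrt(noise_psd + floor)      # inverse-noise spectral shaping
    # power normalization: total shaped speech power equals the original
    scale = np.sqrt(np.sum(speech_psd) / np.sum(g ** 2 * speech_psd))
    return scale * g

speech_psd = np.array([1.0, 0.8, 0.5, 0.2])
noise_psd = np.array([0.1, 0.4, 0.9, 0.2])
print(noise_inverse_gains(speech_psd, noise_psd))  # boosts noisy bands' relief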
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic localization of gas pipes from GPR imagery.\n \n \n \n \n\n\n \n Terrasse, G.; Nicolas, J.; Trouvé, E.; and Drouet, É.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2395-2399, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760678,\n  author = {G. Terrasse and J. Nicolas and E. Trouvé and É. Drouet},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic localization of gas pipes from GPR imagery},\n  year = {2016},\n  pages = {2395-2399},\n  abstract = {In order to improve asset knowledge and avoid third part damages during road works, the localization of gas pipes in a non-destructive way has become a wide domain of research during these last years. Several devices have been developed in order to answer this problem. Acoustic, electromagnetic or RFID technologies are used to find pipes in the underground. Ground Penetrating Radar (GPR) is also used to detect buried gas pipes. However it does not directly provide a 3D position but a reflection map called B-scan that the user must interpret. In this paper, we propose a novel method to automatically get the position of gas pipes with GPR acquisitions. This method uses a dictionary of theoretical pipe signatures. The correlation between each atom from the dictionary and the B-scan is used as feature in a two part supervised learning scheme. Our method has been applied to real data acquired on a test area and in real condition. The proposed method presents satisfying qualitative and quantitative results compared to other methods.},\n  keywords = {data acquisition;ground penetrating radar;learning (artificial intelligence);radar imaging;telecommunication computing;gas pipe automatic localization;GPR imagery;asset knowledge improvement;third-part damage avoidance;RFID technology;electromagnetic technology;acoustic technology;ground penetrating radar;buried gas pipe detection;B-scan;reflection map;GPR acquisition;two-part supervised learning scheme;Dictionaries;Ground penetrating radar;Shape;Supervised learning;Correlation;Clutter;Computational modeling;Gas pipes localization;GPR;Dictionary;Supervised learning},\n  doi = {10.1109/EUSIPCO.2016.7760678},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251626.pdf},\n}\n\n
\n
\n\n\n
\n In order to improve asset knowledge and avoid third-party damage during road works, the non-destructive localization of gas pipes has become a broad research area in recent years. Several devices have been developed to address this problem: acoustic, electromagnetic, or RFID technologies are used to locate pipes underground. Ground Penetrating Radar (GPR) is also used to detect buried gas pipes. However, it does not directly provide a 3D position but a reflection map, called a B-scan, that the user must interpret. In this paper, we propose a novel method to automatically recover the position of gas pipes from GPR acquisitions. This method uses a dictionary of theoretical pipe signatures. The correlation between each atom of the dictionary and the B-scan is used as a feature in a two-part supervised learning scheme. Our method has been applied to real data acquired on a test area under real conditions. The proposed method yields satisfactory qualitative and quantitative results compared to other methods.\n
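The correlation feature can be sketched as follows, with an assumed hyperbola parameterization rather than the authors' exact dictionary: each atom is a binary hyperbola image t(x) = sqrt(t0^2 + (x/v)^2), and its 2-D correlation with the B-scan serves as the detection feature.

import numpy as np
from scipy.signal import correlate2d

def hyperbola_atom(width, depth, t0, v):
    """Binary image of a pipe signature; t0, v are assumed parameters."""
    atom = np.zeros((depth, width))
    xs = np.arange(width) - width // 2
    ts = np.sqrt(t0 ** 2 + (xs / v) ** 2).astype(int)
    ts = np.clip(ts, 0, depth - 1)
    atom[ts, np.arange(width)] = 1.0
    return atom

def correlation_feature(bscan, atom):
    # zero-mean atom; 'same' keeps the map aligned with the B-scan pixels
    return correlate2d(bscan, atom - atom.mean(), mode='same')

bscan = np.random.randn(64, 64)            # stand-in for a real B-scan
score = correlation_feature(bscan, hyperbola_atom(21, 64, t0=10, v=1.5))
print(score.shape)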
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparse decomposition of the GPR useful signal from hyperbola dictionary.\n \n \n \n \n\n\n \n Terrasse, G.; Nicolas, J.; Trouvé, E.; and Drouet, É.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2400-2404, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"SparsePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760679,\n  author = {G. Terrasse and J. Nicolas and E. Trouvé and É. Drouet},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse decomposition of the GPR useful signal from hyperbola dictionary},\n  year = {2016},\n  pages = {2400-2404},\n  abstract = {In order to improve asset knowledge and avoid third part damages during road works, the localization of gas pipes in a non-destructive way has become a wide domain of research during these last years. The Ground Penetrating Radar (GPR) is used to detect buried gas pipes. However it does not directly provide a 3D position but a reflection map also called B-scan that the user must interpret. In order to facilitate the B-scan interpretation, we propose to use a dictionary of theoretical pipe signatures. One of the most popular method to compute the coefficients is the sparse coding. Nevertheless, clutter which is noticeable by its horizontal shape makes difficult to decompose it into sparse coefficients with this dictionary. Then a low-rank matrix constraint which models the clutter is applied in order to decompose the useful signal into sparse coefficients in a blind source separation framework. Our method has been applied to simulated and real data acquired on a test area. The proposed method presents satisfying qualitative and quantitative results.},\n  keywords = {blind source separation;buried object detection;encoding;ground penetrating radar;pipes;radar clutter;radar signal processing;road building;sparse matrices;buried gas pipe detection;reflection map;B-scan interpretation;sparse coding;radar clutter;low-rank matrix constraint;blind source separation framework;asset knowledge improvement;road work;third part damage avoidance;ground penetrating radar;nondestructive way;gas pipe localization;hyperbola dictionary;GPR useful signal sparse decomposition;Dictionaries;Shape;Clutter;Ground penetrating radar;Encoding;Signal processing algorithms;Matching pursuit algorithms;Gas pipes localization;GPR;Dictionary;Convolutional sparse coding;Nuclear norm;Blind source separation},\n  doi = {10.1109/EUSIPCO.2016.7760679},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251629.pdf},\n}\n\n
\n
\n\n\n
\n In order to improve asset knowledge and avoid third-party damage during road works, the non-destructive localization of gas pipes has become a broad research area in recent years. Ground Penetrating Radar (GPR) is used to detect buried gas pipes. However, it does not directly provide a 3D position but a reflection map, also called a B-scan, that the user must interpret. In order to facilitate the B-scan interpretation, we propose to use a dictionary of theoretical pipe signatures. One of the most popular methods for computing the coefficients is sparse coding. Nevertheless, the clutter, recognizable by its horizontal shape, makes it difficult to decompose the signal into sparse coefficients with this dictionary. A low-rank matrix constraint which models the clutter is therefore applied in order to decompose the useful signal into sparse coefficients within a blind source separation framework. Our method has been applied to simulated data and to real data acquired on a test area. The proposed method yields satisfactory qualitative and quantitative results.\n
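The low-rank-plus-sparse separation can be sketched with the generic solver below, which alternates singular-value thresholding and soft thresholding; it approximately solves a penalized form of min ||L||_* + lam * ||S||_1 s.t. B = L + S. This is a minimal stand-in, not the paper's convolutional-sparse-coding solver, and the thresholds are illustrative.

import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lowrank_sparse_split(B, lam=0.1, tau=1.0, iters=50):
    L = np.zeros_like(B)                      # low-rank clutter (horizontal)
    S = np.zeros_like(B)                      # sparse useful signal
    for _ in range(iters):
        # low-rank update: singular-value thresholding of B - S
        U, s, Vt = np.linalg.svd(B - S, full_matrices=False)
        L = U @ np.diag(soft(s, tau)) @ Vt
        # sparse update: entrywise soft thresholding of B - L
        S = soft(B - L, lam)
    return L, S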
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Distance estimation for marine vehicles using a monocular video camera.\n \n \n \n \n\n\n \n Gladstone, R.; Moshe, Y.; Barel, A.; and Shenhav, E.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2405-2409, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"DistancePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760680,\n  author = {R. Gladstone and Y. Moshe and A. Barel and E. Shenhav},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Distance estimation for marine vehicles using a monocular video camera},\n  year = {2016},\n  pages = {2405-2409},\n  abstract = {Distance estimation for marine vessels is of vital importance to unmanned ship vehicles (USVs) for navigation and collision prevention. This can be achieved by means such as radar or laser sighting. However, due to constraints of the USV, it may be desired to estimate distance using a monocular camera. In this paper, we propose a method that, given a video of a marine vehicle in a maritime environment and a tracker, estimates the distance of the tracked vehicle from the camera. The method detects the horizon and uses its distance as a reference. It detects the contact point of the vehicle with the sea surface by finding a maximally stable extremal region (MSER). Then, it relies on geometries of the earth and on optical properties of the camera to compute the distance. The method was tested on video footage of several sea maneuvers with an average error of 7.1%.},\n  keywords = {autonomous underwater vehicles;video signal processing;marine vehicle;distance estimation;monocular video camera;unmanned ship vehicle;USV;navigation prevention;collision prevention;horizon detection;maximally stable extremal region;MSER;sea maneuver video footage;vehicle contact point detection;Cameras;Sea surface;Estimation;Marine vehicles;Image edge detection;Vehicles;Radar tracking;unmanned ship vehicles;USV;distance estimation;horizon detection;marine navigation},\n  doi = {10.1109/EUSIPCO.2016.7760680},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251818.pdf},\n}\n\n
\n
\n\n\n
\n Distance estimation for marine vessels is of vital importance to unmanned ship vehicles (USVs) for navigation and collision prevention. This can be achieved by means such as radar or laser sighting. However, due to constraints of the USV, it may be desired to estimate distance using a monocular camera. In this paper, we propose a method that, given a video of a marine vehicle in a maritime environment and a tracker, estimates the distance of the tracked vehicle from the camera. The method detects the horizon and uses its distance as a reference. It detects the contact point of the vehicle with the sea surface by finding a maximally stable extremal region (MSER). Then, it relies on the geometry of the Earth and the optical properties of the camera to compute the distance. The method was tested on video footage of several sea maneuvers with an average error of 7.1%.\n
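A minimal sketch of the flat-sea pinhole geometry implied by the abstract (the paper additionally accounts for Earth curvature, which is omitted here): a contact point imaged dy pixels below the horizon line, seen by a camera at height h with focal length f in pixels, lies at range roughly h * f / dy.

def range_from_horizon(h_m, f_px, dy_px):
    """Flat-earth approximation: depression angle ~ dy/f ~ h/d."""
    if dy_px <= 0:
        raise ValueError("contact point must be below the horizon")
    return h_m * f_px / dy_px

# camera 5 m above the sea, 1500 px focal length, contact 25 px below horizon
print(range_from_horizon(5.0, 1500.0, 25.0))   # ~300 m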
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the use of auto-regressive modeling for arrhythmia detection.\n \n \n \n \n\n\n \n Adnane, M.; and Belouchrani, A.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2410-2414, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760681,\n  author = {M. Adnane and A. Belouchrani},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {On the use of auto-regressive modeling for arrhythmia detection},\n  year = {2016},\n  pages = {2410-2414},\n  abstract = {This paper investigates the use of an auto-regressive modeling method for the classification of heartbeats into two categories: Normal beats (N) and Ventricular ectopic beats (VEB). The method is based on an auto-regressive modeling (AR) of QRS complexes. Each heartbeat is characterized by its AR coefficients. Then, K-nearest neighbor (K-NN) classifier uses the AR coefficients to discriminate between N beats and VEB. In addition, the use of AR modeling prediction error en as a discriminating feature is investigated. Results show that the prediction error power (σ2p) enhances significantly the classification accuracy. The proposed classifier is compared to a classifier based on the use of RR timing information. Finally, the two classifiers are combined together where the classification result is given by the agreement of the two classifiers. The proposed AR modeling approach performs better than the RR interval-based classifier and their combination enhances the classification accuracy.},\n  keywords = {autoregressive processes;diseases;electrocardiography;feature extraction;medical signal processing;signal classification;autoregressive modeling;arrhythmia detection;heartbeat classification;ventricular ectopic beats;QRS complexes;AR coefficients;K-nearest neighbor classifier;KNN;AR modeling prediction error;discriminating feature;prediction error power;classification accuracy;RR timing information;RR interval-based classifier;Heart beat;Electrocardiography;Feature extraction;Testing;Heart rate variability;Pregnancy;Training},\n  doi = {10.1109/EUSIPCO.2016.7760681},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251833.pdf},\n}\n\n
\n
\n\n\n
\n This paper investigates the use of an auto-regressive modeling method for the classification of heartbeats into two categories: normal beats (N) and ventricular ectopic beats (VEB). The method is based on auto-regressive (AR) modeling of QRS complexes. Each heartbeat is characterized by its AR coefficients. Then, a K-nearest neighbor (K-NN) classifier uses the AR coefficients to discriminate between N beats and VEBs. In addition, the use of the AR modeling prediction error e(n) as a discriminating feature is investigated. Results show that the prediction error power (σ²p) significantly enhances the classification accuracy. The proposed classifier is compared to a classifier based on RR timing information. Finally, the two classifiers are combined so that the classification result is given by the agreement of the two classifiers. The proposed AR modeling approach performs better than the RR interval-based classifier, and their combination further enhances the classification accuracy.\n
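A minimal sketch of the feature extraction, assuming a plain Yule-Walker estimate and an arbitrary model order: the AR coefficients of a QRS segment plus the prediction error power, which the abstract singles out as a strong discriminating feature, are stacked into a K-NN feature vector.

import numpy as np
from scipy.linalg import solve_toeplitz

def ar_features(qrs, order=4):
    """AR coefficients and prediction error power of one QRS segment."""
    x = qrs - qrs.mean()
    r = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)  # autocorr
    a = solve_toeplitz(r[:order], r[1:order + 1])   # Yule-Walker equations
    sigma2 = r[0] - a @ r[1:order + 1]              # prediction error power
    return np.concatenate([a, [sigma2]])            # K-NN feature vector

qrs = np.sin(np.linspace(0, 3 * np.pi, 40)) + 0.05 * np.random.randn(40)
print(ar_features(qrs))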
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On fractional delay interpolation for local wave field synthesis.\n \n \n \n \n\n\n \n Winter, F.; and Spors, S.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2415-2419, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760682,\n  author = {F. Winter and S. Spors},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {On fractional delay interpolation for local wave field synthesis},\n  year = {2016},\n  pages = {2415-2419},\n  abstract = {Wave Field Synthesis aims at the accurate reproduction of a sound field inside an extended listening area which is surrounded by individually driven loudspeakers. Recently a Local Wave Field Synthesis technique has been published which utilizes focused sources as a distribution of virtual loudspeakers in order to increase the reproduction accuracy in a particular local region. Similar to conventional Wave Field Synthesis, this technique relies heavily on delaying and weighting the input signals of the virtual sound sources. As these delays are in general not an integer multiple of the input signals' sample rate, delay interpolation is necessary. This paper analyses in how far the accuracy of the delay interpolation influences the spectral properties of the synthesised sound field. The results show, that an upsampling of the virtual source's input signal is an computationally efficient tool which leads to a significant increase of accuracy.},\n  keywords = {acoustic field;acoustic signal processing;interpolation;loudspeakers;signal synthesis;sound reproduction;fractional delay interpolation;local wave field synthesis technique;virtual loudspeaker distribution;sound field synthesis spectral property;virtual source input signal upsampling;sound field reproduction;Delays;Interpolation;Loudspeakers;Finite impulse response filters;IIR filters;Europe;fractional delay filter;delay interpolation;sound field synthesis;wave field synthesis;local wave field synthesis;Lagrange interpolation},\n  doi = {10.1109/EUSIPCO.2016.7760682},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251935.pdf},\n}\n\n
\n
\n\n\n
\n Wave Field Synthesis aims at the accurate reproduction of a sound field inside an extended listening area which is surrounded by individually driven loudspeakers. Recently, a Local Wave Field Synthesis technique has been published which utilizes focused sources as a distribution of virtual loudspeakers in order to increase the reproduction accuracy in a particular local region. Similar to conventional Wave Field Synthesis, this technique relies heavily on delaying and weighting the input signals of the virtual sound sources. As these delays are in general not an integer multiple of the input signals' sample period, delay interpolation is necessary. This paper analyses the extent to which the accuracy of the delay interpolation influences the spectral properties of the synthesised sound field. The results show that upsampling the virtual source's input signal is a computationally efficient tool which leads to a significant increase in accuracy.\n
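For reference, a minimal sketch of a Lagrange fractional-delay FIR filter, the kind of delay interpolation whose accuracy the paper analyses; the order and the usage example are illustrative.

import numpy as np

def lagrange_fd(delay, order=3):
    """Coefficients of an order-N Lagrange interpolator for a fractional
    delay given in samples (best behaved for delay near order / 2)."""
    h = np.ones(order + 1)
    for k in range(order + 1):
        for m in range(order + 1):
            if m != k:
                h[k] *= (delay - m) / (k - m)
    return h

x = np.random.randn(256)
y = np.convolve(x, lagrange_fd(1.4))   # delays x by roughly 1.4 samples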
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Keypoint detection in RGBD images based on an efficient viewpoint-covariant multiscale representation.\n \n \n \n \n\n\n \n Karpushin, M.; Valenzise, G.; and Dufaux, F.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2420-2424, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"KeypointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760683,\n  author = {M. Karpushin and G. Valenzise and F. Dufaux},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Keypoint detection in RGBD images based on an efficient viewpoint-covariant multiscale representation},\n  year = {2016},\n  pages = {2420-2424},\n  abstract = {Texture+depth (RGBD) images provide information about the geometry of a scene, which could help improve current image matching performance, e.g., in presence of large viewpoint changes. While depth has been mainly used for processing keypoint descriptors, in this paper we focus on the keypoint detection problem. In order to produce a computationally efficient viewpoint-covariant multiscale representation, we design an image smoothing procedure which locally smooths a texture image based on the corresponding depth. This yields an approximated scale space, where we can find keypoints using a multiscale detector approach. Our experiments on both synthetic and real-world images show substantial gains with respect to 2D and other RGBD feature extraction approaches.},\n  keywords = {feature extraction;image matching;image representation;image texture;keypoint detection;RGBD images;viewpoint-covariant multiscale representation;image matching;image smoothing procedure;texture image;approximated scale space;multiscale detector approach;synthetic images;real-world images;feature extraction;Europe;Signal processing;Conferences;RGBD;texture+depth;scale space;keypoint detection;visual odometry},\n  doi = {10.1109/EUSIPCO.2016.7760683},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256276.pdf},\n}\n\n
\n
\n\n\n
\n Texture+depth (RGBD) images provide information about the geometry of a scene, which could help improve current image matching performance, e.g., in the presence of large viewpoint changes. While depth has been mainly used for processing keypoint descriptors, in this paper we focus on the keypoint detection problem. In order to produce a computationally efficient viewpoint-covariant multiscale representation, we design an image smoothing procedure which locally smooths a texture image based on the corresponding depth. This yields an approximated scale space, where we can find keypoints using a multiscale detector approach. Our experiments on both synthetic and real-world images show substantial gains with respect to 2D and other RGBD feature extraction approaches.\n
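A crude sketch of depth-dependent smoothing: each pixel is blurred with a Gaussian whose width is driven by the local depth, approximated here by selecting from a small stack of uniformly blurred images. The mapping from depth to blur width (simply proportional to normalized depth here) and all names are illustrative assumptions; the paper derives its own, more refined procedure.

import numpy as np
from scipy.ndimage import gaussian_filter

def depth_adaptive_smooth(texture, depth, base_sigma=2.0, levels=4):
    z = (depth - depth.min()) / (np.ptp(depth) + 1e-9)   # normalize to [0, 1]
    sigmas = base_sigma * (1.0 + np.arange(levels))
    stack = np.stack([gaussian_filter(texture, s) for s in sigmas])
    idx = np.clip((z * (levels - 1)).astype(int), 0, levels - 1)
    return np.take_along_axis(stack, idx[None], axis=0)[0]

texture = np.random.rand(48, 48)
depth = np.tile(np.linspace(1.0, 4.0, 48), (48, 1))
print(depth_adaptive_smooth(texture, depth).shape)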
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interpolation-based image inpainting in color images using high dimensional model representation.\n \n \n \n \n\n\n \n Karaca, E.; and Tunga, M. A.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2425-2429, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Interpolation-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760684,\n  author = {E. Karaca and M. A. Tunga},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Interpolation-based image inpainting in color images using high dimensional model representation},\n  year = {2016},\n  pages = {2425-2429},\n  abstract = {Image inpainting is the process of filling missing or fixing corrupted regions in a given image. The intensity values of the pixels in missing area are expected to be associated with the pixels in the surrounding area. Interpolation-based methods that can solve the problem with a high accuracy may become inefficient when the dimension of the data increases. We solve this problem by representing images with lower dimensions using High Dimensional Model Representation method. We then perform Lagrange interpolation on the lower dimensional data to find the intensity values of the missing pixels. In order to use High Dimensional Model Representation method and to improve the accuracy of Lagrange interpolation, we also propose a procedure that decompose missing regions into smaller ones and perform inpainting hierarchically starting from the smallest region. Experimental results demonstrate that the proposed method produces better results than the variational and exemplar-based inpainting approaches in most of the test images.},\n  keywords = {image colour analysis;image representation;image restoration;interpolation;interpolation-based image inpainting;color images;high dimensional model representation;Lagrange interpolation;exemplar-based inpainting;Interpolation;Mathematical model;Europe;Signal processing;Color;Signal processing algorithms;Gray-scale},\n  doi = {10.1109/EUSIPCO.2016.7760684},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256295.pdf},\n}\n\n
\n
\n\n\n
\n Image inpainting is the process of filling in missing regions or fixing corrupted regions in a given image. The intensity values of the pixels in the missing area are expected to be associated with the pixels in the surrounding area. Interpolation-based methods that can solve the problem with high accuracy may become inefficient as the dimension of the data increases. We solve this problem by representing images in lower dimensions using the High Dimensional Model Representation method. We then perform Lagrange interpolation on the lower dimensional data to find the intensity values of the missing pixels. In order to use the High Dimensional Model Representation method and to improve the accuracy of the Lagrange interpolation, we also propose a procedure that decomposes missing regions into smaller ones and performs inpainting hierarchically, starting from the smallest region. Experimental results demonstrate that the proposed method produces better results than the variational and exemplar-based inpainting approaches on most of the test images.\n
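A minimal 1-D sketch of the Lagrange step only (the paper applies it to HDMR-reduced components, which is not reproduced here): a missing pixel is estimated from the known samples along a line via Lagrange interpolation. Names and the toy data are illustrative.

import numpy as np

def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange polynomial through (xs, ys) at x."""
    total = 0.0
    for k in range(len(xs)):
        lk = 1.0
        for m in range(len(xs)):
            if m != k:
                lk *= (x - xs[m]) / (xs[k] - xs[m])
        total += ys[k] * lk
    return total

row = np.array([10.0, 12.0, np.nan, 18.0, 22.0])   # one image row with a hole
known = np.where(~np.isnan(row))[0]
print(lagrange_interp(known, row[known], 2))       # estimate for the hole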
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Combining feature-based and model-based approaches for robust ellipse detection.\n \n \n \n \n\n\n \n Cakir, H. I.; Benligiray, B.; and Topal, C.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2430-2434, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"CombiningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760685,\n  author = {H. I. Cakir and B. Benligiray and C. Topal},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Combining feature-based and model-based approaches for robust ellipse detection},\n  year = {2016},\n  pages = {2430-2434},\n  abstract = {Fast and robust ellipse detection is a vital step in many image processing and computer vision applications. Two main approaches exist for ellipse detection, i.e., model-based and feature-based. Model-based methods require much more computation, but they can perform better in occlusions. Feature-based approaches are fast but may perform insufficient in cluttered cases. In this study, we propose an hybrid method which combines both approaches to accelerate the process without compromising accuracy. We extract elliptical arcs to narrow down search space by obtaining seeds for prospective ellipses. For each seed arc, we compute a limited search region consisting of hypothetical ellipses that each can be formed with that seed. Later, we vote them on the edge image to determine best hypothesis among the all, if exists. We tested the proposed algorithm on a public dataset and promising results are obtained compare to state of the art methods in the literature.},\n  keywords = {computer vision;feature extraction;feature-based approach;model-based approach;robust ellipse detection;image processing applications;computer vision applications;hybrid method;elliptical arc extraction;hypothetical ellipse;public dataset;Image edge detection;Image segmentation;Computational modeling;Feature extraction;Signal processing algorithms;Genetic algorithms;Europe},\n  doi = {10.1109/EUSIPCO.2016.7760685},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256298.pdf},\n}\n\n
\n
\n\n\n
\n Fast and robust ellipse detection is a vital step in many image processing and computer vision applications. Two main approaches exist for ellipse detection: model-based and feature-based. Model-based methods require much more computation, but they can perform better under occlusion. Feature-based approaches are fast but may perform poorly in cluttered scenes. In this study, we propose a hybrid method which combines both approaches to accelerate the process without compromising accuracy. We extract elliptical arcs to narrow down the search space by obtaining seeds for prospective ellipses. For each seed arc, we compute a limited search region consisting of the hypothetical ellipses that can be formed with that seed. We then vote for them on the edge image to determine the best hypothesis among all, if one exists. We tested the proposed algorithm on a public dataset and obtained promising results compared to state-of-the-art methods in the literature.\n
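A minimal sketch of one ingredient only, not the paper's full arc-seeded hypothesize-and-verify pipeline: fitting a conic through arc points as the smallest singular vector of the design matrix, the kind of fit that turns a seed arc into a candidate ellipse. Function names and the synthetic arc are illustrative.

import numpy as np

def fit_conic(pts):
    """Least-squares conic a x^2 + b xy + c y^2 + d x + e y + f = 0."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]        # coefficients (a, b, c, d, e, f), up to scale

t = np.linspace(0, np.pi / 2, 30)              # a quarter arc as the seed
pts = np.column_stack([4 * np.cos(t) + 1, 2 * np.sin(t) - 3])
a, b, c, d, e, f = fit_conic(pts)
print(b * b - 4 * a * c < 0)                   # True iff the conic is an ellipse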
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Unsupervised change detection between multi-sensor high resolution satellite images.\n \n \n \n \n\n\n \n Liu, G.; Delon, J.; Gousseau, Y.; and Tupin, F.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2435-2439, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"UnsupervisedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760686,\n  author = {G. Liu and J. Delon and Y. Gousseau and F. Tupin},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Unsupervised change detection between multi-sensor high resolution satellite images},\n  year = {2016},\n  pages = {2435-2439},\n  abstract = {In this paper, we present a novel unsupervised framework for change detection between two high resolution remote sensing images. Thanks to the use of local descriptors, the method does not need any image co-registration and is able to identify changes even with images acquired from different incidence angles and by different sensors. Local descriptors are used to both locally align images and identify changes. The setting of thresholds as well as the final grouping of isolated changes are performed thanks to a contrario statistical procedures. This provides a complete and automatic pipeline, whose efficiency is shown through several challenging pairs of high resolution urban images, acquired through different satellites.},\n  keywords = {geophysical image processing;image registration;image resolution;image segmentation;remote sensing;statistical analysis;Unsupervised change detection;multisensor high resolution satellite image;high resolution urban image;automatic pipeline;contrario statistical procedure;thresholds setting;local descriptor;incidence angle;high resolution remote sensing image;Transforms;Image resolution;Remote sensing;Image sensors;Sensors;Buildings;Robustness},\n  doi = {10.1109/EUSIPCO.2016.7760686},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256321.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present a novel unsupervised framework for change detection between two high resolution remote sensing images. Thanks to the use of local descriptors, the method does not need any image co-registration and is able to identify changes even with images acquired from different incidence angles and by different sensors. Local descriptors are used both to locally align images and to identify changes. The setting of thresholds as well as the final grouping of isolated changes are performed thanks to an a contrario statistical procedure. This provides a complete and automatic pipeline, whose efficiency is shown through several challenging pairs of high resolution urban images acquired by different satellites.\n
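A generic sketch of an a contrario decision (the paper's specific background model is not reproduced here): a group of k change detections among n tested pixels is kept only if its number of false alarms, NFA = N_tests * P(Binomial(n, p0) >= k), falls below 1. The numbers below are illustrative.

from math import comb

def nfa(n_tests, n, k, p0):
    """Number of false alarms under a Binomial(n, p0) background model."""
    tail = sum(comb(n, i) * p0 ** i * (1 - p0) ** (n - i)
               for i in range(k, n + 1))
    return n_tests * tail

# 30 detections among 50 pixels, background probability 0.1: significant?
print(nfa(n_tests=1e4, n=50, k=30, p0=0.1) < 1.0)   # True -> keep the change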
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Lossless intra coding in HEVC with integer-to-integer DST.\n \n \n \n \n\n\n \n Kamisli, F.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2440-2444, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"LosslessPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760687,\n  author = {F. Kamisli},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Lossless intra coding in HEVC with integer-to-integer DST},\n  year = {2016},\n  pages = {2440-2444},\n  abstract = {It is desirable to support efficient lossless coding within video coding standards, which are primarily designed for lossy coding, with as little modification as possible. A simple approach is to skip transform and quantization, and directly entropy code the prediction residual, but this is inefficient for compression. A more efficient and popular approach is to process the residual block with DPCM prior to entropy coding. This paper explores an alternative approach based on processing the residual block with integer-to-integer (i2i) transforms. I2i transforms map integers to integers, however, unlike the integer transforms used in HEVC for lossy coding, they do not increase the dynamic range at the output and can be used in lossless coding. We focus on lossless intra coding and use both an i2i DCT from the literature and a novel i2i approximation of the odd type-3 DST. Experiments with the HEVC reference software show improved results.},\n  keywords = {discrete transforms;entropy codes;video coding;lossless intra coding;video coding standards;discrete sine transforms;integer-to-integer DST;directly entropy code;prediction residual;residual block;I2i transforms;integer transforms;i2i approximation;odd type-3 DST;high efficiency video coding;HEVC reference software;Encoding;Discrete cosine transforms;Image coding;Standards;Entropy;Pulse modulation;Image coding;Video Coding;Discrete cosine transforms;Lossless coding;HEVC},\n  doi = {10.1109/EUSIPCO.2016.7760687},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570256403.pdf},\n}\n\n
\n
\n\n\n
\n It is desirable to support efficient lossless coding within video coding standards, which are primarily designed for lossy coding, with as little modification as possible. A simple approach is to skip transform and quantization and directly entropy code the prediction residual, but this is inefficient for compression. A more efficient and popular approach is to process the residual block with DPCM prior to entropy coding. This paper explores an alternative approach based on processing the residual block with integer-to-integer (i2i) transforms. I2i transforms map integers to integers; however, unlike the integer transforms used in HEVC for lossy coding, they do not increase the dynamic range at the output and can therefore be used in lossless coding. We focus on lossless intra coding and use both an i2i DCT from the literature and a novel i2i approximation of the odd type-3 DST. Experiments with the HEVC reference software show improved results.\n
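The lifting idea behind i2i transforms can be shown with the simplest possible case, the 2-point integer Haar (S-transform), rather than the paper's i2i DST: rounding inside each lifting step keeps the output integer while every step remains exactly invertible.

def i2i_haar_fwd(x0, x1):
    d = x1 - x0                 # lifting step 1: difference
    s = x0 + (d >> 1)           # lifting step 2 with integer rounding
    return s, d

def i2i_haar_inv(s, d):
    x0 = s - (d >> 1)           # undo step 2 exactly (same rounding)
    x1 = x0 + d                 # undo step 1 exactly
    return x0, x1

# integers in, integers out, perfectly reconstructible
for x0, x1 in [(7, 3), (-4, 9)]:
    assert i2i_haar_inv(*i2i_haar_fwd(x0, x1)) == (x0, x1)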
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Piecewise linear regression based on adaptive tree structure using second order methods.\n \n \n \n \n\n\n \n Civek, B. C.; Delibalta, I.; and Kozat, S. S.\n\n\n \n\n\n\n In 2016 24th European Signal Processing Conference (EUSIPCO), pages 2445-2449, Aug 2016. \n \n\n\n\n
\n\n\n\n \n \n \"PiecewisePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{7760688,\n  author = {B. C. Civek and I. Delibalta and S. S. Kozat},\n  booktitle = {2016 24th European Signal Processing Conference (EUSIPCO)},\n  title = {Piecewise linear regression based on adaptive tree structure using second order methods},\n  year = {2016},\n  pages = {2445-2449},\n  abstract = {We introduce a highly efficient online nonlinear regression algorithm. We process the data in a truly online manner such that no storage is needed, i.e., the data is discarded after used. For nonlinear modeling we use a hierarchical piecewise linear approach based on the notion of decision trees, where the regressor space is adaptively partitioned based directly on the performance. As the first time in the literature, we learn both the piecewise linear partitioning of the regressor space as well as the linear models in each region using highly effective second order methods, i.e., Newton-Raphson Methods. Hence, we avoid the well known over fitting issues and achieve substantial performance compared to the state of the art. We demonstrate our gains over the well known benchmark data sets and provide performance results in an individual sequence manner guaranteed to hold without any statistical assumptions.},\n  keywords = {Newton-Raphson method;piecewise linear techniques;regression analysis;signal processing;trees (mathematics);second order method;adaptive tree structure;piecewise linear regression;signal processing;benchmark data set;fitting issues;Newton-Raphson method;piecewise linear partitioning;regressor space;hierarchical piecewise linear approach;highly efficient online nonlinear regression algorithm;Signal processing algorithms;Partitioning algorithms;Two dimensional displays;Adaptation models;Signal processing;Data models;Europe;Hierarchical tree;big data;online learning;piecewise linear regression;Newton method},\n  doi = {10.1109/EUSIPCO.2016.7760688},\n  issn = {2076-1465},\n  month = {Aug},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2016/papers/1570251103.pdf},\n}\n
\n
\n\n\n
\n We introduce a highly efficient online nonlinear regression algorithm. We process the data in a truly online manner such that no storage is needed, i.e., the data is discarded after use. For nonlinear modeling, we use a hierarchical piecewise linear approach based on the notion of decision trees, where the regressor space is adaptively partitioned based directly on performance. For the first time in the literature, we learn both the piecewise linear partitioning of the regressor space and the linear models in each region using highly effective second order methods, i.e., Newton-Raphson methods. Hence, we avoid the well-known overfitting issues and achieve substantial performance gains compared to the state of the art. We demonstrate our gains on well-known benchmark data sets and provide performance results in an individual sequence manner, guaranteed to hold without any statistical assumptions.\n
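A minimal sketch, not the authors' full adaptive-tree algorithm: a fixed depth-1 partition (the sign of the first feature) with one linear model per region, each updated online by recursive least squares, i.e., a second-order (Newton-type) step rather than a plain gradient step. The class and function names and all constants are illustrative.

import numpy as np

class RegionRLS:
    def __init__(self, dim, lam=0.99):
        self.w = np.zeros(dim)
        self.P = 1e3 * np.eye(dim)      # inverse-Hessian (covariance) estimate
        self.lam = lam                  # forgetting factor

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)    # Kalman-style gain
        e = d - self.w @ x
        self.w += k * e                 # second-order update of the model
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

def fit_piecewise(X, y):
    models = [RegionRLS(X.shape[1]), RegionRLS(X.shape[1])]
    for x, d in zip(X, y):
        models[int(x[0] > 0)].update(x, d)   # route to a region, then update
    return models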
\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n\n\n \n\n \n \n \n \n\n
\n"}; document.write(bibbase_data.data);