2014 (508)

Sparse linear parametric modeling of room acoustics with Orthonormal Basis Functions. Vairetti, G.; van Waterschoot, T.; Moonen, M.; Catrysse, M.; and Jensen, S. H. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2014.

@InProceedings{6951959,
  author = {G. Vairetti and T. {van Waterschoot} and M. Moonen and M. Catrysse and S. H. Jensen},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Sparse linear parametric modeling of room acoustics with Orthonormal Basis Functions},
  year = {2014},
  pages = {1-5},
  abstract = {Orthonormal Basis Function (OBF) models provide a stable and well-conditioned representation of a linear system. When used for the modeling of room acoustics, useful information about the true dynamics of the system can be introduced by a proper selection of a set of poles, which however appear non-linearly in the model. A novel method for selecting the poles is proposed, which bypasses the non-linear problem by exploiting the concept of sparsity and by using convex optimization. The model obtained has a longer impulse response compared to the all-zero model with the same number of parameters, without introducing substantial error in the early response. The method also makes it possible to increase the resolution in a specified frequency region, while still being able to approximate the spectral envelope in other regions.},
  keywords = {architectural acoustics;convex programming;convex optimization;nonlinear problem;orthonormal basis functions;room acoustics;sparse linear parametric modeling;Approximation methods;Acoustics;Vectors;Accuracy;Resonant frequency;Convex functions;Finite impulse response filters;Parametric models;Orthonormal Basis Functions;Kautz filter;Room acoustics;LASSO},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921729.pdf},
}
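
Background for the entry above: an OBF model expands a room impulse response in orthonormal filters generated from a fixed pole set. Below is a minimal numpy/scipy sketch of Takenaka-Malmquist (Kautz-type) basis responses for real poles, with a least-squares fit by inner products; it illustrates the model class only, not the paper's sparsity-driven pole-selection method.

```python
import numpy as np
from scipy.signal import lfilter

def obf_basis(poles, n):
    """Impulse responses of Takenaka-Malmquist orthonormal basis functions.

    poles: real poles with |p| < 1 (complex poles would be used in
    conjugate pairs, which this sketch omits).
    """
    x = np.zeros(n)
    x[0] = 1.0                                  # unit impulse
    basis = []
    for p in poles:
        g = np.sqrt(1.0 - p ** 2)               # unit-energy normalization
        basis.append(g * lfilter([1.0], [1.0, -p], x))
        # Cascade the all-pass section (z^-1 - p)/(1 - p z^-1) before
        # generating the next basis function.
        x = lfilter([-p, 1.0], [1.0, -p], x)
    return np.array(basis)                      # rows: basis impulse responses

# Fit of a measured impulse response h (rows are near-orthonormal for large n,
# so the model coefficients are simply inner products):
# B = obf_basis([0.5, 0.9, -0.7], len(h)); h_model = (B @ h) @ B
```
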
A dynamic screening principle for the Lasso. Bonnefoy, A.; Emiya, V.; Ralaivola, L.; and Gribonval, R. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 6-10, Sep. 2014.

@InProceedings{6951960,
  author = {A. Bonnefoy and V. Emiya and L. Ralaivola and R. Gribonval},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {A dynamic screening principle for the Lasso},
  year = {2014},
  pages = {6-10},
  abstract = {The Lasso is an optimization problem devoted to finding a sparse representation of some signal with respect to a predefined dictionary. An original and computationally-efficient method is proposed here to solve this problem, based on a dynamic screening principle. It makes it possible to accelerate a large class of optimization algorithms by iteratively reducing the size of the dictionary during the optimization process, discarding elements that are provably known not to belong to the solution of the Lasso. The iterative reduction of the dictionary is what we call dynamic screening. As this screening step is inexpensive, the computational cost of the algorithm using our dynamic screening strategy is lower than that of the base algorithm. Numerical experiments on synthetic and real data support the relevance of this approach.},
  keywords = {iterative methods;optimisation;signal representation;dynamic screening principle;Lasso;optimization problem;sparse signal representation;predefined dictionary;computationally-efficient method;computational cost;iterative reduction;Abstracts;Optimization;Screening test;Dynamic screening;Lasso;First-order algorithms;ISTA},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925233.pdf},
}
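
To make the screening principle concrete, here is a minimal numpy sketch of ISTA combined with a dynamic safe-sphere test in the spirit of the abstract above; the exact tests in the paper may differ, and a real dictionary with unit-norm columns is assumed.

```python
import numpy as np

def ista_dynamic_screening(D, y, lam, n_iter=200):
    """ISTA for min 0.5||y - Dx||^2 + lam||x||_1 with dynamic screening:
    atoms provably inactive at the optimum are discarded on the fly."""
    m, n = D.shape
    active = np.ones(n, dtype=bool)
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / Lipschitz constant
    c = y / lam                                  # center of the safe sphere
    norms = np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        r = y - D[:, active] @ x[active]
        # Dual-feasible point from the residual; its distance to c bounds
        # the distance of the dual optimum to c (a safe-sphere radius).
        theta = r / max(lam, np.max(np.abs(D.T @ r)))
        radius = np.linalg.norm(theta - c)
        # Atom j is provably inactive if |d_j^T c| + radius*||d_j|| < 1.
        active &= (np.abs(D.T @ c) + radius * norms >= 1.0)
        x[~active] = 0.0
        # One ISTA step restricted to the surviving atoms.
        Da = D[:, active]
        z = x[active] - step * (Da.T @ (Da @ x[active] - y))
        x[active] = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```
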
Locally linear embedding-based prediction for 3D holoscopic image coding using HEVC. Lucas, L. F. R.; Conti, C.; Nunes, P.; Soares, L. D.; Rodrigues, N. M. M.; Pagliari, C. L.; da Silva, E. A. B.; and de Faria, S. M. M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 11-15, Sep. 2014.

@InProceedings{6951961,
  author = {L. F. R. Lucas and C. Conti and P. Nunes and L. D. Soares and N. M. M. Rodrigues and C. L. Pagliari and E. A. B. {da Silva} and S. M. M. {de Faria}},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Locally linear embedding-based prediction for 3D holoscopic image coding using HEVC},
  year = {2014},
  pages = {11-15},
  abstract = {Holoscopic imaging is a prospective acquisition and display solution for providing true 3D content and fatigue-free 3D visualization. However, efficient coding schemes for this particular type of content are needed to enable proper storage and delivery of the large amount of data involved in these systems. Therefore, this paper proposes an alternative HEVC-based coding scheme for efficient representation of holoscopic images. In this scheme, some directional intra prediction modes of the HEVC are replaced by a more efficient prediction framework based on locally linear embedding techniques. Experimental results show the advantage of the proposed prediction for 3D holoscopic image coding, compared to the reference HEVC standard as well as previously presented approaches in this field.},
  keywords = {video coding;locally linear embedding-based prediction;3D holoscopic image coding;high efficiency video coding;3D content;fatigue-free 3D visualization;directional intraprediction modes;reference HEVC standard;Three-dimensional displays;Image coding;Encoding;Video coding;Imaging;Standards;Rate-distortion;3D holoscopic image coding;image prediction;locally linear embedding;HEVC},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925045.pdf},
}
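
A sketch of the prediction idea, assuming the k nearest candidate patches have already been found: compute locally-linear-embedding weights that reconstruct the causal template of the current block from the candidates' templates, then apply the same weights to the candidates' co-located blocks. This illustrates the principle only, not the actual HEVC mode integration.

```python
import numpy as np

def lle_predict(template, cand_templates, cand_blocks, reg=1e-3):
    """LLE-style prediction: weights w minimize ||template - w @ cand_templates||
    subject to sum(w) = 1; the block is then predicted as w @ cand_blocks.

    template:        (t,)   causal neighbourhood of the block to predict
    cand_templates:  (k, t) templates of the k best candidate patches
    cand_blocks:     (k, b) the candidates' co-located blocks
    """
    Z = cand_templates - template               # shift candidates to the template
    G = Z @ Z.T                                 # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(G))     # regularize (G may be singular)
    w = np.linalg.solve(G, np.ones(len(G)))     # solve G w = 1
    w /= w.sum()                                # enforce the affine constraint
    return w @ cand_blocks
```
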
Robust linear regression analysis - The greedy way. Papageorgiou, G.; Bouboulis, P.; Theodoridis, S.; and Themelis, K. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 16-20, Sep. 2014.

@InProceedings{6951962,
  author = {G. Papageorgiou and P. Bouboulis and S. Theodoridis and K. Themelis},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Robust linear regression analysis - The greedy way},
  year = {2014},
  pages = {16-20},
  abstract = {In this paper, the task of robust estimation in the presence of outliers is presented. Outliers are explicitly modeled by employing sparsity arguments. A novel efficient algorithm, based on the greedy Orthogonal Matching Pursuit (OMP) scheme, is derived. Theoretical results concerning the recovery of the solution as well as simulation experiments, which verify the comparative advantages of the new technique, are discussed.},
  keywords = {estimation theory;iterative methods;regression analysis;signal processing;time-frequency analysis;robust linear regression analysis;greedy way;robust estimation;outlier presence;sparsity arguments;greedy orthogonal matching pursuit scheme;OMP scheme;Robustness;Noise;Vectors;Matching pursuit algorithms;Complexity theory;Greedy algorithms;Optimization;Greedy Algorithm for Robust Denoising (GARD);Robust based Regression;Robust Least Squares;Greedy algorithms;Outlier detection},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925489.pdf},
}
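
The greedy principle can be sketched in a few lines: augment the regression with identity atoms so that y = X beta + u with a sparse outlier vector u, and grow the outlier support by repeatedly flagging the sample with the largest residual (an OMP-style selection). The paper's GARD algorithm and its recovery guarantees are richer; this is an illustrative sketch with a fixed outlier budget.

```python
import numpy as np

def greedy_robust_regression(X, y, n_outliers):
    """Fit y = X @ beta + u with u sparse: greedily flag outlier samples."""
    n, p = X.shape
    A = X.astype(float)
    support = []
    for _ in range(n_outliers):
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef
        j = int(np.argmax(np.abs(r)))        # worst-explained sample
        support.append(j)
        e = np.zeros((n, 1))
        e[j, 0] = 1.0
        A = np.hstack([A, e])                # identity atom absorbs sample j
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:p], support                 # robust beta, outlier indices
```
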
Feature enhancement for robust speech recognition on smartphones with dual-microphone. López-Espejo, I.; Gomez, A. M.; González, J. A.; and Peinado, A. M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 21-25, Sep. 2014.

@InProceedings{6951963,
  author = {I. López-Espejo and A. M. Gomez and J. A. González and A. M. Peinado},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Feature enhancement for robust speech recognition on smartphones with dual-microphone},
  year = {2014},
  pages = {21-25},
  abstract = {The latest smartphones often have more than one microphone in order to perform noise reduction. Although research on speech enhancement is already exploiting this new feature, robust speech recognition is still not benefiting from it. In this paper we propose two feature enhancement methods especially developed for the case of a smartphone with a dual-microphone operating in an adverse acoustic environment. In order to test these proposals, we have already developed a new experimental framework which includes a noisy speech database (based on AURORA2) which emulates the acquisition of dual-microphone data. Our experimental results show a clear improvement in terms of word accuracy in comparison with both using a power level difference-based speech enhancement algorithm and a single channel feature compensation approach.},
  keywords = {microphones;smart phones;speech enhancement;speech recognition;single-channel feature compensation approach;power level difference-based speech enhancement algorithm;word accuracy;dual-microphone data acquisition;AURORA2;noisy speech database;adverse acoustic environment;feature enhancement method;speech enhancement;noise reduction;dual-microphone smartphone;robust speech recognition;feature enhancement;Speech;Noise;Speech recognition;Microphones;Speech enhancement;Noise measurement;Smart phones;Dual-microphone;Robust speech recognition;Feature enhancement;Smartphone;AURORA2-2C},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922257.pdf},
}
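
For orientation, the power-level-difference (PLD) baseline mentioned in the abstract exploits the fact that mouth-originated speech reaches the front microphone with noticeably more power than the rear one, while distant noise arrives with similar power at both. A deliberately simplified PLD-style gain, not the authors' proposed feature-enhancement methods:

```python
import numpy as np

def pld_gain(S_front, S_rear, floor=0.1):
    """Per-bin gain from the normalized power level difference of two mics.

    S_front, S_rear: STFTs of the two microphone signals (freq x frames).
    Near-field speech gives PLD close to 1, diffuse noise close to 0.
    """
    p1 = np.abs(S_front) ** 2
    p2 = np.abs(S_rear) ** 2
    pld = (p1 - p2) / (p1 + p2 + 1e-12)
    return np.clip(pld, floor, 1.0)   # apply as S_enhanced = gain * S_front
```
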
A spatially constrained low-rank matrix factorization for the functional parcellation of the brain. Benichoux, A.; and Blumensath, T. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 26-30, Sep. 2014.

@InProceedings{6951964,
  author = {A. Benichoux and T. Blumensath},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {A spatially constrained low-rank matrix factorization for the functional parcellation of the brain},
  year = {2014},
  pages = {26-30},
  abstract = {We propose a new matrix recovery framework to partition brain activity using time series of resting-state functional Magnetic Resonance Imaging (fMRI). Spatial clusters are obtained with a new low-rank factorization algorithm that offers the ability to add different types of constraints. As an example we add a total variation type cost function in order to exploit neighborhood constraints. We first validate the performance of our algorithm on simulated data, which allows us to show that the neighborhood constraint improves the recovery in noisy or undersampled set-ups. Then we conduct experiments on real-world data, where we simulated an accelerated acquisition by randomly undersampling the time series. The obtained parcellations are reproducible when analysing data from different sets of individuals, and the estimation is robust to undersampling.},
  keywords = {biomedical MRI;brain;matrix decomposition;medical image processing;pattern clustering;time series;real-world data;accelerated acquisition;neighborhood constraints;total variation type cost function;spatial clusters;fMRI;resting-state functional magnetic resonance imaging;time series;brain activity;matrix recovery framework;functional parcellation;spatially constrained low-rank matrix factorization;Clustering algorithms;Sparse matrices;Matrix decomposition;Smoothing methods;Time series analysis;Optimization;Approximation methods;Clustering;Low-rank;Sparse;Matrix recovery;Brain parcellation;Neuroimaging;fMRI},
  issn = {2076-1465},
  month = {Sep.},
}
Model selection for hemodynamic brain parcellation in fMRI. Albughdadi, M.; Chaari, L.; Forbes, F.; Tourneret, J.; and Ciuciu, P. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 31-35, Sep. 2014.

@InProceedings{6951965,
  author = {M. Albughdadi and L. Chaari and F. Forbes and J. Tourneret and P. Ciuciu},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Model selection for hemodynamic brain parcellation in fMRI},
  year = {2014},
  pages = {31-35},
  abstract = {Brain parcellation into a number of hemodynamically homogeneous regions (parcels) is a challenging issue in fMRI analyses. This task has been recently integrated in the joint detection estimation [1] resulting in the so-called joint parcellation detection estimation (JPDE) model [2]. JPDE automatically estimates the parcels from the fMRI data but requires the desired number of parcels to be fixed. This is potentially critical in that the chosen number of parcels may influence detection-estimation performance. In this paper, we propose a model selection procedure to automatically set the number of parcels from the data. The selection procedure relies on the calculation of the free energy corresponding to each concurrent model, within the variational expectation maximization framework. Experiments on synthetic and real fMRI data demonstrate the ability of the proposed procedure to select the optimal number of parcels.},
  keywords = {biomedical MRI;brain;expectation-maximisation algorithm;feature extraction;feature selection;free energy;haemodynamics;medical image processing;neurophysiology;optimisation;parameter estimation;variational techniques;optimal parcel number selection;variational expectation maximization framework;free energy calculation;automatic parcel number setting;detection-estimation performance;parcel number selection effect;constant desired parcel number;automatic parcel estimation;JPDE model;joint parcellation detection estimation model;joint detection estimation;hemodynamically homogeneous regions;FMRI analysis;hemodynamic brain parcellation;model selection;Data models;Brain modeling;Hemodynamics;Joints;Estimation;Bayes methods;Educational institutions;fMRI;JDE;JPDE;Parcellation;VEM},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926613.pdf},
}
Dynamical analysis of brain seizure activity from EEG signals. Amini, L.; Jutten, C.; Pouyatos, B.; Depaulis, A.; and Roucard, C. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 36-40, Sep. 2014.

@InProceedings{6951966,
  author = {L. Amini and C. Jutten and B. Pouyatos and A. Depaulis and C. Roucard},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Dynamical analysis of brain seizure activity from EEG signals},
  year = {2014},
  pages = {36-40},
  abstract = {A sudden emergence of seizure activity on a normal background EEG can be seen from visual inspection of the intracranial EEG (iEEG) recordings of Genetic Absence Epilepsy Rat from Strasbourg (GAERS). We observe that most of the recording channels from different brain regions display seizure activity. We wonder if the brain behavior changes within a given seizure. Using source separation methods on temporal sliding windows, we develop a map of dynamic behavior to study this dynamicity. The map is built by computing the correlation functions between the main sources extracted in different time windows. The proposed method is applied on iEEG of four GAERS. We see that the behavior of the brain changes about 0.5s - 1.5s after onset, when the relevant temporal sources become very similar. The corresponding spatial maps for each time window show that the seizure activity starts from a focus and propagates quickly.},
  keywords = {blind source separation;electroencephalography;medical disorders;medical signal processing;dynamical analysis;brain seizure activity;EEG signals;visual inspection;intracranial EEG recordings;genetic absence epilepsy rat from strasbourg;GAERS;source separation methods;temporal sliding windows;dynamic behavior;spatial maps;iEEG recordings;Electroencephalography;Abstracts;Shape;Source Separation;Dynamic Analysis;Intracranial EEG;Seizure;Absence Epilepsy},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922899.pdf},
}
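
The dynamic-behaviour map can be emulated with any source-extraction front end. In the sketch below, dominant SVD components stand in for the separated sources (the paper applies source separation per window), and the map stores the largest absolute correlation between the main temporal sources of every pair of windows.

```python
import numpy as np

def window_sources(X, win, hop, n_src=2):
    """Dominant temporal sources (right singular vectors) per sliding window.
    X: channels x samples iEEG array."""
    starts = range(0, X.shape[1] - win + 1, hop)
    return [np.linalg.svd(X[:, s:s + win], full_matrices=False)[2][:n_src]
            for s in starts]

def similarity_map(sources):
    """M[i, j] = largest |correlation| between sources of windows i and j."""
    n = len(sources)
    M = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            k = len(sources[i])
            C = np.corrcoef(np.vstack([sources[i], sources[j]]))
            M[i, j] = M[j, i] = np.max(np.abs(C[:k, k:]))
    return M
```
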
Fast, variation-based methods for the analysis of extended brain sources. Becker, H.; Albera, L.; Comon, P.; Gribonval, R.; and Merlet, I. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 41-45, Sep. 2014.

@InProceedings{6951967,
  author = {H. Becker and L. Albera and P. Comon and R. Gribonval and I. Merlet},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Fast, variation-based methods for the analysis of extended brain sources},
  year = {2014},
  pages = {41-45},
  abstract = {Identifying the location and spatial extent of several highly correlated and simultaneously active brain sources from electroencephalographic (EEG) recordings and extracting the corresponding brain signals is a challenging problem. In a recent comparison of source imaging techniques, the VB-SCCD algorithm, which exploits the sparsity of the variational map of the sources, proved to be a promising approach. In this paper, we propose several ways to improve this method. In order to adjust the size of the estimated sources, we add a regularization term that imposes sparsity in the original source domain. Furthermore, we demonstrate the application of ADMM, which permits the optimization problem to be solved efficiently. Finally, we also consider the exploitation of the temporal structure of the data by employing L1,2-norm regularization. The performance of the resulting algorithm, called L1,2-SVB-SCCD, is evaluated based on realistic simulations in comparison to VB-SCCD and several state-of-the-art techniques for extended source localization.},
  keywords = {electroencephalography;medical image processing;optimisation;L1,2-SVB-SCCD;L1,2-norm regularization;temporal structure;optimization problem;ADMM;source domain;VB-SCCD algorithm;source imaging;brain signals;EEG recordings;electroencephalographic recordings;simultaneously active brain sources;correlated active brain sources;spatial extent;location identification;variation-based methods;extended brain source analysis;Electroencephalography;Imaging;Optimization;Brain modeling;Image reconstruction;Correlation;Robustness;EEG;extended source localization;ADMM;sparsity},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923397.pdf},
}
Singular spectrum analysis as a preprocessing filtering step for fNIRS brain computer interfaces. Spyrou, L.; Blokland, Y.; Farquhar, J.; and Bruhn, J. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 46-50, Sep. 2014.

@InProceedings{6951968,
  author = {L. Spyrou and Y. Blokland and J. Farquhar and J. Bruhn},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Singular spectrum analysis as a preprocessing filtering step for fNIRS brain computer interfaces},
  year = {2014},
  pages = {46-50},
  abstract = {Near Infrared Spectroscopy is a method that measures the brain's haemodynamic response. It is of interest in brain-computer interfaces where haemodynamic patterns in motor tasks are exploited to detect movement. However, the NIRS signal is usually corrupted with background biological processes, some of which are periodic or quasi-periodic in nature. Singular spectrum analysis (SSA) is a time-series decomposition method which separates a signal into a trend, oscillatory components and noise with minimal prior assumptions about their nature. Due to the frequency spectrum overlap of the movement response and of background processes such as Mayer waves, spectral filters are usually suboptimal. In this study, we perform SSA both in an online and a block fashion resulting in the removal of periodic components and in increased classification performance. Our study indicates that SSA is a practical method that can replace spectral filtering; it is evaluated on healthy participants and patients with tetraplegia.},
  keywords = {brain-computer interfaces;haemodynamics;medical image processing;singular spectrum analysis;preprocessing filtering step;fNIRS brain computer interfaces;near infrared spectroscopy;brain haemodynamic response;haemodynamic patterns;motor tasks;NIRS signal;background biological processes;SSA;time-series decomposition method;frequency spectrum;movement response;block fashion;periodic components;classification performance;spectral filtering;tetraplegia;Market research;Time series analysis;Heart rate;Image reconstruction;Spectral analysis;Noise;Matrix decomposition;Singular spectrum analysis;NIRS;BCI},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926897.pdf},
}
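
For readers unfamiliar with SSA, the core of the block variant is short: embed the series in a Hankel trajectory matrix, compute the SVD, and map each rank-one term back to a time series by diagonal averaging. A minimal numpy sketch; component grouping and the paper's online variant are omitted.

```python
import numpy as np

def ssa_components(x, L):
    """Basic singular spectrum analysis of a 1-D series x with window L."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])  # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])             # rank-one term
        # Diagonal averaging (Hankelization) back to a length-N series.
        flipped = Xk[::-1]
        comps.append(np.array([flipped.diagonal(t - (L - 1)).mean()
                               for t in range(N)]))
    return np.array(comps)   # x equals the sum of the rows (numerically)

# Denoising/detrending: keep or drop selected rows and sum the rest.
```
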
Representation of spectral envelope with warped frequency resolution for audio coder. Sugiura, R.; Kamamoto, Y.; Harada, N.; Kameoka, H.; and Moriya, T. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 51-55, Sep. 2014.

@InProceedings{6951969,
  author = {R. Sugiura and Y. Kamamoto and N. Harada and H. Kameoka and T. Moriya},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Representation of spectral envelope with warped frequency resolution for audio coder},
  year = {2014},
  pages = {51-55},
  abstract = {We have devised a method for representing frequency spectral envelopes with warped frequency resolution based on sparse non-negative matrices aiming at its use for frequency domain audio coding. With optimally prepared matrices, we can selectively control the resolution of spectral envelopes and enhance the coding efficiency. We show that the devised method can enhance the subjective quality of the state-of-the-art wide-band coder at 16 kbit/s at a cost of minor additional complexity. The method is therefore expected to be useful for low-bit-rate and low-delay audio coders for mobile communications.},
  keywords = {audio coding;frequency-domain analysis;sparse matrices;spectral envelope representation;warped frequency resolution;sparse nonnegative matrices;frequency domain audio coding;coding efficiency enhancement;wideband coder;low-bit-rate audio coder;low-delay audio coder;mobile communications;Speech;Frequency-domain analysis;Speech coding;Quantization (signal);Audio coding;Sparse matrices;audio coding;signal processing;frequency warping;non-negative matrix;TCX},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923071.pdf},
}
Direct linear conversion of LSP parameters for perceptual control in speech and audio coding. Sugiura, R.; Kamamoto, Y.; Harada, N.; Kameoka, H.; and Moriya, T. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 56-60, Sep. 2014.

@InProceedings{6951970,
  author = {R. Sugiura and Y. Kamamoto and N. Harada and H. Kameoka and T. Moriya},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Direct linear conversion of LSP parameters for perceptual control in speech and audio coding},
  year = {2014},
  pages = {56-60},
  abstract = {We have devised a direct and simple scheme for linear conversion of line spectrum pairs (LSP) with low computational complexity aiming at weighting or inverse weighting spectral envelopes for noise control in speech and audio coders. Using optimally prepared coefficients, we can perform the conversion directly in the LSP domain, which ensures low computational costs and also simplifies the check or the modification of unstable parameters. We show that this method performs the same as the weighting in the linear prediction coding domain but with lower complexity in a low-bit-rate situation. The devised method is therefore expected to be useful for low-bit-rate speech and audio coders for mobile communications.},
  keywords = {audio coding;computational complexity;linear predictive coding;speech coding;vocoders;mobile communications;low-bit-rate speech audio coders;low-bit-rate speech coders;linear prediction coding;noise control;inverse weighting spectral envelope;computational complexity;line spectrum pairs;audio coding;speech coding;perceptual control;LSP parameters;direct linear conversion;Speech;Speech coding;Audio coding;Quantization (signal);Vectors;Noise;Computational complexity;audio coding;signal processing;LSP;linear approximation;TCX},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923123.pdf},
}
Maximum likelihood based multi-channel isotropic reverberation reduction for hearing aids. Kuklasiński, A.; Doclo, S.; Jensen, S. H.; and Jensen, J. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 61-65, Sep. 2014.

@InProceedings{6951971,
  author = {A. Kuklasiński and S. Doclo and S. H. Jensen and J. Jensen},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Maximum likelihood based multi-channel isotropic reverberation reduction for hearing aids},
  year = {2014},
  pages = {61-65},
  abstract = {We propose a multi-channel Wiener filter for speech dereverberation in hearing aids. The proposed algorithm uses joint maximum likelihood estimation of the speech and late reverberation spectral variances, under the assumption that the late reverberant sound field is cylindrically isotropic. The dereverberation performance of the algorithm is evaluated using computer simulations with realistic hearing aid microphone signals including head-related effects. The algorithm is shown to work well with signals reverberated both by synthetic and by measured room impulse responses, achieving improvements in the order of 0.5 PESQ points and 5 dB frequency-weighted segmental SNR.},
  keywords = {maximum likelihood estimation;microphones;reverberation;speech processing;Wiener filters;multichannel isotropic reverberation reduction;hearing aids;multichannel Wiener filter;speech dereverberation;joint maximum likelihood estimation;dereverberation performance;realistic hearing aid microphone signals;head related effects;Speech;Reverberation;Covariance matrices;Interference;Microphones;Maximum likelihood estimation;Noise;multi-channel wiener filter;maximum likelihood;speech dereverberation;isotropic;hearing aids},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925137.pdf},
}
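
As a concrete anchor for the signal model: per time-frequency bin the multi-channel Wiener filter is w = Phi_y^{-1} Phi_x e_ref, with a rank-one speech covariance along the steering vector and late reverberation shaped by the coherence of a cylindrically isotropic field, which for two microphones at distance d is J0(2 pi f d / c). A sketch assuming the spectral variances are already known (the paper estimates them by joint maximum likelihood):

```python
import numpy as np
from scipy.special import j0

def cylindrical_coherence(dists, f, c=343.0):
    """Coherence matrix of a cylindrically isotropic field: J0(2*pi*f*d/c).
    dists: matrix of pairwise microphone distances in metres."""
    return j0(2.0 * np.pi * f * np.asarray(dists) / c)

def mwf_weights(phi_s, d, phi_r, Gamma, ref=0):
    """Rank-1 multi-channel Wiener filter for one time-frequency bin.
    phi_s: speech PSD, d: steering vector, phi_r: reverb PSD, Gamma: coherence."""
    Phi_x = phi_s * np.outer(d, d.conj())     # speech covariance (rank one)
    Phi_y = Phi_x + phi_r * Gamma             # speech plus late reverberation
    w = np.linalg.solve(Phi_y, Phi_x[:, ref])
    return w                                  # estimate: s_hat = w.conj() @ y_bin
```
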
Elimination of impulsive disturbances from stereo audio recordings. Niedźwiecki, M.; and Ciolek, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 66-70, Sep. 2014.

@InProceedings{6951972,
  author = {M. Niedźwiecki and M. Ciolek},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Elimination of impulsive disturbances from stereo audio recordings},
  year = {2014},
  pages = {66-70},
  abstract = {This paper presents a new approach to elimination of impulsive disturbances from stereo audio recordings. The proposed solution is based on vector autoregressive modeling of audio signals. On-line tracking of signal model parameters is performed using the stability-preserving Whittle-Wiggins-Robinson algorithm with exponential data weighting. Detection of noise pulses and model-based interpolation of the irrevocably distorted samples is realized using an adaptive, variable-order Kalman filter. The proposed approach is evaluated on a set of clean audio signals contaminated with real click waveforms extracted from silent parts of old gramophone recordings.},
  keywords = {audio recording;audio signal processing;autoregressive processes;interpolation;Kalman filters;gramophone recordings;real click waveforms;adaptive variable-order Kalman filter;irrevocably-distorted samples;model-based interpolation;noise pulse detection;exponential data weighting;stability-preserving Whittle-Wiggins-Robinson algorithm;signal model parameters;online tracking;audio signals;vector autoregressive modeling;stereo audio recordings;impulsive disturbance elimination;Noise;Reactive power;Kalman filters;Vectors;Interpolation;Mathematical model;Estimation;Elimination of impulsive disturbances},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921125.pdf},
}
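
A scalar toy version of the detection step: fit an autoregressive model by least squares and flag samples whose prediction residual is implausible under a robust noise-scale estimate. The paper's scheme is considerably richer (vector AR modeling of the stereo pair, Whittle-Wiggins-Robinson tracking, Kalman-based interpolation); this only illustrates the detection principle.

```python
import numpy as np

def detect_clicks(x, order=10, thresh=4.0):
    """Flag samples whose AR prediction residual is anomalously large."""
    N = len(x)
    # Regressor matrix: row t holds x[t+order-1], ..., x[t].
    X = np.column_stack([x[order - k - 1: N - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    e = x[order:] - X @ a                      # prediction residual
    sigma = np.median(np.abs(e)) / 0.6745      # robust scale (MAD)
    return np.where(np.abs(e) > thresh * sigma)[0] + order

# Flagged samples would then be re-interpolated from the fitted AR model.
```
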
Implementation and evaluation of the Vandermonde transform. Bäckström, T.; Fischer, J.; and Boley, D. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 71-75, Sep. 2014.

@InProceedings{6951993,
  author = {T. Bäckström and J. Fischer and D. Boley},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Implementation and evaluation of the Vandermonde transform},
  year = {2014},
  pages = {71-75},
  abstract = {The Vandermonde transform was recently presented as a time-frequency transform which, in contrast to the discrete Fourier transform, also decorrelates the signal. Although the approximate or asymptotic decorrelation provided by Fourier is sufficient in many cases, its performance is inadequate in applications which employ short windows. The Vandermonde transform will therefore be useful in speech and audio processing applications, which have to use short analysis windows because the input signal varies rapidly over time. Such applications are often used on mobile devices with limited computational capacity, whereby efficient computations are of paramount importance. Implementation of the Vandermonde transform has, however, turned out to be a considerable effort: it requires advanced numerical tools whose performance is optimized for complexity and accuracy. This contribution provides a baseline solution to this task including a performance evaluation.},
  keywords = {discrete Fourier transforms;signal processing;time-frequency analysis;Toeplitz matrices;Vandermonde transform;time-frequency transform;discrete Fourier transform;asymptotic decorrelation;approximate decorrelation;short analysis windows;performance evaluation;Toeplitz matrix;Complexity theory;Accuracy;Discrete Fourier transforms;Decorrelation;MATLAB;Symmetric matrices;time-frequency transforms;decorrelation;Vandermonde matrix;Toeplitz matrix;warped discrete Fourier transform},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569906583.pdf},
}
Aspects of favorable propagation in Massive MIMO. Ngo, H. Q.; Larsson, E. G.; and Marzetta, T. L. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 76-80, Sep. 2014.

@InProceedings{6951994,
  author = {H. Q. Ngo and E. G. Larsson and T. L. Marzetta},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Aspects of favorable propagation in Massive MIMO},
  year = {2014},
  pages = {76-80},
  abstract = {Favorable propagation, defined as mutual orthogonality among the vector-valued channels to the terminals, is one of the key properties of the radio channel that is exploited in Massive MIMO. However, there has been little work that studies this topic in detail. In this paper, we first show that favorable propagation offers the most desirable scenario in terms of maximizing the sum-capacity. One useful proxy for whether propagation is favorable or not is the channel condition number. However, this proxy is not good for the case where the norms of the channel vectors are not equal. For this case, to evaluate how favorable the propagation offered by the channel is, we propose a “distance from favorable propagation” measure, which is the gap between the sum-capacity and the maximum capacity obtained under favorable propagation. Secondly, we examine how favorable the channels can be for two extreme scenarios: i.i.d. Rayleigh fading and uniform random line-of-sight (UR-LoS). Both environments offer (nearly) favorable propagation. Furthermore, to analyze the UR-LoS model, we propose an urns-and-balls model. This model is simple and explains the singular value spread characteristic of the UR-LoS model well.},
  keywords = {MIMO communication;optimisation;Rayleigh channels;vectors;massive MIMO;favorable propagation;vector-valued channels;sum-capacity maximization;singular value spread characteristic;UR-LoS model;Rayleigh channels;MIMO;Vectors;Reactive power;Antennas},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924505.pdf},
}
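
The proposed measure is easy to state in code: compare the actual sum-capacity log2 det(I + rho G^H G) against the capacity the same channel norms would yield under exactly orthogonal (favorable) channels. A small sketch with an i.i.d. Rayleigh example:

```python
import numpy as np

def favorability_gap(G, rho):
    """Distance from favorable propagation for an M x K channel matrix G:
    capacity under orthogonal channels minus the actual sum-capacity (bits)."""
    K = G.shape[1]
    C = np.log2(np.linalg.det(np.eye(K) + rho * (G.conj().T @ G)).real)
    C_fav = np.sum(np.log2(1.0 + rho * np.linalg.norm(G, axis=0) ** 2))
    return C_fav - C

# i.i.d. Rayleigh example: the gap shrinks as the antenna count M grows.
M, K, rho = 128, 8, 1.0
G = (np.random.randn(M, K) + 1j * np.random.randn(M, K)) / np.sqrt(2.0)
print(favorability_gap(G, rho))
```
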
Channel estimation for millimeter-wave Very-Large MIMO systems. Araújo, D. C.; de Almeida, A. L. F.; Axnäs, J.; and Mota, J. C. M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 81-85, Sep. 2014.

@InProceedings{6951995,
  author = {D. C. Araújo and A. L. F. {de Almeida} and J. Axnäs and J. C. M. Mota},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Channel estimation for millimeter-wave Very-Large MIMO systems},
  year = {2014},
  pages = {81-85},
  abstract = {We present an efficient pilot-assisted technique for downlink channel estimation in Very Large MIMO (VL-MIMO) systems operating in a 60 GHz indoor channel. Our estimator exploits the inherent sparsity of the channel and requires quite low pilot overhead. It is based on a coarse estimation stage that capitalizes on compressed sensing, followed by a refinement stage to find the transmit/receive spatial frequencies. Considering a ray-tracing channel model, the system throughput is evaluated from computer simulations by considering different beamforming schemes designed from the estimated channel. Our results show that the proposed channel estimator performs quite well with very low pilot overhead.},
  keywords = {array signal processing;channel estimation;compressed sensing;millimetre wave propagation;MIMO communication;wireless channels;channel estimation;millimeter-wave very-large MIMO system;pilot-assisted technique;downlink channel estimation;indoor channel;coarse estimation stage;compressed sensing;ray-tracing channel model;frequency 60 GHz;OFDM;Channel estimation;Array signal processing;Vectors;Estimation;Downlink;Frequency estimation;Massive MIMO systems;millimeter-wave communications;compressive sensing;channel estimation},
  issn = {2076-1465},
  month = {Sep.},
}
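
The coarse stage is standard compressed sensing: the channel is sparse in an angle-domain dictionary, so a greedy solver such as OMP can recover the few active spatial frequencies from a small number of pilot projections. A generic sketch under those assumptions; the dictionary size and pilot matrix below are illustrative, and the paper's refinement stage is not shown.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: k-sparse x with y ~ A @ x."""
    r, S = y.copy(), []
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.conj().T @ r))))
        xs, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ xs
    x = np.zeros(A.shape[1], dtype=complex)
    x[S] = xs
    return x

# N-antenna ULA, G-point angle grid, M pilot measurements.
N, G, M, k = 64, 256, 24, 3
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(G)) / G) / np.sqrt(N)
P = (np.random.randn(M, N) + 1j * np.random.randn(M, N)) / np.sqrt(2 * M)
g = np.zeros(G, dtype=complex)
g[[10, 97, 200]] = [1.0, 0.5j, -0.3]          # sparse angle-domain channel
y = P @ (F @ g)                               # pilot observation
g_hat = omp(P @ F, y, k)                      # coarse sparse estimate
```
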
CHEMP receiver for large-scale multiuser MIMO systems using spatial modulation. Narasimhan, T. L.; and Chockalingam, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 86-90, Sep. 2014.

@InProceedings{6951996,
  author = {T. L. Narasimhan and A. Chockalingam},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {CHEMP receiver for large-scale multiuser MIMO systems using spatial modulation},
  year = {2014},
  pages = {86-90},
  abstract = {In spatial modulation (SM), information bits are conveyed through the index of the active transmit antenna in addition to the information bits conveyed through conventional modulation symbols. In this paper, we propose a receiver for large-scale multiuser spatial modulation MIMO (SM-MIMO) systems. The proposed receiver exploits the channel hardening phenomenon observed in large-dimensional MIMO channels. It works with a matched filtered system model. On this system model, it obtains an estimate of the matched filtered channel matrix (rather than the channel matrix itself) and uses this estimate for detecting the data. The data detection is done using an approximate message passing algorithm. The proposed receiver, referred to as the channel hardening-exploiting message passing receiver for SM (CHEMP-SM), is shown to achieve very good performance at low complexity.},
  keywords = {antenna arrays;channel estimation;matched filters;matrix algebra;message passing;MIMO communication;modulation;multiuser detection;radio receivers;transmitting antennas;CHEMP receiver;spatial modulation;active transmit antenna;information bits;modulation symbols;SM-MIMO systems;large-scale multiuser spatial modulation MIMO systems;channel hardening phenomenon;large-dimensional MIMO channels;matched filtered system model;matched filtered channel matrix estimation;data detection;approximate message passing algorithm;Receivers;Complexity theory;Detectors;MIMO;Transmitting antennas;Channel estimation;Radio frequency;Large-scale MIMO systems;spatial modulation;SM-MIMO;message passing;channel hardening},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925267.pdf},
}
\n
\n\n\n
\n In spatial modulation (SM), information bits are conveyed through the index of the active transmit antenna in addition to the information bits conveyed through conventional modulation symbols. In this paper, we propose a receiver for large-scale multiuser spatial modulation MIMO (SM-MIMO) systems. The proposed receiver exploits the channel hardening phenomenon observed in large-dimensional MIMO channels. It works with a matched filtered system model. On this system model, it obtains an estimate of the matched filtered channel matrix (rather than the channel matrix itself) and uses this estimate for detecting the data. The data detection is done using an approximate message passing algorithm. The proposed receiver, referred to as the channel hardening-exploiting message passing receiver for SM (CHEMP-SM), is shown to achieve very good performance at low complexity.\n
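A rough sketch of the matched-filtered system model the receiver operates on: z = Jx + v with J = HᴴH/N, where channel hardening makes J nearly diagonal as the antenna count N grows. The dimensions and the naive sign-slicer below are illustrative assumptions; the actual receiver runs approximate message passing on this model.

import numpy as np

rng = np.random.default_rng(1)
N, K = 128, 16                          # receive antennas, user streams
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=K) + 0j                  # toy BPSK symbols
y = H @ x + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Matched-filtered system model z = J x + v, with J = H^H H / N.
J = H.conj().T @ H / N
z = H.conj().T @ y / N

# Channel hardening: diag(J) concentrates near 1 and off-diagonals shrink with
# N, so even a naive slicer on z does well; the paper runs message passing here.
x_hat = np.sign(z.real)
print("off-diag level:", np.abs(J - np.diag(np.diag(J))).mean(),
      "symbol errors:", int(np.sum(x_hat != x.real)))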
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hardware realizable lattice-reduction-aided detectors for large-scale MIMO systems.\n \n \n \n \n\n\n \n Zhou, Q.; and Ma, X.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 91-95, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"HardwarePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6951997,\n  author = {Q. Zhou and X. Ma},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Hardware realizable lattice-reduction-aided detectors for large-scale MIMO systems},\n  year = {2014},\n  pages = {91-95},\n  abstract = {Because of their lower complexity and better error performance over K-best detectors, lattice-reduction (LR)-aided K-best detectors have recently proposed for large-scale multiinput multi-output (MIMO) detection. Among existing LR-aided K-best detectors, complex LR-aided K-best detector is more attractive compared to its real counterpart due to its potential lower latency and resources. However, one main difficulty in hardware implementation of complex LR-aided K-best is to efficiently find top K children of each layer in complex domain. In this paper, we propose and implement an LR-aided K-best algorithm that efficiently finds top K children in each layer when K is relatively small. Our implementation results on Xilinx VC707 FPGA board show that, with the aid of LR, the proposed LR-aided K-best implementation can support 3 Gbps transmissions for 16×16 MIMO systems with 1024-QAM with about 2.7 dB loss to the maximum likelihood detector at bit-error rate 10-4.},\n  keywords = {error statistics;maximum likelihood detection;MIMO communication;hardware realizable lattice reduction aided detectors;large scale MIMO systems;MIMO detection;maximum likelihood detector;bit error rate;Detectors;MIMO;Hardware;Field programmable gate arrays;Throughput;Signal processing algorithms;Lattices;Lattice reduction;large-scale MIMO;K-best algorithm;field-programmable gate array;very-large-scale integration},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925509.pdf},\n}\n\n
\n
\n\n\n
\n Because of their lower complexity and better error performance compared with K-best detectors, lattice-reduction (LR)-aided K-best detectors have recently been proposed for large-scale multi-input multi-output (MIMO) detection. Among existing LR-aided K-best detectors, the complex LR-aided K-best detector is more attractive than its real counterpart due to its potentially lower latency and resource usage. However, one main difficulty in the hardware implementation of complex LR-aided K-best is to efficiently find the top K children of each layer in the complex domain. In this paper, we propose and implement an LR-aided K-best algorithm that efficiently finds the top K children in each layer when K is relatively small. Our implementation results on a Xilinx VC707 FPGA board show that, with the aid of LR, the proposed LR-aided K-best implementation can support 3 Gbps transmission for 16×16 MIMO systems with 1024-QAM, with about 2.7 dB loss to the maximum likelihood detector at a bit-error rate of 10⁻⁴.\n
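The K-best core (without the lattice-reduction preprocessing) can be sketched as a breadth-first tree search that keeps the K lowest-metric partial symbol vectors per layer. This is a plain-Python illustration under assumed dimensions and alphabet, not the FPGA design.

import numpy as np

def k_best_detect(H, y, symbols, K):
    # Breadth-first K-best search on the QR-triangularized system y = H x + n:
    # keep only the K lowest-metric partial symbol vectors at every layer.
    Q, R = np.linalg.qr(H)
    yt = Q.conj().T @ y
    n = H.shape[1]
    partials = [([], 0.0)]                      # (symbols so far, metric)
    for layer in range(n - 1, -1, -1):          # detect bottom layer first
        children = []
        for path, metric in partials:
            x_tail = np.array(path[::-1])       # already-decided x[layer+1:]
            for s in symbols:
                interf = R[layer, layer + 1:] @ x_tail if path else 0.0
                m = metric + abs(yt[layer] - R[layer, layer] * s - interf) ** 2
                children.append((path + [s], m))
        children.sort(key=lambda c: c[1])
        partials = children[:K]                 # survivors of this layer
    return np.array(partials[0][0][::-1])

# Toy use: 4 BPSK layers, K = 4 surviving paths.
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4)); x = rng.choice([-1.0, 1.0], size=4)
y = H @ x + 0.05 * rng.standard_normal(6)
print(k_best_detect(H, y, symbols=[-1.0, 1.0], K=4), x)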
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Iterative detection and decoding in 3GPP LTE-based massive MIMO systems.\n \n \n \n \n\n\n \n Wu, M.; Dick, C.; Cavallaro, J. R.; and Studer, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 96-100, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"IterativePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6951998,\n  author = {M. Wu and C. Dick and J. R. Cavallaro and C. Studer},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Iterative detection and decoding in 3GPP LTE-based massive MIMO systems},\n  year = {2014},\n  pages = {96-100},\n  abstract = {Massive multiple-input multiple-output (MIMO) is expected to be a key technology in next-generation multi-user cellular systems for achieving higher throughput and better link reliability than existing (small-scale) MIMO systems. In this work, we develop a novel, low-complexity iterative detection and decoding algorithm for single carrier frequency division multiple access (SC-FDMA)-based massive MIMO systems, such as future 3GPP LTE-based systems. The proposed algorithm combines a novel frequency-domain minimum mean-square error (FD-MMSE) equalization method with parallel interference cancellation (PIC), requires low computational complexity, and achieves near-optimal error-rate performance in 3GPP-LTE-based massive MIMO systems having only 2× more base-station antennas than users.},\n  keywords = {3G mobile communication;cellular radio;computational complexity;equalisers;frequency-domain analysis;interference suppression;iterative decoding;least mean squares methods;Long Term Evolution;MIMO communication;mobile antennas;radiofrequency interference;low-complexity iterative detection-and-decoding algorithm;3GPP LTE-based massive MIMO systems;massive multiple-input multiple-output system;next-generation multi-user cellular systems;throughput;link reliability;single carrier frequency division multiple access-based massive MIMO systems;SC-FDMA;frequency-domain minimum mean-square error equalization method;FD-MMSE;parallel interference cancellation;low computational complexity;near-optimal error-rate performance;base-station antennas;MIMO;Complexity theory;Antennas;Detectors;Decoding;Iterative decoding;Frequency-domain analysis},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924733.pdf},\n}\n\n
\n
\n\n\n
\n Massive multiple-input multiple-output (MIMO) is expected to be a key technology in next-generation multi-user cellular systems for achieving higher throughput and better link reliability than existing (small-scale) MIMO systems. In this work, we develop a novel, low-complexity iterative detection and decoding algorithm for single carrier frequency division multiple access (SC-FDMA)-based massive MIMO systems, such as future 3GPP LTE-based systems. The proposed algorithm combines a novel frequency-domain minimum mean-square error (FD-MMSE) equalization method with parallel interference cancellation (PIC), requires low computational complexity, and achieves near-optimal error-rate performance in 3GPP-LTE-based massive MIMO systems having only 2× more base-station antennas than users.\n
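The FD-MMSE stage reduces to one small regularized matrix solve per subcarrier. A minimal sketch, assuming toy dimensions and omitting the parallel interference cancellation loop the paper combines it with:

import numpy as np

def fd_mmse_equalize(H_f, y_f, n0):
    # Per-subcarrier MMSE: one small regularized solve per tone.
    # H_f: (tones, B, U) channel, y_f: (tones, B) received, n0: noise variance.
    tones, B, U = H_f.shape
    x_hat = np.empty((tones, U), dtype=complex)
    for k in range(tones):
        Hk = H_f[k]
        G = Hk.conj().T @ Hk + n0 * np.eye(U)
        x_hat[k] = np.linalg.solve(G, Hk.conj().T @ y_f[k])
    return x_hat

# Toy dimensions: 64 tones, 8 base-station antennas, 4 users.
rng = np.random.default_rng(0)
Hf = rng.standard_normal((64, 8, 4)) + 1j * rng.standard_normal((64, 8, 4))
xf = rng.standard_normal((64, 4)) + 1j * rng.standard_normal((64, 4))
yf = np.einsum('kbu,ku->kb', Hf, xf) + 0.01 * rng.standard_normal((64, 8))
err = np.linalg.norm(fd_mmse_equalize(Hf, yf, n0=1e-4) - xf) / np.linalg.norm(xf)
print(err)   # small relative error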
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Challenges in multimodal data fusion.\n \n \n \n \n\n\n \n Lahat, D.; Adalý, T.; and Jutten, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 101-105, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ChallengesPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6951999,\n  author = {D. Lahat and T. Adalý and C. Jutten},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Challenges in multimodal data fusion},\n  year = {2014},\n  pages = {101-105},\n  abstract = {In various disciplines, information about the same phenomenon can be acquired from different types of detectors, at different conditions, different observations times, in multiple experiments or subjects, etc. We use the term “modality” to denote each such type of acquisition framework. Due to the rich characteristics of natural phenomena, as well as of the environments in which they occur, it is rare that a single modality can provide complete knowledge of the phenomenon of interest. The increasing availability of several modalities at once introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. It is the aim of this paper to evoke and promote various challenges in multimodal data fusion at the conceptual level, without focusing on any specific model, method or application.},\n  keywords = {sensor fusion;degrees of freedom;acquisition framework;multimodal data fusion;Data integration;Data models;Electroencephalography;Analytical models;Spatial resolution;Imaging;Brain modeling;Data fusion;multimodality},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923723.pdf},\n}\n\n
\n
\n\n\n
\n In various disciplines, information about the same phenomenon can be acquired from different types of detectors, under different conditions, at different observation times, in multiple experiments or subjects, etc. We use the term “modality” to denote each such type of acquisition framework. Due to the rich characteristics of natural phenomena, as well as of the environments in which they occur, it is rare that a single modality can provide complete knowledge of the phenomenon of interest. The increasing availability of several modalities at once introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. It is the aim of this paper to evoke and promote various challenges in multimodal data fusion at the conceptual level, without focusing on any specific model, method or application.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Challenges and opportunities of multimodality and Data Fusion in Remote Sensing.\n \n \n \n \n\n\n \n Dalla Mura, M.; Prasad, S.; Pacifici, F.; Gamba, P.; and Chanussot, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 106-110, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ChallengesPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952000,\n  author = {M. {Dalla Mura} and S. Prasad and F. Pacifici and P. Gamba and J. Chanussot},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Challenges and opportunities of multimodality and Data Fusion in Remote Sensing},\n  year = {2014},\n  pages = {106-110},\n  abstract = {Remote sensing is one of the most common ways to extract relevant information about the Earth through observations. Remote sensing acquisitions can be done by both active (SAR, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. According to the sensor, diverse information of Earth's surface can be obtained. These devices provide information about the structure (optical, SAR), elevation (LiDAR) and material content (multiand hyperspectral). Together they can provide information about land use (urban, climatic changes), natural disasters (floods, hurricanes, earthquakes), and potential exploitation (oil fields, minerals). In addition, images taken at different times can provide information about damages from floods, fires, seasonal changes etc. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion contests (organized by the IEEE Geoscience and Remote Sensing Society) which has been fostering the development of research and applications on this topic during the past decade.},\n  keywords = {remote sensing;sensor fusion;multimodality;data fusion;remote sensing;passive devices;active devices;SAR;LiDAR;hyperspectral;multispectral;natural disasters;climatic changes;hurricanes;earthquakes;floods;Earth observation;Data Fusion;IEEE Geoscience and Remote Sensing Society;Data integration;Optical imaging;Optical sensors;Synthetic aperture radar;Remote sensing;Laser radar;Spatial resolution;Data fusion;remote sensing;pansharpening;classification;change detection},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924555.pdf},\n}\n\n
\n
\n\n\n
\n Remote sensing is one of the most common ways to extract relevant information about the Earth through observations. Remote sensing acquisitions can be done by both active (SAR, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. Depending on the sensor, diverse information about the Earth's surface can be obtained. These devices provide information about the structure (optical, SAR), elevation (LiDAR) and material content (multi- and hyperspectral). Together they can provide information about land use (urban, climatic changes), natural disasters (floods, hurricanes, earthquakes), and potential exploitation (oil fields, minerals). In addition, images taken at different times can provide information about damage from floods, fires, seasonal changes, etc. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion contests (organized by the IEEE Geoscience and Remote Sensing Society), which have been fostering the development of research and applications on this topic during the past decade.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A flexible modeling framework for coupled matrix and tensor factorizations.\n \n \n \n \n\n\n \n Acar, E.; Nilsson, M.; and Saunders, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 111-115, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952001,\n  author = {E. Acar and M. Nilsson and M. Saunders},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A flexible modeling framework for coupled matrix and tensor factorizations},\n  year = {2014},\n  pages = {111-115},\n  abstract = {Joint analysis of data from multiple sources has proved useful in many disciplines including metabolomics and social network analysis. However, data fusion remains a challenging task in need of data mining tools that can capture the underlying structures from multi-relational and heterogeneous data sources. In order to address this challenge, data fusion has been formulated as a coupled matrix and tensor factorization (CMTF) problem. Coupled factorization problems have commonly been solved using alternating methods and, recently, unconstrained all-at-once optimization algorithms. In this paper, unlike previous studies, in order to have a flexible modeling framework, we use a general-purpose optimization solver that solves for all factor matrices simultaneously and is capable of handling additional linear/nonlinear constraints with a nonlinear objective function. We formulate CMTF as a constrained optimization problem and develop accurate models more robust to overfactoring. The effectiveness of the proposed modeling/algorithmic framework is demonstrated on simulated and real data.},\n  keywords = {data mining;matrix decomposition;optimisation;sensor fusion;tensors;coupled matrix and tensor factorization problem;flexible modeling framework;data fusion;CMTF problem;general-purpose optimization solver;constrained optimization problem;all factor matrices;nonlinear objective function;unconstrained all-at-once optimization algorithms;Tensile stress;Data models;Optimization;Data integration;Nuclear magnetic resonance;Chemicals;Joints;data fusion;tensor factorizations;nonlinear optimization;nonlinear constraints;SNOPT},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569911163.pdf},\n}\n\n
\n
\n\n\n
\n Joint analysis of data from multiple sources has proved useful in many disciplines including metabolomics and social network analysis. However, data fusion remains a challenging task in need of data mining tools that can capture the underlying structures from multi-relational and heterogeneous data sources. In order to address this challenge, data fusion has been formulated as a coupled matrix and tensor factorization (CMTF) problem. Coupled factorization problems have commonly been solved using alternating methods and, recently, unconstrained all-at-once optimization algorithms. In this paper, unlike previous studies, in order to have a flexible modeling framework, we use a general-purpose optimization solver that solves for all factor matrices simultaneously and is capable of handling additional linear/nonlinear constraints with a nonlinear objective function. We formulate CMTF as a constrained optimization problem and develop accurate models more robust to overfactoring. The effectiveness of the proposed modeling/algorithmic framework is demonstrated on simulated and real data.\n
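The coupling idea can be sketched in its simplest matrix-only form: two data sets sharing one factor matrix, fit by alternating least squares. The paper's framework handles tensors, constraints and an all-at-once solver (SNOPT); the ALS below is only a stand-in to show the shared-factor objective.

import numpy as np

def coupled_mf(X, Y, r, iters=200, seed=0):
    # ALS for min ||X - A B^T||^2 + ||Y - A C^T||^2: the factor A is shared,
    # which is the coupling; the paper's CMTF extends this to tensor modes.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[0], r))
    B = rng.standard_normal((X.shape[1], r))
    C = rng.standard_normal((Y.shape[1], r))
    for _ in range(iters):
        M = np.concatenate([B, C]); T = np.concatenate([X, Y], axis=1)
        A = T @ M @ np.linalg.pinv(M.T @ M)     # A sees both data sets
        B = X.T @ A @ np.linalg.pinv(A.T @ A)
        C = Y.T @ A @ np.linalg.pinv(A.T @ A)
    return A, B, C

# Toy check: both data sets generated from the same shared factor A0.
rng = np.random.default_rng(1)
A0 = rng.standard_normal((30, 3))
X = A0 @ rng.standard_normal((3, 20)); Y = A0 @ rng.standard_normal((3, 15))
A, B, C = coupled_mf(X, Y, r=3)
print(np.linalg.norm(X - A @ B.T), np.linalg.norm(Y - A @ C.T))   # ~ 0, ~ 0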
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Geometry calibration of distributed microphone arrays exploiting audio-visual correspondences.\n \n \n \n \n\n\n \n Plinge, A.; and Fink, G. A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 116-120, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"GeometryPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952002,\n  author = {A. Plinge and G. A. Fink},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Geometry calibration of distributed microphone arrays exploiting audio-visual correspondences},\n  year = {2014},\n  pages = {116-120},\n  abstract = {Smart rooms are used for a growing number of practical applications. They are often equipped with microphones and cameras allowing acoustic and visual tracking of persons. For that, the geometry of the sensors has to be calibrated. In this paper, a method is introduced that calibrates the microphone arrays by using the visual localization of a speaker at a small number of fixed positions. By matching the positions to the direction of arrival (DoA) estimates of the microphone arrays, their absolute position and orientation are derived. Data from a reverberant smart room is used to show that the proposed method can estimate the absolute geometry with about 0.1m and 2° precision. The calibration is good enough for acoustic and multi modal tracking applications and eliminates the need for dedicated calibration measures by using the tracking data itself.},\n  keywords = {acoustic signal processing;direction-of-arrival estimation;microphone arrays;geometry calibration;distributed microphone arrays;audio-visual correspondences;direction of arrival estimates;reverberant smart room;multi modal tracking applications;Microphone arrays;Calibration;Acoustics;Direction-of-arrival estimation;Geometry;Speech;microphone array;distributed sensor network;geometry calibration;speaker tracking},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924703.pdf},\n}\n\n
\n
\n\n\n
\n Smart rooms are used for a growing number of practical applications. They are often equipped with microphones and cameras allowing acoustic and visual tracking of persons. For that, the geometry of the sensors has to be calibrated. In this paper, a method is introduced that calibrates the microphone arrays by using the visual localization of a speaker at a small number of fixed positions. By matching the positions to the direction of arrival (DoA) estimates of the microphone arrays, their absolute position and orientation are derived. Data from a reverberant smart room is used to show that the proposed method can estimate the absolute geometry with about 0.1 m and 2° precision. The calibration is good enough for acoustic and multimodal tracking applications and eliminates the need for dedicated calibration measures by using the tracking data itself.\n
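The geometric fit itself can be sketched as a small nonlinear least-squares problem: find the 2-D array position and orientation whose predicted DoAs to the known speaker positions best match the measured ones. The positions and angles below are made-up toy values, not the paper's data.

import numpy as np
from scipy.optimize import least_squares

def calibrate_array(speaker_xy, doa_meas):
    # Fit 2-D array position (x, y) and orientation theta so that predicted
    # DoAs toward the known speaker positions match the measured ones.
    def residuals(params):
        x, y, theta = params
        pred = np.arctan2(speaker_xy[:, 1] - y, speaker_xy[:, 0] - x) - theta
        return np.angle(np.exp(1j * (pred - doa_meas)))   # wrap to (-pi, pi]
    return least_squares(residuals, x0=[0.0, 0.0, 0.0]).x

# Toy check: array truly at (1, 2) with 0.3 rad orientation.
truth = np.array([1.0, 2.0, 0.3])
spk = np.array([[4.0, 1.0], [0.0, 5.0], [3.0, 4.0], [-2.0, 0.0]])
doas = np.arctan2(spk[:, 1] - truth[1], spk[:, 0] - truth[0]) - truth[2]
print(calibrate_array(spk, doas))   # ~ [1.0, 2.0, 0.3]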
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Incorporating higher dimensionality in joint decomposition of EEG and fMRI.\n \n \n \n \n\n\n \n Swinnen, W.; Hunyadi, B.; Acar, E.; Huffe, S. V.; and De Vos, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 121-125, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"IncorporatingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952003,\n  author = {W. Swinnen and B. Hunyadi and E. Acar and S. V. Huffe and M. {De Vos}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Incorporating higher dimensionality in joint decomposition of EEG and fMRI},\n  year = {2014},\n  pages = {121-125},\n  abstract = {EEG-fMRI research to study brain function became popular because of the complementarity of the modalities. Through the use of data-driven approaches such as jointICA, sources extracted from EEG can be linked to regions in fMRI. Joint-ICA in its standard formulation however does not allow for the inclusion of multiple EEG electrodes, so it is a rather arbitrary choice which electrode is used in the analysis. In this study, we explore several ways to include the higher dimensionality of the EEG during a joint decomposition of EEG and fMRI. Our results show that incorporation of multiple channels in the jointICA can reveal new relations between fMRI activation maps and ERP features.},\n  keywords = {bioelectric potentials;biomedical electrodes;biomedical MRI;electroencephalography;feature extraction;medical image processing;neurophysiology;joint decomposition;brain function;data-driven approaches;joint-ICA;multiple EEG electrodes;fMRI activation maps;ERP feature extraction;electroencephalography;functional magnetic resonance imaging;Electroencephalography;Electrodes;Integrated circuits;Visualization;Joints;Physiology;Data mining;Multimodal;EEG-fMRI;joint decomposition;jointICA},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924695.pdf},\n}\n\n
\n
\n\n\n
\n EEG-fMRI research for studying brain function has become popular because of the complementarity of the two modalities. Through the use of data-driven approaches such as jointICA, sources extracted from EEG can be linked to regions in fMRI. JointICA in its standard formulation, however, does not allow for the inclusion of multiple EEG electrodes, so the choice of which electrode to use in the analysis is rather arbitrary. In this study, we explore several ways to include the higher dimensionality of the EEG during a joint decomposition of EEG and fMRI. Our results show that incorporating multiple channels in the jointICA can reveal new relations between fMRI activation maps and ERP features.\n
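A minimal sketch of the jointICA recipe under assumed toy shapes: z-score each modality, concatenate the feature vectors per subject, and run one ICA so that both modalities share the subject-wise mixing coefficients. scikit-learn's FastICA stands in for the ICA step; the paper's multi-channel variants go beyond this.

import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical shapes: per subject, an ERP feature vector (several channels
# concatenated) and a flattened fMRI activation map.
n_subjects, n_erp, n_fmri = 20, 3 * 200, 5000
rng = np.random.default_rng(0)
erp = rng.standard_normal((n_subjects, n_erp))
fmri = rng.standard_normal((n_subjects, n_fmri))

# z-score each modality so neither dominates, then concatenate the features:
# jointICA factorizes [EEG | fMRI] with shared per-subject mixing weights.
zscore = lambda m: (m - m.mean(0)) / (m.std(0) + 1e-12)
joint = np.hstack([zscore(erp), zscore(fmri)])

ica = FastICA(n_components=5, random_state=0)
maps = ica.fit_transform(joint.T).T      # (5, n_erp + n_fmri) joint source maps
loadings = ica.mixing_                   # (n_subjects, 5) shared subject weights
eeg_maps, fmri_maps = maps[:, :n_erp], maps[:, n_erp:]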
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Secrecy rate optimization for a MIMO secrecy channel based on Stackelberg game.\n \n \n \n \n\n\n \n Chu, Z.; Cumanan, K.; Ding, Z.; Johnston, M.; and Goff, S. L.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 126-130, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"SecrecyPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952004,\n  author = {Z. Chu and K. Cumanan and Z. Ding and M. Johnston and S. L. Goff},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Secrecy rate optimization for a MIMO secrecy channel based on Stackelberg game},\n  year = {2014},\n  pages = {126-130},\n  abstract = {In this paper, we consider a multi-input-multi-output (MIMO) wiretap channel with a multi-antenna eavesdropper, where a private cooperative jammer is employed to improve the achievable secrecy rate. The legitimate user pays the legitimate transmitter for its secured communication based on the achieved secrecy rate. We first approximate the legitimate transmitter covariance matrix by employing Taylor series expansion, then this secrecy rate problem can be formulated into a Stackelberg game based on a fixed covariance matrix of the transmitter, where the transmitter and the jammer try to maximize their revenues. This secrecy rate maximization problem is formulated into a Stackelberg game where the jammer and the transmitter are the leader and follower of the game, respectively. For the proposed game, Stackelberg equilibrium is analytically derived. Simulation results are provided to show that the revenue functions of the legitimate user and the jammer are concave functions and the Stackelberg equilibrium solution has been validated.},\n  keywords = {antenna arrays;covariance matrices;game theory;MIMO communication;optimisation;telecommunication security;secrecy rate optimization;MIMO secrecy channel;Stackelberg game;multiple-input-multiple-output wiretap channel;multiantenna eavesdropper;private cooperative jammer;achievable secrecy rate;legitimate transmitter;secured communication;legitimate transmitter covariance matrix;Taylor series expansion;fixed covariance matrix;secrecy rate maximization problem;revenue function;concave function;Stackelberg equilibrium solution;Jamming;Games;Transmitters;Interference;Security;MIMO;Covariance matrices;MIMO system;physical-layer secrecy;private jammer;game theory;Stackelberg game},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569907257.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we consider a multi-input multi-output (MIMO) wiretap channel with a multi-antenna eavesdropper, where a private cooperative jammer is employed to improve the achievable secrecy rate. The legitimate user pays the legitimate transmitter for its secured communication based on the achieved secrecy rate. We first approximate the legitimate transmitter covariance matrix by employing a Taylor series expansion; the secrecy rate maximization problem can then be formulated as a Stackelberg game based on a fixed covariance matrix of the transmitter, in which the transmitter and the jammer try to maximize their revenues, with the jammer as the leader and the transmitter as the follower of the game. For the proposed game, the Stackelberg equilibrium is analytically derived. Simulation results show that the revenue functions of the legitimate user and the jammer are concave, and they validate the Stackelberg equilibrium solution.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast average gossiping under asymmetric links in WSNS.\n \n \n \n \n\n\n \n Asensio-Marco, C.; and Beferull-Lozano, B.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 131-135, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"FastPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952005,\n  author = {C. Asensio-Marco and B. Beferull-Lozano},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Fast average gossiping under asymmetric links in WSNS},\n  year = {2014},\n  pages = {131-135},\n  abstract = {Wireless Sensor Networks are a recent technology where the nodes cooperate to obtain, in a totally distributed way, certain function of the collected data. An important example of these distributed processes is the average gossip algorithm, which allows the nodes to obtain the global average by only using local data exchanges. This process is traditionally slow, but can be accelerated by introducing geographic information or by exploiting the broadcast nature of the wireless medium. However, when a gossip protocol utilizes long geographic routes or broadcast communications, its convergence is not easily guaranteed due to asymmetry in communications. Alternatively, we propose an asymmetric version of the gossip algorithm that exploits residual information involved in each asymmetric exchange. Our asymmetric gossip algorithm achieves convergence faster than existing studies in the related literature. Numerical results are presented to show clearly the validity and efficiency of our approach.},\n  keywords = {broadcast communication;distributed algorithms;protocols;wireless sensor networks;broadcast communications;gossip protocol;geographic information;local data exchanges;average gossip algorithm;distributed process;wireless sensor networks;asymmetric links;fast average gossiping;Convergence;Unicast;Wireless sensor networks;Vectors;Acceleration;Protocols;Probabilistic logic},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569916839.pdf},\n}\n\n
\n
\n\n\n
\n Wireless Sensor Networks are a recent technology in which the nodes cooperate to obtain, in a totally distributed way, a certain function of the collected data. An important example of these distributed processes is the average gossip algorithm, which allows the nodes to obtain the global average by only using local data exchanges. This process is traditionally slow, but can be accelerated by introducing geographic information or by exploiting the broadcast nature of the wireless medium. However, when a gossip protocol utilizes long geographic routes or broadcast communications, its convergence is not easily guaranteed due to asymmetry in communications. As an alternative, we propose an asymmetric version of the gossip algorithm that exploits the residual information involved in each asymmetric exchange. Our asymmetric gossip algorithm achieves faster convergence than existing methods in the related literature. Numerical results are presented that clearly show the validity and efficiency of our approach.\n
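For intuition, a classic algorithm that already averages correctly over asymmetric (directed) links is push-sum, where each node tracks a value/weight pair whose ratio converges to the global mean. This is the textbook push-sum of Kempe et al., shown only as a baseline; it is not the paper's residual-based scheme, and the topology and round count are toy assumptions.

import numpy as np

def push_sum(values, out_neighbors, rounds=600, seed=0):
    # Push-sum: each node keeps a (sum, weight) pair; the ratio s/w at every
    # node converges to the global average even over one-way (asymmetric) links.
    rng = np.random.default_rng(seed)
    s = np.array(values, dtype=float)
    w = np.ones(len(values))
    for _ in range(rounds):
        i = int(rng.integers(len(values)))
        j = rng.choice(out_neighbors[i])          # directed transmission i -> j
        s[i] *= 0.5; w[i] *= 0.5                  # keep half of the mass...
        s[j] += s[i]; w[j] += w[i]                # ...push the other half to j
    return s / w

# Directed ring: node i can only send to i+1, so every link is asymmetric.
n = 10
nbrs = {i: [(i + 1) % n] for i in range(n)}
print(push_sum(np.arange(n, dtype=float), nbrs))  # all entries near 4.5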
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Blind separation of sources with finite rate of innovation.\n \n \n \n \n\n\n \n Porter, R.; Tadic, V.; and Achim, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 136-140, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"BlindPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952006,\n  author = {R. Porter and V. Tadic and A. Achim},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Blind separation of sources with finite rate of innovation},\n  year = {2014},\n  pages = {136-140},\n  abstract = {We propose a method for recovering the parameters of periodic signals with finite rate of innovation sampled using a raised cosine pulse. We show that the proposed method exhibits the same numerical stability as existing methods of its type, and we investigate the effect of oversampling on the performance of our method in the presence of noise. Our method can also be applied to non-periodic signals and we assess the efficacy of signal recovery in this case. Finally, we show that the problem of cochannel QPSK signal separation can be converted into a general finite rate of innovation framework, and we test the effectiveness of this approach.},\n  keywords = {blind source separation;quadrature phase shift keying;blind source separation;cosine pulse;nonperiodic signals;signal recovery;cochannel QPSK signal separation;Technological innovation;Phase shift keying;Kernel;Source separation;Noise;Educational institutions;Numerical stability;Raised Cosine;QPSK;Finite Rate of Innovation;Signal Separation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569918087.pdf},\n}\n\n
\n
\n\n\n
\n We propose a method for recovering the parameters of periodic signals with finite rate of innovation sampled using a raised cosine pulse. We show that the proposed method exhibits the same numerical stability as existing methods of its type, and we investigate the effect of oversampling on the performance of our method in the presence of noise. Our method can also be applied to non-periodic signals and we assess the efficacy of signal recovery in this case. Finally, we show that the problem of cochannel QPSK signal separation can be converted into a general finite rate of innovation framework, and we test the effectiveness of this approach.\n
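The standard FRI machinery behind such methods is the annihilating filter: from 2k+1 consecutive Fourier coefficients of a k-Dirac stream, a Toeplitz null-space vector yields a polynomial whose roots encode the innovation locations. A minimal sketch of that classical step (not the paper's raised-cosine pipeline):

import numpy as np
from scipy.linalg import toeplitz

def dirac_locations(m_hat, k):
    # Annihilating filter: m_hat[n] = sum_l a_l exp(-2j*pi*n*t_l), n = 0..2k.
    # The null vector of the Toeplitz system gives a polynomial whose roots
    # are exp(-2j*pi*t_l), i.e. the innovation locations.
    T = toeplitz(m_hat[k:2 * k + 1], m_hat[k::-1])
    h = np.linalg.svd(T)[2][-1].conj()
    return np.sort(np.angle(np.roots(h)) / (-2 * np.pi) % 1)

# Toy check with two Diracs at t = 0.2 and 0.55.
k = 2
t_true = np.array([0.2, 0.55]); a = np.array([1.0, 0.7])
n = np.arange(2 * k + 1)
m_hat = (a * np.exp(-2j * np.pi * np.outer(n, t_true))).sum(axis=1)
print(dirac_locations(m_hat, k))   # ~ [0.2, 0.55]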
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimal quantization and power allocation for energy-based distributed sensor detection.\n \n \n \n \n\n\n \n Nurellari, E.; McLernon, D.; Ghogho, M.; and Aldalahmeh, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 141-145, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OptimalPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952007,\n  author = {E. Nurellari and D. McLernon and M. Ghogho and S. Aldalahmeh},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Optimal quantization and power allocation for energy-based distributed sensor detection},\n  year = {2014},\n  pages = {141-145},\n  abstract = {We consider the decentralized detection of an unknown deterministic signal in a spatially uncorrelated distributed wireless sensor network. N samples from the signal of interest are gathered by each of the M spatially distributed sensors, and the energy is estimated by each sensor. The sensors send their quantized information over orthogonal channels to the fusion center (FC) which linearly combines them and makes a final decision. We show how by maximizing the modified deflection coefficient we can calculate the optimal transmit power allocation for each sensor and the optimal number of quantization bits to match the channel capacity.},\n  keywords = {telecommunication power management;wireless channels;wireless sensor networks;optimal quantization;power allocation;energy based distributed sensor detection;decentralized detection;wireless sensor network;distributed sensors;orthogonal channels;fusion center;FC;optimal transmit power allocation;channel capacity;quantization bits;optimal number;Optimized production technology;Reactive power;Wireless sensor networks;Resource management;Quantization (signal);Vectors;Distributed detection;soft decision;wireless sensor networks},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924125.pdf},\n}\n\n
\n
\n\n\n
\n We consider the decentralized detection of an unknown deterministic signal in a spatially uncorrelated distributed wireless sensor network. N samples from the signal of interest are gathered by each of the M spatially distributed sensors, and the energy is estimated by each sensor. The sensors send their quantized information over orthogonal channels to the fusion center (FC), which linearly combines them and makes a final decision. We show how, by maximizing the modified deflection coefficient, we can calculate the optimal transmit power allocation for each sensor and the optimal number of quantization bits to match the channel capacity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A maxmin model for solving channel assignment problem in IEEE 802.11 networks.\n \n \n \n \n\n\n \n Elwekeil, M.; Alghoniemy, M.; Muta, O.; Abdel-Rahman, A.; Furukawa, H.; and Gacanin, H.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 146-150, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952008,\n  author = {M. Elwekeil and M. Alghoniemy and O. Muta and A. Abdel-Rahman and H. Furukawa and H. Gacanin},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A maxmin model for solving channel assignment problem in IEEE 802.11 networks},\n  year = {2014},\n  pages = {146-150},\n  abstract = {In this paper, an optimization model for solving the channel assignment problem in multi-cell WLANs is proposed. This model is based on maximizing the minimum distance between access points (APs) that work on the same channel. The proposed model is formulated in the form of a mixed integer linear program (MILP). The main advantage of the proposed algorithm is that it ensures non-overlapping channel assignment with no overhead power measurements. The proposed channel assignment algorithm can be implemented within practical time frames for different topology sizes. Simulation results indicate that the proposed algorithm exhibits better performance than that of the pick-first greedy algorithm and the single channel assignment method.},\n  keywords = {cellular radio;channel allocation;integer programming;linear programming;wireless LAN;maxmin model;channel assignment problem;IEEE 802.11 networks;optimization model;multicell WLAN;minimum distance maximization;access points;AP;mixed integer linear program;MILP;nonoverlapping channel assignment;time frame;topology size;pick-first greedy algorithm;single-channel assignment method;Abstracts;Indexes;Companies;Integrated optics;Buildings;WLAN;IEEE 802.11;channel assignment;integer programming;maxmin problem},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924143.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, an optimization model for solving the channel assignment problem in multi-cell WLANs is proposed. This model is based on maximizing the minimum distance between access points (APs) that work on the same channel. The proposed model is formulated in the form of a mixed integer linear program (MILP). The main advantage of the proposed algorithm is that it ensures non-overlapping channel assignment with no overhead power measurements. The proposed channel assignment algorithm can be implemented within practical time frames for different topology sizes. Simulation results indicate that the proposed algorithm exhibits better performance than that of the pick-first greedy algorithm and the single channel assignment method.\n
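The maxmin objective can be illustrated with a greedy stand-in: assign each AP the channel that maximizes its distance to the nearest already-assigned co-channel AP. The coordinates and channel count below are toy assumptions; the paper solves the exact problem with a mixed integer linear program rather than this heuristic.

import numpy as np

def greedy_maxmin_assign(ap_xy, n_channels):
    # Greedy stand-in for the MILP: give each AP the channel that maximizes its
    # distance to the nearest co-channel AP assigned so far.
    ap_xy = np.asarray(ap_xy, dtype=float)
    assign = {}
    for i in range(len(ap_xy)):
        best_c, best_d = 0, -1.0
        for c in range(n_channels):
            same = [j for j, cj in assign.items() if cj == c]
            d = min((float(np.linalg.norm(ap_xy[i] - ap_xy[j])) for j in same),
                    default=float("inf"))
            if d > best_d:
                best_c, best_d = c, d
        assign[i] = best_c
    return assign

print(greedy_maxmin_assign([[0, 0], [1, 0], [0, 1], [5, 5]], n_channels=3))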
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Energy efficiency improvements in HetNets by exploiting device-to-device communications.\n \n \n \n\n\n \n Sambo, Y. A.; Shakir, M. Z.; Qaraqe, K. A.; Serpedin, E.; Imran, M. A.; and Ahmed, B.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 151-155, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952009,\n  author = {Y. A. Sambo and M. Z. Shakir and K. A. Qaraqe and E. Serpedin and M. A. Imran and B. Ahmed},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Energy efficiency improvements in HetNets by exploiting device-to-device communications},\n  year = {2014},\n  pages = {151-155},\n  abstract = {The growth in mobile communications has resulted in a significant increase in energy consumption and carbon emissions, which could have serious economic and environmental implications. Consequently, energy consumption has become a key criterion for the design of future mobile communication systems. Device-to-device (D2D) communication has been shown to improve the spectral efficiency and also reduce the power consumption of mobile communication networks. In this paper, we propose a two-tier deployment of D2D communication within a network to reduce the overall power consumption of the network and compared it with full small-cell deployment throughout the network. In this context, we computed the backhaul power consumption of each link in the networks and derived the backhaul energy efficiency expression of the networks. Simulation results show that our proposed network deployment outperforms the network with full small-cell deployment in terms of backhaul power consumption, backhaul energy-efficiency, total power consumption of the tier 2 users and downlink power consumption, thus providing a greener alternative to small-cell deployment.},\n  keywords = {air pollution;cellular radio;energy conservation;energy efficiency improvements;HetNet;device-to-device communications;energy consumption;carbon emissions;economic implication;environmental implication;mobile communication systems;D2D communication;spectral efficiency;mobile communication networks;power consumption reduction;two-tier deployment;full-small-cell deployment;backhaul power consumption;backhaul energy efficiency expression;backhaul energy-efficiency;total power consumption;downlink power consumption;Macrocell networks;Power demand;Downlink;Uplink;Switches;Interference;D2D communication;small-cells;backhaul;power consumption;energy efficiency},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The growth in mobile communications has resulted in a significant increase in energy consumption and carbon emissions, which could have serious economic and environmental implications. Consequently, energy consumption has become a key criterion for the design of future mobile communication systems. Device-to-device (D2D) communication has been shown to improve the spectral efficiency and also reduce the power consumption of mobile communication networks. In this paper, we propose a two-tier deployment of D2D communication within a network to reduce the overall power consumption of the network, and we compare it with full small-cell deployment throughout the network. In this context, we compute the backhaul power consumption of each link in the networks and derive the backhaul energy efficiency expression of the networks. Simulation results show that our proposed network deployment outperforms the network with full small-cell deployment in terms of backhaul power consumption, backhaul energy efficiency, total power consumption of the tier 2 users and downlink power consumption, thus providing a greener alternative to small-cell deployment.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Numerical characterization for optimal designed waveform to multicarrier systems in 5G.\n \n \n \n \n\n\n \n Hraiech, Z.; Siala, M.; and Abdelkefi, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 156-160, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"NumericalPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952010,\n  author = {Z. Hraiech and M. Siala and F. Abdelkefi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Numerical characterization for optimal designed waveform to multicarrier systems in 5G},\n  year = {2014},\n  pages = {156-160},\n  abstract = {High mobility of terminals constitutes a hot topic that is commonly envisaged for the next Fifth Generation (5G) of mobile communication systems. The wireless propagation channel is a time-frequency variant. This aspect can dramatically damage the waveforms orthogonality that is induced in the Orthogonal frequency division multiplexing (OFDM) signal. Consequently, this results in oppressive Inter-Carrier Interference (ICI) and Inter-Symbol Interference (ISI), which leads to performance degradation in OFDM systems. To efficiently overcome these drawbacks, we developed in [1] an adequate algorithm that maximizes the received Signal to Interference plus Noise Ratio (SINR) by optimizing systematically the OFDM waveforms at the Transmitter (TX) and Receiver (RX) sides. In this paper, we go further by investigating the performance evaluation of this algorithm. We start by testing its robustness against time and frequency synchronization errors. Then, as this algorithm banks on an iterative approach to find the optimal waveforms, we study the impact of the waveform initialization on its convergence. The obtained simulation results confirm the efficiency of this algorithm and its robustness compared to the conventional OFDM schemes, which makes it an appropriate good candidate for 5G systems.},\n  keywords = {intercarrier interference;intersymbol interference;iterative methods;mobility management (mobile radio);OFDM modulation;performance evaluation;synchronisation;time-frequency analysis;wireless channels;numerical characterization;optimal designed waveform;multicarrier systems;5G mobile communication systems;next fifth generation mobile communication systems;high terminal mobility;wireless propagation channel;time-frequency variant;orthogonal frequency division multiplexing;OFDM signal;oppressive intercarrier interference;ICI;intersymbol interference;ISI;performance degradation;received signal to interference plus noise ratio;SINR;transmitter;receiver;frequency synchronization errors;time synchronization errors;iterative approach;Interference;OFDM;Signal to noise ratio;Time-frequency analysis;Optimization;Vectors;Algorithm design and analysis;OFDM;Optimazed Waveforms;Inter-Carrier Interference;Inter-Symbol Interference;SINR},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924697.pdf},\n}\n\n
\n
\n\n\n
\n High mobility of terminals is a hot topic commonly envisaged for the next, fifth generation (5G) of mobile communication systems. The wireless propagation channel is time-frequency variant, which can dramatically damage the orthogonality of the waveforms in an Orthogonal Frequency Division Multiplexing (OFDM) signal. This results in severe Inter-Carrier Interference (ICI) and Inter-Symbol Interference (ISI), which degrade the performance of OFDM systems. To efficiently overcome these drawbacks, we developed in [1] an algorithm that maximizes the received Signal to Interference plus Noise Ratio (SINR) by systematically optimizing the OFDM waveforms at the Transmitter (TX) and Receiver (RX) sides. In this paper, we go further by investigating the performance of this algorithm. We start by testing its robustness against time and frequency synchronization errors. Then, as the algorithm relies on an iterative approach to find the optimal waveforms, we study the impact of the waveform initialization on its convergence. The simulation results confirm the efficiency of this algorithm and its robustness compared to conventional OFDM schemes, which makes it a good candidate for 5G systems.\n
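One SINR-maximization step of this kind of waveform optimization can be sketched as a generalized eigenvalue problem: given (assumed) quadratic-form statistics S and Rn for the useful-signal and interference-plus-noise energies, the SINR-optimal waveform is the leading generalized eigenvector. The paper's algorithm iterates such steps between TX and RX; the matrices below are toy stand-ins.

import numpy as np
from scipy.linalg import eigh

def max_sinr_waveform(S, Rn):
    # Maximize w^H S w / w^H Rn w: the optimum is the leading generalized
    # eigenvector of the pencil (S, Rn); eigh returns ascending eigenvalues.
    vals, vecs = eigh(S, Rn)
    w = vecs[:, -1]
    return w / np.linalg.norm(w), vals[-1]

# Toy symmetric positive-definite stand-ins for the signal and
# interference-plus-noise quadratic-form statistics.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)); S = A @ A.T + np.eye(8)
B = rng.standard_normal((8, 8)); Rn = B @ B.T + np.eye(8)
w_opt, sinr = max_sinr_waveform(S, Rn)
print(sinr)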
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimum relay selection for cooperative spectrum sensing and transmission in cognitive networks.\n \n \n \n \n\n\n \n Kartlak, H.; Odabasioglu, N.; and Akan, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 161-165, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OptimumPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952011,\n  author = {H. Kartlak and N. Odabasioglu and A. Akan},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Optimum relay selection for cooperative spectrum sensing and transmission in cognitive networks},\n  year = {2014},\n  pages = {161-165},\n  abstract = {In this paper, cyclostationarity based cooperative spectrum sensing is presented to detect the idle bands and then locate the secondary users into these bands. The aim is to reduce the processing complexity with using a relay for transmission and spectrum sensing. As such, an optimum relay is selected to perform both cooperative communication and cyclostationarity based spectrum sensing. Performance of transmission, probability of detection, and probability of missing are presented via computer simulations. Results show that proposed jointly optimized relay selection scheme provides sufficient performance for both transmission and spectrum sensing.},\n  keywords = {cognitive radio;cooperative communication;optimisation;probability;radio spectrum management;relay networks (telecommunication);signal detection;cooperative spectrum sensing;cognitive networks;cyclostationarity;detection probability;optimized relay selection scheme;cooperative transmission;Relays;Sensors;Cognitive radio;Fading;Signal to noise ratio;Quality of service;Cooperative communication;cognitive network;cooperative communication;relay selection;cyclostationarity based spectrum sensing;cooperative spectrum sensing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925311.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, cyclostationarity-based cooperative spectrum sensing is presented to detect the idle bands and then allocate the secondary users to these bands. The aim is to reduce the processing complexity by using a relay for both transmission and spectrum sensing. To this end, an optimum relay is selected to perform both cooperative communication and cyclostationarity-based spectrum sensing. The transmission performance, probability of detection, and probability of missed detection are evaluated via computer simulations. Results show that the proposed jointly optimized relay selection scheme provides sufficient performance for both transmission and spectrum sensing.\n
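A toy sketch of a joint selection rule: score each relay by the bottleneck SNR of its two hops, weighted by a sensing-quality factor. The weighting is an illustrative assumption, not the paper's exact criterion.

import numpy as np

def select_relay(snr_sr, snr_rd, sensing_quality):
    # End-to-end quality of a two-hop relay link is limited by its weaker hop;
    # weight it by a sensing-quality factor to couple both tasks (toy criterion).
    e2e = np.minimum(np.asarray(snr_sr), np.asarray(snr_rd))
    return int(np.argmax(e2e * np.asarray(sensing_quality)))

print(select_relay([8.0, 12.0, 6.0], [10.0, 5.0, 9.0],
                   sensing_quality=[0.9, 0.95, 0.7]))   # -> 0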
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Performance analysis of the opportunistic multi-relay network with co-channel interference.\n \n \n \n \n\n\n \n Hussein, J.; Ikki, S.; Boussakta, S.; and Tsimenidis, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 166-170, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"PerformancePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952012,\n  author = {J. Hussein and S. Ikki and S. Boussakta and C. Tsimenidis},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Performance analysis of the opportunistic multi-relay network with co-channel interference},\n  year = {2014},\n  pages = {166-170},\n  abstract = {A study of the effect of co-channel interference (CCI) on the performance of opportunistic multi-relay amplify-and-forward cooperative communication network is presented. Precisely, we consider the CCI exists at both relays and destination nodes. Exact equivalent end-to-end signal-to-interference-plus-noise ratio (SINR) is derived. Then, closed-form expressions for both cumulative distribution function (CDF) and probability density function (PDF) of the received SINR at the destination node are obtained. The derived expressions are used to measure the asymptotic outage probability of the system. Numerical results and Matlab simulations are also provided to sustain the correctness of the analytical calculations.},\n  keywords = {amplify and forward communication;cochannel interference;cooperative communication;probability;relay networks (telecommunication);asymptotic outage probability;PDF;probability density function;CDF;cumulative distribution function;SINR;exact equivalent end-to-end signal-to-interference-plus-noise ratio;opportunistic multirelay amplify-and-forward cooperative communication network;CCI;co-channel interference;performance analysis;Relays;Signal to noise ratio;Interchannel interference;Probability density function;Cooperative systems;Fading;opportunistic;cooperative networks;multi-relay;amplify-and-forward;co-channel interference},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925349.pdf},\n}\n\n
\n
\n\n\n
\n A study of the effect of co-channel interference (CCI) on the performance of an opportunistic multi-relay amplify-and-forward cooperative communication network is presented. Specifically, we consider CCI present at both the relays and the destination node. An exact equivalent end-to-end signal-to-interference-plus-noise ratio (SINR) is derived. Then, closed-form expressions for both the cumulative distribution function (CDF) and the probability density function (PDF) of the received SINR at the destination node are obtained. The derived expressions are used to obtain the asymptotic outage probability of the system. Numerical results and Matlab simulations are also provided to confirm the correctness of the analytical calculations.\n
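Closed-form outage expressions of this kind are typically sanity-checked by Monte Carlo simulation. A minimal single-hop sketch with Rayleigh fading on both the desired link and the co-channel interferers (the paper's relay-network setting is more elaborate):

import numpy as np

def outage_probability(snr_db, n_interferers, inr_db, gamma_th, trials=200_000):
    # Monte Carlo outage for a Rayleigh link whose receiver also sees
    # n_interferers Rayleigh-faded co-channel interferers; noise power is 1.
    rng = np.random.default_rng(0)
    signal = 10 ** (snr_db / 10) * rng.exponential(size=trials)
    interf = (10 ** (inr_db / 10)
              * rng.exponential(size=(trials, n_interferers)).sum(axis=1))
    sinr = signal / (1.0 + interf)
    return float(np.mean(sinr < gamma_th))

print(outage_probability(snr_db=15, n_interferers=3, inr_db=0, gamma_th=1.0))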
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A unifying view on energy-efficiency metrics in cognitive radio channels.\n \n \n \n \n\n\n \n Masmoudi, R.; Belmega, E. V.; Fijalkow, I.; and Sellami, N.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 171-175, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952013,\n  author = {R. Masmoudi and E. V. Belmega and I. Fijalkow and N. Sellami},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A unifying view on energy-efficiency metrics in cognitive radio channels},\n  year = {2014},\n  pages = {171-175},\n  abstract = {The objective of this paper is to provide a unifying framework of the most popular energy-efficiency metrics proposed in the wireless communications literature. The target application is a cognitive radio system composed of a secondary user whose goal is to transmit in an optimal energy-efficient manner over several available bands under the interference constraints imposed by the presence of the primary network. It turns out that, the optimal allocation policies maximizing these energy-efficiency metrics can be interpreted as Pareto-optimal points lying on the optimal tradeoff curve between the rate maximization and power minimization bi-criteria joint problem. Using this unifying framework, we provide several interesting parallels and differences between these metrics and the implications w.r.t. the optimal tradeoffs between achievable rates and consumed power.},\n  keywords = {cognitive radio;energy conservation;minimisation;energy-efficiency metrics;cognitive radio channels;wireless communications literature;interference constraints;optimal allocation policies;Pareto-optimal points;rate maximization;power minimization bi-criteria joint problem;Energy efficiency;Measurement;Cognitive radio;Abstracts;Minimization;Bridges;Energy-efficiency metrics;Cognitive radio;Multi-criteria optimization;Pareto optimality;Tradeoffs},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925419.pdf},\n}\n\n
\n
\n\n\n
\n The objective of this paper is to provide a unifying framework for the most popular energy-efficiency metrics proposed in the wireless communications literature. The target application is a cognitive radio system composed of a secondary user whose goal is to transmit in an optimal energy-efficient manner over several available bands under the interference constraints imposed by the presence of the primary network. It turns out that the optimal allocation policies maximizing these energy-efficiency metrics can be interpreted as Pareto-optimal points lying on the optimal tradeoff curve of the bi-criteria problem that jointly considers rate maximization and power minimization. Using this unifying framework, we provide several interesting parallels and differences between these metrics and their implications with respect to the optimal tradeoffs between achievable rates and consumed power.\n
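The tradeoff being unified can be visualized with a short power sweep: the rate grows with transmit power, while the ratio rate/(power + circuit power) peaks at an interior Pareto point. All constants below are toy assumptions.

import numpy as np

p = np.linspace(0.01, 2.0, 400)     # transmit power sweep (W)
g, n0, pc = 1.5, 0.1, 0.2           # toy channel gain, noise power, circuit power
rate = np.log2(1 + g * p / n0)      # achievable rate (bit/s/Hz)
ee = rate / (p + pc)                # an energy-efficiency metric (rate per Watt)
p_star = p[np.argmax(ee)]           # interior Pareto point: neither max-rate
print(p_star)                       # (p -> 2.0) nor min-power (p -> 0)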
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Relevance of Dirichlet process mixtures for modeling interferences in underlay cognitive radio.\n \n \n \n \n\n\n \n Pereira, V.; Ferré, G.; Giremus, A.; and Grivel, E.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 176-180, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"RelevancePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952014,\n  author = {V. Pereira and G. Ferré and A. Giremus and E. Grivel},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Relevance of Dirichlet process mixtures for modeling interferences in underlay cognitive radio},\n  year = {2014},\n  pages = {176-180},\n  abstract = {In the field of underlay cognitive radio communications, the signal transmitted by the secondary user is disturbed by incoming signals from primary users. Thus, it is necessary to compensate for this secondary-link degradation at the receiver level. In this paper we use Dirichlet process mixtures (DPM) to relax a priori assumptions on the characteristics of the primary user-induced interference. DPM allow us to model the probability density function of the interference. The latter is estimated jointly with the symbols and the channel of the secondary link by using marginalized particle filtering. Our approach makes it possible to improve the symbol error rate compared with an algorithm that simply models the interference as a Gaussian noise.},\n  keywords = {cognitive radio;error statistics;mixture models;particle filtering (numerical methods);radiofrequency interference;symbol error rate;marginalized particle filtering;user induced interference;a-priori assumptions;secondary link degradation;secondary user;underlay cognitive radio;interference modeling;Dirichlet process mixtures;OFDM;Interference;Cognitive radio;Bayes methods;Vectors;Estimation;Receivers;Dirichlet Process;Cognitive radio;Particle filtering},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925423.pdf},\n}\n\n
\n
\n\n\n
\n In the field of underlay cognitive radio communications, the signal transmitted by the secondary user is disturbed by incoming signals from primary users. It is therefore necessary to compensate for this secondary-link degradation at the receiver. In this paper we use Dirichlet process mixtures (DPM) to relax a priori assumptions on the characteristics of the primary-user-induced interference. DPM allow us to model the probability density function of the interference, which is estimated jointly with the symbols and the channel of the secondary link using marginalized particle filtering. Our approach improves the symbol error rate compared with an algorithm that simply models the interference as Gaussian noise.\n
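For readers unfamiliar with the prior, the following is a small stick-breaking sketch of the kind of Dirichlet process mixture used to model the interference density; the concentration parameter, truncation level and base-measure choices are illustrative assumptions, not values from the paper:

```python
# Truncated stick-breaking construction of a Dirichlet process mixture:
# interference density ~ sum_k w_k * N(mu_k, sigma_k^2), with weights w_k
# built from Beta(1, alpha) stick breaks. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
alpha, K = 1.0, 50                       # DP concentration, truncation level
v = rng.beta(1.0, alpha, size=K)         # stick-breaking proportions
w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))  # mixture weights
mu = rng.normal(0.0, 2.0, size=K)        # component means (base measure)
sigma = rng.gamma(2.0, 0.5, size=K)      # component std devs (base measure)

# Draw interference samples from the resulting mixture
comp = rng.choice(K, size=1000, p=w / w.sum())
samples = rng.normal(mu[comp], sigma[comp])
print(samples[:5])
```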
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Channel adaptive pulse shaping for OQAM-OFDM systems.\n \n \n \n \n\n\n \n Fuhrwerk, M.; Peissig, J.; and Schellmann, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 181-185, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ChannelPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952015,\n  author = {M. Fuhrwerk and J. Peissig and M. Schellmann},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Channel adaptive pulse shaping for OQAM-OFDM systems},\n  year = {2014},\n  pages = {181-185},\n  abstract = {Theory predicts a gain in transmission performance, when adapting pulse shapes of Offset Quadrature Amplitude Modulation (OQAM) Orthogonal Frequency Division Multiplexing (OFDM) systems to delay and Doppler spread in doubly-dispersive channels. Here we investigate the quantitative gains in reconstruction quality and bit error rate (BER) with respect to subcarrier spacing and channel properties. It is shown that it is possible to reduce the uncoded BER by a factor of more than two and the coded BER by a factor of at least four, utilizing only two different pulse shapes. The simulation results show that channel adaptive pulse shaping for OQAM-OFDM systems is a promising concept for future mobile communication systems.},\n  keywords = {error statistics;OFDM modulation;pulse shaping;quadrature amplitude modulation;channel adaptive pulse shaping;OQAM-OFDM systems;offset quadrature amplitude modulation orthogonal frequency division multiplexing systems;Doppler spread;doubly-dispersive channels;bit error rate;BER;subcarrier spacing;reconstruction quality;mobile communication systems;Shape;Bit error rate;Doppler effect;Interference;Delays;Prototypes;OFDM;OQAM-OFDM;Pulse shaping;Prototype filter;FBMC},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925445.pdf},\n}\n\n
\n
\n\n\n
\n Theory predicts a gain in transmission performance when adapting the pulse shapes of Offset Quadrature Amplitude Modulation (OQAM) Orthogonal Frequency Division Multiplexing (OFDM) systems to the delay and Doppler spread of doubly-dispersive channels. Here we investigate the quantitative gains in reconstruction quality and bit error rate (BER) with respect to subcarrier spacing and channel properties. It is shown that it is possible to reduce the uncoded BER by a factor of more than two and the coded BER by a factor of at least four, utilizing only two different pulse shapes. The simulation results show that channel adaptive pulse shaping for OQAM-OFDM systems is a promising concept for future mobile communication systems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Augmentation and Integrity Monitoring Network and EGNOS performance comparison for train positioning.\n \n \n \n \n\n\n \n Salvatori, P.; Neri, A.; Stallo, C.; Palma, V.; Coluccia, A.; and Rispoli, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 186-190, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AugmentationPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952016,\n  author = {P. Salvatori and A. Neri and C. Stallo and V. Palma and A. Coluccia and F. Rispoli},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Augmentation and Integrity Monitoring Network and EGNOS performance comparison for train positioning},\n  year = {2014},\n  pages = {186-190},\n  abstract = {The paper describes the performance comparison between EGNOS system and an Augmentation & Integrity Monitoring Network (AIMN) Location Determination System (LDS) designed for train positioning in terms of PVT accuracy and integrity information. The proposed work is inserted in the scenario of introduction and application of space technologies based on the ERTMS architecture. It foresees to include the EGNOS-Galileo infrastructures in the train control system, with the aim at improving performance, enhancing safety and reducing the investments on the railways circuitry and its maintenance. The performance results will be shown, based on a campaign test acquired on a ring-shaped highway (named Grande Raccordo Anulare (GRA)) around Rome (Italy) to simulate movement of a train on a generic track.},\n  keywords = {railway safety;satellite navigation;EGNOS performance;train positioning;augmentation and integrity monitoring network;AIMN;location determination system;LDS;PVT accuracy;integrity information;space technology;ERTMS architecture;EGNOS-Galileo infrastructures;train control system;safety enhancement;investment reduction;ring-shaped highway;European geostationary navigation overlay service;European railway train management system;global navigation satellite system;GNSS;Abstracts;Receivers;Indexes;Navigation;Computer architecture;Rail transportation;Switches;Global Navigation Satellite System (GNSS);Signal In Space (SIS);European Geostationary Navigation Overlay Service (EGNOS);European Railways Train Management System (ERTMS);Position;Velocity and Time (PVT) estimation;Safety Integrity Level (SIL)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925007.pdf},\n}\n\n
\n
\n\n\n
\n The paper describes a performance comparison, in terms of PVT accuracy and integrity information, between the EGNOS system and an Augmentation & Integrity Monitoring Network (AIMN) Location Determination System (LDS) designed for train positioning. The proposed work fits in the scenario of introducing space technologies into the ERTMS architecture. It foresees including the EGNOS-Galileo infrastructures in the train control system, with the aim of improving performance, enhancing safety and reducing investments in railway circuitry and its maintenance. The performance results are based on a test campaign acquired on a ring-shaped highway (the Grande Raccordo Anulare (GRA)) around Rome, Italy, used to simulate the movement of a train on a generic track.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Parallel performance and energy efficiency of modern video encoders on multithreaded architectures.\n \n \n \n \n\n\n \n Rodríguez-Sánchez, R.; Igual, F. D.; Martínez, J. L.; Mayo, R.; and Quintana-Ortí, E. S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 191-195, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ParallelPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952017,\n  author = {R. Rodríguez-Sánchez and F. D. Igual and J. L. Martínez and R. Mayo and E. S. Quintana-Ortí},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Parallel performance and energy efficiency of modern video encoders on multithreaded architectures},\n  year = {2014},\n  pages = {191-195},\n  abstract = {In this paper we evaluate four mainstream video encoders: H.264/MPEG-4 Advanced Video Coding, Google's VP8, High Efficiency Video Coding, and Google's VP9, studying conventional figures-of-merit such as performance in terms of encoded frames per second, and encoding efficiency in both PSNR and bit-rate of the encoded video sequences. Additionally, two platforms equipped with a large number of cores, representative of current multicore architectures for high-end servers, and equipped with a wattmeter allow us to assess the quality of these video encoders in terms of parallel scalability and energy consumption, which is well-founded given the significant levels of thread concurrency and the impact of the power wall in todays' multicore processors.},\n  keywords = {image sequences;multi-threading;video coding;multicore processors;thread concurrency;energy consumption;parallel scalability;video encoder quality assessment;wattmeter;high-end servers;multicore architecture;encoded video sequences;PSNR;encoding efficiency;Google VP9;high-efficiency video coding;Google VP8;H.264-MPEG-4 advanced video coding;mainstream video encoders;multithreaded architectures;energy efficiency;parallel performance;Video coding;Encoding;PSNR;Multicore processing;Codecs;Servers;Standards;Video encoding;high performance computing;energy consumption;multicore processors},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917281.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we evaluate four mainstream video encoders: H.264/MPEG-4 Advanced Video Coding, Google's VP8, High Efficiency Video Coding, and Google's VP9, studying conventional figures of merit such as performance in terms of encoded frames per second, and encoding efficiency in terms of both PSNR and bit-rate of the encoded video sequences. Additionally, two platforms equipped with a large number of cores and with a wattmeter, representative of current multicore architectures for high-end servers, allow us to assess the quality of these video encoders in terms of parallel scalability and energy consumption, which is well-founded given the significant levels of thread concurrency and the impact of the power wall in today's multicore processors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Coefficient-wise intra prediction for DCT-based image coding.\n \n \n \n \n\n\n \n Matsuda, I.; Kameda, Y.; and Itoh, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 196-200, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Coefficient-wisePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952018,\n  author = {I. Matsuda and Y. Kameda and S. Itoh},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Coefficient-wise intra prediction for DCT-based image coding},\n  year = {2014},\n  pages = {196-200},\n  abstract = {This paper proposes an adaptive intra prediction method for DCT-based image coding. In this method, predicted values in each block are generated in spatial domain like the conventional intra prediction methods. On the other hand, prediction residuals to be encoded are separately calculated in DCT domain, i.e. differences between the original and predicted values are calculated after performing DCT. Such a prediction framework allows us to change the coding process from block-wise order to coefficient-wise one. When the coefficient-wise order is adopted, a block to be predicted is almost always surrounded by partially reconstructed image signals, and therefore, efficient interpolative prediction can be performed. Simulation results indicate that the proposed method is beneficial for removing inter-block correlations of high-frequency components.},\n  keywords = {discrete cosine transforms;image coding;image reconstruction;interpolation;prediction theory;coefficient-wise adaptive intra prediction method;DCT-based image coding;spatial domain;prediction residual;block-wise order;image reconstruction;interpolative prediction;high-frequency component inter-block correlation;discrete cosine transform;Abstracts;Switches;Encoding;Image reconstruction;Discrete cosine transforms;Quantization (signal);Image coding;DCT;intra prediction;progressive JPEG;linear interpolation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924889.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes an adaptive intra prediction method for DCT-based image coding. In this method, predicted values in each block are generated in the spatial domain, as in conventional intra prediction methods. The prediction residuals to be encoded, however, are calculated in the DCT domain, i.e., the differences between the original and predicted values are computed after performing the DCT. Such a prediction framework allows us to change the coding process from a block-wise order to a coefficient-wise one. When the coefficient-wise order is adopted, a block to be predicted is almost always surrounded by partially reconstructed image signals, and therefore efficient interpolative prediction can be performed. Simulation results indicate that the proposed method is beneficial for removing inter-block correlations of high-frequency components.\n
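A minimal sketch of the residual definition described above, assuming spatial-domain prediction followed by a coefficient-wise difference of DCT coefficients (the mean predictor and block content are dummy stand-ins, not the paper's predictor):

```python
# Sketch of the residual definition: predict in the spatial domain,
# but form the residual between DCT coefficients (dummy data).
import numpy as np
from scipy.fft import dctn

block = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float)
pred  = np.full((8, 8), block.mean())    # toy spatial-domain prediction

# Residual computed coefficient-wise in the DCT domain:
resid_dct = dctn(block, norm='ortho') - dctn(pred, norm='ortho')
# Each coefficient can now be encoded in its own (coefficient-wise)
# scan order instead of a block-wise order.
print(resid_dct[0, 0])  # DC residual is ~0 for a mean predictor
```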
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast motion estimation discarding low-impact fractional blocks.\n \n \n \n \n\n\n \n Blasi, S. G.; Zupancic, I.; and Izquierdo, E.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 201-205, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"FastPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952019,\n  author = {S. G. Blasi and I. Zupancic and E. Izquierdo},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Fast motion estimation discarding low-impact fractional blocks},\n  year = {2014},\n  pages = {201-205},\n  abstract = {Sub-pixel motion estimation is used in most modern video coding schemes to improve the outcomes of motion estimation. The reference frame is interpolated and motion vectors are refined with fractional components to reduce the prediction error. Due to the high complexity of these steps, sub-pixel motion estimation can be very demanding in terms of encoding time and resources. A method to reduce complexity of motion estimation schemes is proposed in this paper based on adaptive precision. A parameter is computed to geometrically characterise each block and select whether fractional refinements are likely to improve coding efficiency or not. The selection is based on an estimate of the actual impact of fractional refinements on the coding performance. The method was implemented within the H.264/AVC standard and is shown achieving considerable time savings with respect to conventional schemes, while ensuring that the performance losses are kept below acceptable limits.},\n  keywords = {interpolation;motion estimation;video coding;fast motion estimation scheme;low-impact fractional blocks;sub-pixel motion estimation;video coding schemes;reference frame interpolation;motion vectors;encoding time;adaptive precision;fractional refinements;coding efficiency;H.264/AVC standard;Encoding;Video coding;Motion estimation;Accuracy;Computational complexity;Standards;Video coding;AVC;motion estimation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925505.pdf},\n}\n\n
\n
\n\n\n
\n Sub-pixel motion estimation is used in most modern video coding schemes to improve the outcomes of motion estimation. The reference frame is interpolated and motion vectors are refined with fractional components to reduce the prediction error. Due to the high complexity of these steps, sub-pixel motion estimation can be very demanding in terms of encoding time and resources. A method to reduce the complexity of motion estimation schemes, based on adaptive precision, is proposed in this paper. A parameter is computed to geometrically characterise each block and to select whether fractional refinements are likely to improve coding efficiency or not. The selection is based on an estimate of the actual impact of fractional refinements on the coding performance. The method was implemented within the H.264/AVC standard and is shown to achieve considerable time savings with respect to conventional schemes, while ensuring that the performance losses are kept below acceptable limits.\n
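The abstract does not spell out the geometric block parameter, so the sketch below uses gradient energy as a hypothetical stand-in for gating the fractional refinement; the threshold and block data are illustrative, not the paper's criterion:

```python
# Illustrative gate for sub-pixel refinement: blocks that are nearly flat
# (low gradient energy) are assumed to gain little from fractional motion
# vectors, so the half/quarter-pel search is skipped for them. The
# gradient-energy measure stands in for the paper's geometric parameter.
import numpy as np

def skip_fractional(block: np.ndarray, thresh: float = 50.0) -> bool:
    gy, gx = np.gradient(block.astype(float))
    activity = np.mean(gx**2 + gy**2)     # geometric "texture" measure
    return activity < thresh              # True -> keep integer-pel MV only

block = np.random.default_rng(2).integers(0, 256, (16, 16))
print("skip sub-pel search:", skip_fractional(block))
```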
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cost function optimization and its hardware design for the Sample Adaptive Offset of HEVC standard.\n \n \n \n \n\n\n \n Rediess, F.; Conceição, R.; Zatt, B.; Porto, M.; and Agostini, L.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 206-210, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"CostPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952020,\n  author = {F. Rediess and R. Conceição and B. Zatt and M. Porto and L. Agostini},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Cost function optimization and its hardware design for the Sample Adaptive Offset of HEVC standard},\n  year = {2014},\n  pages = {206-210},\n  abstract = {This work presents a cost function optimization for the internal decision of the HEVC Sample Adaptive Offset (SAO) filter. The optimization approach is focused on an efficient hardware implementation and explores two critical points. The first is the use of fixed-point data instead of floating-point data, and the second is the reduction of the number of full multipliers and dividers. The simulation results show that these proposals have no significant impact on BD-rate measurements. Based on these two hardware-friendly optimizations, we propose a hardware design for the cost function module. The FPGA synthesis results show that the proposed architecture achieves 521 MHz and is able to process UHD 8K@120 fps operating at 47 MHz.},\n  keywords = {field programmable gate arrays;optimisation;video coding;cost function optimization;hardware design;adaptive offset;HEVC standard;HEVC sample adaptive offset filter;optimization approach;float-point data;BD-rate measurements;hardware-friendly optimizations;cost function module;FPGA synthesis;frequency 521 MHz;frequency 47 MHz;Hardware;Computer architecture;Cost function;Software;Mathematical model;Equations;Sample Adaptive Offset;HEVC;Video Coding;Hardware Design},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925529.pdf},\n}\n\n
\n
\n\n\n
\n This work presents a cost function optimization for the internal decision of the HEVC Sample Adaptive Offset (SAO) filter. The optimization approach is focused on an efficient hardware implementation and explores two critical points. The first is the use of fixed-point data instead of floating-point data, and the second is the reduction of the number of full multipliers and dividers. The simulation results show that these proposals have no significant impact on BD-rate measurements. Based on these two hardware-friendly optimizations, we propose a hardware design for the cost function module. The FPGA synthesis results show that the proposed architecture achieves 521 MHz and is able to process UHD 8K@120 fps operating at 47 MHz.\n
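A minimal sketch of the fixed-point idea, assuming a standard R-D cost J = D + lambda*R for the SAO decision; the Q8 scaling and the lambda value are illustrative assumptions, not the paper's actual word lengths:

```python
# Fixed-point flavor of an R-D cost J = D + lambda*R for SAO decisions:
# lambda is pre-scaled to an integer Q8 value so the per-candidate cost
# needs no floating-point multiplies or divisions (constants illustrative).
LAMBDA_Q8 = int(round(13.7 * 256))        # float lambda -> Q8 fixed point

def sao_cost_q8(distortion: int, rate_bits: int) -> int:
    # (distortion << 8) keeps both terms in the same Q8 scale;
    # the final >> 8 recovers an integer-valued cost.
    return (distortion << 8) + LAMBDA_Q8 * rate_bits

print(sao_cost_q8(distortion=1200, rate_bits=9) >> 8)
```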
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Clustering-based methods for fast epitome generation.\n \n \n \n \n\n\n \n Alain, M.; Guillemot, C.; Thoreau, D.; and Guillotel, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 211-215, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Clustering-basedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952021,\n  author = {M. Alain and C. Guillemot and D. Thoreau and P. Guillotel},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Clustering-based methods for fast epitome generation},\n  year = {2014},\n  pages = {211-215},\n  abstract = {This paper deals with epitome generation, mainly dedicated here to image coding applications. Existing approaches are known to be memory and time consuming due to exhaustive self-similarities search within the image for each non-overlapping block. We propose here a novel approach for epitome construction that first groups close patches together. In a second time the self-similarities search is performed for each group. By limiting the number of exhaustive searches we limit the memory occupation and the processing time. Results show that interesting complexity reduction can be achieved while keeping a good epitome quality (down to 18.08 % of the original memory occupation and 41.39%of the original processing time).},\n  keywords = {image coding;pattern clustering;clustering-based methods;fast epitome generation;image coding;exhaustive self-similarity search;memory occupation;complexity reduction;nonoverlapping block;epitome construction;Image reconstruction;Complexity theory;Bismuth;PSNR;Image coding;Approximation methods;Cities and towns;Epitome;clustering;image coding},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926207.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with epitome generation, mainly dedicated here to image coding applications. Existing approaches are known to be memory- and time-consuming due to the exhaustive self-similarity search performed within the image for each non-overlapping block. We propose a novel approach for epitome construction that first groups similar patches together; the self-similarity search is then performed within each group. By limiting the number of exhaustive searches we limit the memory occupation and the processing time. Results show that an interesting complexity reduction can be achieved while keeping a good epitome quality (down to 18.08% of the original memory occupation and 41.39% of the original processing time).\n
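A sketch of the grouping idea, assuming k-means as the clustering step (the paper's exact clustering method, patch size and cluster count are not specified in the abstract):

```python
# Group patches first, then restrict the self-similarity search to each
# group, in the spirit of the clustering-based epitome approach. K-means
# is an assumed clustering choice; sizes and counts are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
patches = rng.random((500, 64))               # 500 flattened 8x8 patches

labels = KMeans(n_clusters=16, n_init=10, random_state=0).fit_predict(patches)

# Exhaustive matching only inside each cluster instead of over all pairs:
for c in range(16):
    group = patches[labels == c]
    d = ((group[:, None, :] - group[None, :, :]) ** 2).sum(-1)  # intra-cluster SSD
    # ... select the best matches within `group` from `d` ...
```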
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Quality assessment of chromatic variations: A study of Full-Reference and No-Reference Metrics.\n \n \n \n \n\n\n \n Bernardo, M. V.; Pinheiro, A. M. G.; Fiadeiro, P. T.; and Pereira, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 216-220, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"QualityPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952022,\n  author = {M. V. Bernardo and A. M. G. Pinheiro and P. T. Fiadeiro and M. Pereira},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Quality assessment of chromatic variations: A study of Full-Reference and No-Reference Metrics},\n  year = {2014},\n  pages = {216-220},\n  abstract = {This work describes a comparative study on the ability of Full-Reference versus No-Reference quality metrics to measure the Quality of Experience created by images that suffer chromatic variations. Considering this, some well known Full-Reference (PSNR, UQI, MSSIM) and No-Reference (GM, FTM, RTBM) will be compared with the MOS results. Although the quality metrics considered are usually applied to the luminance component, in this study they are applied to the Y, Cb, Cr components separately. The result of the three components average metrics was also considered, because only the image chromatic components have been changed resulting in similar values of luminance. The correlation estimates show that the Full-Reference Metrics namely the MSSIM and the UQI provide a good representation of the subjective results. Moreover, the studied No-Reference metrics also provide an acceptable representation, although their reliability is less effective.},\n  keywords = {image processing;quality of experience;chromatic variation quality assessment;no-reference quality metrics;full-reference quality metrics;PSNR;UQI;MSSIM;GM;FTM;RTBM;MOS;luminance component;Cr components;Cb components;Y components;image chromatic components;component average metrics;reliability;peak signal-to-noise ratio;universal image quality index;mean structural similarity index;gradient metric;frequency threshold metric;Riemannian tensor based metric;mean opinion score;Measurement;PSNR;Image color analysis;Image quality;Quality assessment;Correlation;Visualization;Quality of Experience;Mean Opinion Score;Quality Metrics;Image Quality},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923433.pdf},\n}\n\n
\n
\n\n\n
\n This work describes a comparative study of the ability of Full-Reference versus No-Reference quality metrics to measure the Quality of Experience created by images that suffer chromatic variations. To this end, some well-known Full-Reference (PSNR, UQI, MSSIM) and No-Reference (GM, FTM, RTBM) metrics are compared with the MOS results. Although the quality metrics considered are usually applied to the luminance component, in this study they are applied to the Y, Cb and Cr components separately. The average of the three component metrics was also considered, because only the image chromatic components have been changed, resulting in similar values of luminance. The correlation estimates show that the Full-Reference metrics, namely the MSSIM and the UQI, provide a good representation of the subjective results. The studied No-Reference metrics also provide an acceptable representation, although they are less reliable.\n
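A small sketch of applying a full-reference metric to the Y, Cb and Cr components separately and averaging the three scores, as in the study; PSNR is used here because its formula is standard, and the image data are dummy planes:

```python
# Per-component PSNR on Y, Cb, Cr plus the three-component average
# (dummy 8-bit planes stand in for the real test images).
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(4)
ref  = {c: rng.integers(0, 256, (64, 64)) for c in ('Y', 'Cb', 'Cr')}
test = {c: np.clip(v + rng.normal(0, 3, v.shape), 0, 255) for c, v in ref.items()}

scores = {c: psnr(ref[c], test[c]) for c in ref}
scores['avg'] = sum(scores.values()) / 3.0    # three-component average metric
print(scores)
```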
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Chromatic variations on 3D video and QoE.\n \n \n \n \n\n\n \n Piedade, D.; Bernardoy, M. V.; Fiadeiro, P. T.; Pinheiro, A. M. G.; and Pereira, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 221-225, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ChromaticPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952023,\n  author = {D. Piedade and M. V. Bernardoy and P. T. Fiadeiro and A. M. G. Pinheiro and M. Pereira},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Chromatic variations on 3D video and QoE},\n  year = {2014},\n  pages = {221-225},\n  abstract = {In this paper a study on the perceived quality that results of chromatic variations in 3D video is reported. The testing videos were represented in the CIE 1976 (L*a*b*) color space, and their colors were initially subdivided into clusters based on their similarity. Predefined chromatic errors were applied to these color clusters. These videos were shown to subjects that were asked to rank their quality based on the colors naturalness. The Mean Opinion Scores were computed and the sensibility to chromatic changes on 3D video was quantified. Moreover, attention maps were obtained and a short study on the changes of the visual saliency in the presence of these chromatic variations is also reported.},\n  keywords = {image colour analysis;quality of experience;video signal processing;chromatic variations;3D video;quality of experience;QoE;perceived quality;CIE 1976 color space;chromatic errors;color clusters;colors naturalness;mean opinion scores;attention maps;visual saliency;Image color analysis;Three-dimensional displays;Streaming media;Quality assessment;Color;Video sequences;Video recording;Quality of Experience;3D Video;Mean Opinion Score;Visual attention},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923655.pdf},\n}\n\n
\n
\n\n\n
\n In this paper a study of the perceived quality resulting from chromatic variations in 3D video is reported. The test videos were represented in the CIE 1976 (L*a*b*) color space, and their colors were initially subdivided into clusters based on their similarity. Predefined chromatic errors were applied to these color clusters. The videos were shown to subjects who were asked to rank their quality based on the naturalness of the colors. The Mean Opinion Scores were computed and the sensitivity to chromatic changes in 3D video was quantified. Moreover, attention maps were obtained, and a short study of the changes in visual saliency in the presence of these chromatic variations is also reported.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Color information in a model of saliency.\n \n \n \n \n\n\n \n Hamel, S.; Guyader, N.; Pellerin, D.; and Houzet, D.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 226-230, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ColorPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952024,\n  author = {S. Hamel and N. Guyader and D. Pellerin and D. Houzet},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Color information in a model of saliency},\n  year = {2014},\n  pages = {226-230},\n  abstract = {Bottom-up saliency models have been developed to predict the location of gaze according to the low level features of visual scenes, such as intensity, color, frequency and motion. We investigate in this paper the contribution of color features in computing the bottom-up saliency. We incorporated a chrominance pathway to a luminance-based model (Marat et al. [1]). We evaluated the performance of the model with and without chrominance pathway. We added an efficient multi-GPU implementation of the chrominance pathway to the parallel implementation of the luminance-based model proposed by Rahman et al. [2], preserving real time solution. Results show that color information improves the performance of the saliency model in predicting eye positions.},\n  keywords = {gaze tracking;graphics processing units;image colour analysis;real-time systems;color information;bottom-up saliency;gaze location;visual scenes;color features;chrominance pathway;luminance-based model;multiGPU implementation;real time solution;eye positions;Image color analysis;Mathematical model;Visualization;Computational modeling;Graphics processing units;Gray-scale;Predictive models;color information;visual saliency;video;GPU},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925829.pdf},\n}\n\n
\n
\n\n\n
\n Bottom-up saliency models have been developed to predict the location of gaze according to low-level features of visual scenes, such as intensity, color, frequency and motion. In this paper we investigate the contribution of color features to computing bottom-up saliency. We incorporated a chrominance pathway into a luminance-based model (Marat et al. [1]) and evaluated the performance of the model with and without the chrominance pathway. We added an efficient multi-GPU implementation of the chrominance pathway to the parallel implementation of the luminance-based model proposed by Rahman et al. [2], preserving the real-time solution. Results show that color information improves the performance of the saliency model in predicting eye positions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Total variation reconstruction for compressive sensing using nonlocal Lagrangian multiplier.\n \n \n \n \n\n\n \n Van Trinh, C.; Quoc Dinh, K.; Anh Nguyen, V.; and Jeon, B.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 231-235, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"TotalPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952025,\n  author = {C. {Van Trinh} and K. {Quoc Dinh} and V. {Anh Nguyen} and B. Jeon},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Total variation reconstruction for compressive sensing using nonlocal Lagrangian multiplier},\n  year = {2014},\n  pages = {231-235},\n  abstract = {Total variation has proved its effectiveness in solving inverse problems for compressive sensing. Besides, the nonlocal means filter used as regularization preserves textures well in recovered images, but it is quite complex to implement. In this paper, based on existence of both noise and image information in the Lagrangian multiplier, we propose a simple method called nonlocal Lagrangian multiplier (NLLM) in order to reduce noise while boosting useful image information. Experimental results show that the proposed NLLM is superior both in subjective and objective qualities of recovered image over other recovery algorithms.},\n  keywords = {compressed sensing;filtering theory;image denoising;image reconstruction;image texture;compressive sensing;nonlocal Lagrangian multiplier;nonlocal mean filter;image recovery;image texture;image information;NLLM;noise reduction;total variation reconstruction;TV;Image reconstruction;Optimization;Compressed sensing;PSNR;Information filtering;Compressive sensing;Total Variation;Nonlocal Means Filter;Nonlocal Lagrangian multiplier},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569908993.pdf},\n}\n\n
\n
\n\n\n
\n Total variation has proved its effectiveness in solving inverse problems in compressive sensing. The nonlocal means filter used as a regularizer preserves textures well in recovered images, but it is quite complex to implement. In this paper, based on the presence of both noise and image information in the Lagrangian multiplier, we propose a simple method called the nonlocal Lagrangian multiplier (NLLM) that reduces noise while boosting useful image information. Experimental results show that the proposed NLLM is superior, in both subjective and objective quality of the recovered image, to other recovery algorithms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Noise robust local phase coherence based method for image sharpness assessment.\n \n \n \n\n\n \n Seršić, D.; and Sović, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 236-240, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952026,\n  author = {D. Seršić and A. Sović},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Noise robust local phase coherence based method for image sharpness assessment},\n  year = {2014},\n  pages = {236-240},\n  abstract = {Image sharpness assessment is a very important issue in image acquisition and processing. Novel approaches in no-reference image sharpness assessment methods are based on local phase coherence (LPC), rather than edge or frequency content analysis. It has been shown that the LPC based methods are closer to human observer assessments. In this paper, we propose carefully designed complex wavelets that provide a good tool for the local phase estimation. Moreover, we take a special care of noise. We apply thresholding in the wavelet domain and merge several estimates to achieve statistical robustness in the presence of noise. It results in the sharpness index that over-performs previously reported methods.},\n  keywords = {image processing;phase estimation;wavelet transforms;noise robust local phase coherence based method;image acquisition;image processing;no-reference image sharpness assessment methods;LPC based methods;frequency content analysis;edge content analysis;human observer assessments;complex wavelets;local phase estimation;statistical robustness;wavelet domain;sharpness index;Image edge detection;Coherence;Noise;Noise measurement;Wavelet transforms;Indexes;Robustness;complex wavelet transform;local phase coherence;image sharpness assessment;image blur},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Image sharpness assessment is a very important issue in image acquisition and processing. Novel no-reference image sharpness assessment methods are based on local phase coherence (LPC) rather than edge or frequency content analysis, and it has been shown that LPC-based methods are closer to human observer assessments. In this paper, we propose carefully designed complex wavelets that provide a good tool for local phase estimation. Moreover, we take special care of noise: we apply thresholding in the wavelet domain and merge several estimates to achieve statistical robustness in the presence of noise. The result is a sharpness index that outperforms previously reported methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Entropy-constrained dense disparity map estimation algorithm for stereoscopic images.\n \n \n \n \n\n\n \n Kadaikar, A.; Mokraoui, A.; and Dauphin, G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 241-245, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Entropy-constrainedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952027,\n  author = {A. Kadaikar and A. Mokraoui and G. Dauphin},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Entropy-constrained dense disparity map estimation algorithm for stereoscopic images},\n  year = {2014},\n  pages = {241-245},\n  abstract = {This paper deals with the stereo matching problem to estimate a dense disparity map. Traditionally a matching metric such as mean square error distortion is adopted to select the best matches associated with disparities. However several disparities related to a given pixel may satisfy the distortion criterion although quite often the choice that is made does not necessarily meet the coding objective. An entropy-constrained disparity optimization approach is developed where the traditional matching metric is replaced by a joint entropy-distortion metric so that the selected disparities reduce not only the reconstructed image distortion but also the entropy disparity. The algorithm sequentially builds a tree avoiding a full search and ensuring good rate-distortion performance. At each tree depth, the M-best retained paths are extended to build new paths to which are assigned entropy-distortion metrics. Simulations show that our algorithm provides better results than dynamic programming algorithm.},\n  keywords = {entropy;image matching;image reconstruction;mean square error methods;rate distortion theory;stereo image processing;M-best retained paths;rate-distortion performance;reconstructed image distortion;matching metric;entropy-constrained disparity optimization;distortion criterion;mean square error distortion;stereo matching problem;entropy-constrained dense disparity map estimation;stereoscopic images;Stereoscopic images;matching;disparity;entropy;optimization},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925545.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the stereo matching problem of estimating a dense disparity map. Traditionally, a matching metric such as the mean square error distortion is adopted to select the best matches associated with disparities. However, several disparities related to a given pixel may satisfy the distortion criterion, and quite often the choice that is made does not meet the coding objective. An entropy-constrained disparity optimization approach is developed in which the traditional matching metric is replaced by a joint entropy-distortion metric, so that the selected disparities reduce not only the reconstructed image distortion but also the disparity entropy. The algorithm sequentially builds a tree, avoiding a full search while ensuring good rate-distortion performance. At each tree depth, the M best retained paths are extended to build new paths to which entropy-distortion metrics are assigned. Simulations show that our algorithm provides better results than a dynamic programming algorithm.\n
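A compact beam-search sketch of the M-best tree construction with a joint entropy-distortion path cost; the distortion values, disparity prior and lambda below are illustrative stand-ins, not the paper's actual models:

```python
# Beam-search flavor of the M-best tree: each depth extends the M best
# partial paths with candidate disparities, scored by a joint cost
# D + lambda * (-log2 p(d)). All model numbers are illustrative.
import numpy as np

rng = np.random.default_rng(5)
M, lam, depths, cands = 4, 8.0, 10, [0, 1, 2, 3]
p = np.array([0.4, 0.3, 0.2, 0.1])           # assumed disparity prior

beams = [([], 0.0)]                           # (path, accumulated cost)
for _ in range(depths):
    ext = []
    for path, cost in beams:
        for d in cands:
            dist = rng.uniform(0, 100)        # stand-in matching distortion
            ext.append((path + [d], cost + dist + lam * -np.log2(p[d])))
    beams = sorted(ext, key=lambda t: t[1])[:M]   # keep the M best paths

print("best disparity path:", beams[0][0])
```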
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Motion estimation for Super-resolution based on recognition of error artifacts.\n \n \n \n \n\n\n \n Stojkovic, A.; and Ivanovski, Z.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 246-250, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"MotionPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952028,\n  author = {A. Stojkovic and Z. Ivanovski},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Motion estimation for Super-resolution based on recognition of error artifacts},\n  year = {2014},\n  pages = {246-250},\n  abstract = {The work presents an effective approach for subpixel motion estimation for Super-resolution (SR). The objective is to improve the quality of the estimated SR image by increasing the accuracy of the motion vectors used in the SR procedure. The correction of the motion vectors is based on appearance of error artifacts in the SR image, introduced due to registration errors. First, SR is performed using full pixel accuracy motion vectors obtained using full search block matching algorithm (FS-BMA). Then, machine learning based method is applied on the resulting images in order to detect and classify artifacts introduced due to missing subpixel components of the motion vectors. The outcome of the classification is a subpixel component of the motion vector. In the final step, SR process is repeated using the corrected (subpixel accuracy) motion vectors.},\n  keywords = {image classification;image matching;image registration;image resolution;learning (artificial intelligence);motion estimation;motion estimation;SR image quality improvement;super resolution;error artifact recognition;motion vector correction;full search block matching algorithm;machine learning based method;registration error;artifact classification;subpixel component classification;Vectors;Image resolution;Accuracy;Support vector machine classification;Motion estimation;Image edge detection;Feature extraction;super-resolution;image registration;machine learning;artifacts detection},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926251.pdf},\n}\n\n
\n
\n\n\n
\n The work presents an effective approach to subpixel motion estimation for Super-resolution (SR). The objective is to improve the quality of the estimated SR image by increasing the accuracy of the motion vectors used in the SR procedure. The correction of the motion vectors is based on the appearance of error artifacts in the SR image, introduced by registration errors. First, SR is performed using full-pixel-accuracy motion vectors obtained with a full search block matching algorithm (FS-BMA). Then, a machine learning based method is applied to the resulting images in order to detect and classify artifacts introduced by the missing subpixel components of the motion vectors. The outcome of the classification is a subpixel component of the motion vector. In the final step, the SR process is repeated using the corrected (subpixel-accuracy) motion vectors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Single pass dependent bit allocation for spatial scalability coding of H.264/SVC.\n \n \n \n \n\n\n \n Atta, R.; Abdel-Kader, R. F.; and Abd-AlRahem, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 251-255, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"SinglePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952029,\n  author = {R. Atta and R. F. Abdel-Kader and A. Abd-AlRahem},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Single pass dependent bit allocation for spatial scalability coding of H.264/SVC},\n  year = {2014},\n  pages = {251-255},\n  abstract = {This paper investigates the problem of bit allocation for spatial scalability coding of H.264/SVC. Little prior work deal with the H.264/SVC bit allocation problem considering the correlation between the enhancement and base layers. Nevertheless, most of the bit allocation algorithms suffer from high computational complexity which grows significantly with the number of layers. In this paper, a single-pass spatial layer bit allocation algorithm, based on dependent Rate-Distortion modeling is proposed. In this algorithm, the R-D model parameters are adaptively updated during the coding process. Experimental results demonstrate that the proposed algorithm achieves a significant improvement in the coding gain as compared to the multi-pass model-based algorithm and the Joint Scalable Video Model reference software algorithm.},\n  keywords = {computational complexity;correlation methods;video coding;H.264 spatial scalability coding;SVC spatial scalability coding;correlation method;computational complexity;single pass spatial layer bit allocation algorithm;rate-distortion model;R-D model parameter;scalable video coding;Bit rate;Encoding;Adaptation models;Static VAr compensators;Video coding;Computational modeling;Mathematical model;Dependent R-D models;bit allocation;H.264/scalable video coding (SVC);spatial scalability},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569919819.pdf},\n}\n\n
\n
\n\n\n
\n This paper investigates the problem of bit allocation for spatial scalability coding of H.264/SVC. Little prior work deals with the H.264/SVC bit allocation problem while considering the correlation between the enhancement and base layers, and most bit allocation algorithms suffer from a high computational complexity that grows significantly with the number of layers. In this paper, a single-pass spatial-layer bit allocation algorithm based on dependent Rate-Distortion modeling is proposed, in which the R-D model parameters are adaptively updated during the coding process. Experimental results demonstrate that the proposed algorithm achieves a significant improvement in coding gain compared to the multi-pass model-based algorithm and the Joint Scalable Video Model reference software algorithm.\n
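A minimal sketch of adaptively updating rate-model parameters during coding, assuming a simple R(Q) = a/Q + b model and a normalized-LMS correction; both the model form and the step size are assumptions, not the paper's dependent R-D model:

```python
# Update a rate model R(Q) = a/Q + b from observed (QP, bits) pairs,
# in the spirit of adapting R-D model parameters during encoding.
import numpy as np

theta = np.array([1000.0, 10.0])          # [a, b], assumed initial values
beta = 0.5                                # NLMS step size (assumed)

for q, bits in [(20, 64.0), (26, 49.0), (32, 42.0)]:   # toy observations
    phi = np.array([1.0 / q, 1.0])        # regressor for [a, b]
    err = bits - theta @ phi              # rate prediction error
    theta += beta * err * phi / (phi @ phi)   # normalized LMS correction

print("updated model parameters a, b:", theta)
```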
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Compressive sensing spectrum recovery from quantized measurements in 28nm SOI CMOS.\n \n \n \n\n\n \n Bellasi, D.; Bettini, L.; Burger, T.; Benkeser, C.; Huang, Q.; and Studer, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 256-260, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952030,\n  author = {D. Bellasi and L. Bettini and T. Burger and C. Benkeser and Q. Huang and C. Studer},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Compressive sensing spectrum recovery from quantized measurements in 28nm SOI CMOS},\n  year = {2014},\n  pages = {256-260},\n  abstract = {Spectral activity detection of wideband radio-frequency (RF) signals for cognitive radios typically requires expensive and energy-inefficient analog-to-digital converters (ADCs). Fortunately, the RF spectrum is, in many practical situations, sparsely populated, which enables the design of so-called analog-to-information (A2I) converters. A2I converters are capable of acquiring and extracting the spectral activity information at low cost and low power by means of compressive sensing (CS). In this paper, we present a high-throughput spectrum recovery stage for CS-based wideband A2I converters. The recovery stage is designed for a CS-based signal acquisition front-end that performs pseudo-random subsampling in combination with coarse quantization. High-throughput spectrum activity detection from such coarsely quantized and compressive measurements is achieved by means of a massively-parallel VLSI design of a novel accelerated sparse signal dequantization (ASSD) algorithm. The resulting design is implemented in 28 nm SOI CMOS and able to reconstruct 2^15-point frequency-sparse RF spectra at a rate of more than 7.6 k reconstructions/second.},\n  keywords = {analogue-digital conversion;CMOS integrated circuits;cognitive radio;compressed sensing;VLSI;compressive sensing spectrum recovery;quantized measurements;SOI CMOS;spectral activity detection;wideband radio frequency signals;RF signals;cognitive radios;analog-to-digital converters;ADC;analog-to-information converters;A2I converters;spectral activity information;CS;signal acquisition;parallel VLSI design;accelerated sparse signal dequantization;ASSD algorithm;Approximation methods;Very large scale integration;CMOS integrated circuits;Sensors;Wideband;Radio frequency;Q measurement},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Spectral activity detection of wideband radio-frequency (RF) signals for cognitive radios typically requires expensive and energy-inefficient analog-to-digital converters (ADCs). Fortunately, the RF spectrum is, in many practical situations, sparsely populated, which enables the design of so-called analog-to-information (A2I) converters. A2I converters are capable of acquiring and extracting the spectral activity information at low cost and low power by means of compressive sensing (CS). In this paper, we present a high-throughput spectrum recovery stage for CS-based wideband A2I converters. The recovery stage is designed for a CS-based signal acquisition front-end that performs pseudo-random subsampling in combination with coarse quantization. High-throughput spectrum activity detection from such coarsely quantized and compressive measurements is achieved by means of a massively-parallel VLSI design of a novel accelerated sparse signal dequantization (ASSD) algorithm. The resulting design is implemented in 28 nm SOI CMOS and able to reconstruct 2^15-point frequency-sparse RF spectra at a rate of more than 7.6 k reconstructions/second.\n
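A sketch of the acquisition model the recovery stage is designed for: pseudo-random subsampling combined with coarse quantization (the signal, sizes and bit depth are illustrative, not the chip's parameters):

```python
# Acquisition model behind the recovery stage: keep a pseudo-random subset
# of Nyquist samples and quantize them coarsely; recovery then searches for
# a frequency-sparse spectrum consistent with these few quantized values.
import numpy as np

rng = np.random.default_rng(6)
N, M, bits = 1024, 128, 3                 # signal length, measurements, quantizer bits

t = np.arange(N)
x = np.cos(2*np.pi*50*t/N) + 0.5*np.cos(2*np.pi*300*t/N)   # 2-tone sparse spectrum

idx = np.sort(rng.choice(N, M, replace=False))  # pseudo-random subsampling
step = (x.max() - x.min()) / (2**bits - 1)
y = np.round((x[idx] - x.min()) / step)         # coarse (3-bit) quantization

print(y[:10])   # the only data the dequantization/recovery stage sees
```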
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Unbiased RLS identification of errors-in-variables models in the presence of correlated noise.\n \n \n \n \n\n\n \n Arablouei, R.; Doğançay, K.; and Adali, T.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 261-265, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"UnbiasedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952031,\n  author = {R. Arablouei and K. Doğançay and T. Adali},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Unbiased RLS identification of errors-in-variables models in the presence of correlated noise},\n  year = {2014},\n  pages = {261-265},\n  abstract = {We propose an unbiased recursive-least-squares(RLS)-type algorithm for errors-in-variables system identification when the input noise is colored and correlated with the output noise. To derive the proposed algorithm, which we call unbiased RLS (URLS), we formulate an exponentially-weighted least-squares problem that yields an unbiased estimate. Then, we solve the associated normal equations utilizing the dichotomous coordinate-descent iterations. Simulation results show that the estimation performance of the proposed URLS algorithm is similar to that of a previously proposed bias-compensated RLS (BCRLS) algorithm. However, the URLS algorithm has appreciably lower computational complexity as well as improved numerical stability compared with the BCRLS algorithm.},\n  keywords = {least squares approximations;recursive estimation;unbiaseD RLS identification;correlated noise;recursive-least-squares-type algorithm;errors-in-variable system identification;URLS algorithm;exponentially-weighted least-squares problem;dichotomous coordinate-descent iterations;bias-compensated RLS algorithm;BCRLS algorithm;Noise;Uniform resource locators;Abstracts;Indexes;Vectors;Complexity theory;Field programmable gate arrays;Adaptive estimation;dichotomous coordinate-descent algorithm;errors-in-variables modeling;recursive least-squares;system identification},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569909475.pdf},\n}\n\n
\n
\n\n\n
\n We propose an unbiased recursive-least-squares (RLS)-type algorithm for errors-in-variables system identification when the input noise is colored and correlated with the output noise. To derive the proposed algorithm, which we call unbiased RLS (URLS), we formulate an exponentially-weighted least-squares problem that yields an unbiased estimate. Then, we solve the associated normal equations utilizing the dichotomous coordinate-descent iterations. Simulation results show that the estimation performance of the proposed URLS algorithm is similar to that of a previously proposed bias-compensated RLS (BCRLS) algorithm. However, the URLS algorithm has appreciably lower computational complexity as well as improved numerical stability compared with the BCRLS algorithm.\n
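For context, the standard exponentially-weighted RLS recursion is sketched below with a noisy regressor, which is exactly the errors-in-variables setting where plain RLS becomes biased; this is the baseline being corrected, not the proposed URLS algorithm:

```python
# Standard exponentially-weighted RLS recursion (NOT the URLS algorithm):
# with input noise on the regressor, the estimate is biased, which is the
# problem the proposed URLS addresses. Noise levels are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n, lam = 4, 0.99                               # filter order, forgetting factor
w_true = rng.normal(size=n)

w = np.zeros(n)
P = 1e3 * np.eye(n)                            # inverse correlation matrix
for _ in range(2000):
    u_clean = rng.normal(size=n)
    d = u_clean @ w_true + 0.1 * rng.normal()  # output with observation noise
    u = u_clean + 0.3 * rng.normal(size=n)     # noisy regressor -> bias source
    k = P @ u / (lam + u @ P @ u)              # gain vector
    w += k * (d - u @ w)                       # a-priori error update
    P = (P - np.outer(k, u @ P)) / lam         # inverse-correlation update

print("biased RLS estimate vs truth:", np.round(w, 2), np.round(w_true, 2))
```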
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A 128∶2048/1536 point FFT hardware implementation with output pruning.\n \n \n \n\n\n \n Ayhan, T.; Dehaene, W.; and Verhelst, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 266-270, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952032,\n  author = {T. Ayhan and W. Dehaene and M. Verhelst},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A 128∶2048/1536 point FFT hardware implementation with output pruning},\n  year = {2014},\n  pages = {266-270},\n  abstract = {In this work, an FFT architecture supporting variable FFT sizes, 128~2048/1536, is proposed. This implementation is a combination of a 2^p point Common Factor FFT and a 3 point DFT. Various FFT output pruning techniques for this architecture are discussed in terms of memory and control logic overhead. It is shown that the Prime Factor FFT used within the 1536 point FFT is able to increase throughput by exploiting single tone pruning with low control logic overhead. The proposed FFT processor is implemented on a Xilinx Virtex 5 FPGA. It occupies only 3148 LUTs and 612 kb memory in the FPGA and calculates a 1536 point FFT in less than 3092 clock cycles with output pruned settings.},\n  keywords = {fast Fourier transforms;field programmable gate arrays;FFT hardware implementation;output pruning;FFT architecture;common factor FFT;FFT output pruning;control logic;memory logic;prime factor FFT;Xilinx Virtex 5 FPGA;Fast Fourier Transform;Discrete Fourier transforms;Computer architecture;Hardware;Throughput;Field programmable gate arrays;Indexes;Signal processing;FFT Pruning;FPGA Implementation;LTE;Variable size FFT;Prime Factor FFT},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n In this work, an FFT architecture supporting variable FFT sizes (128 to 2048, and 1536) is proposed. The implementation combines a 2^p-point Common Factor FFT with a 3-point DFT. Various FFT output pruning techniques for this architecture are discussed in terms of memory and control logic overhead. It is shown that using a Prime Factor FFT inside the 1536-point FFT increases throughput by exploiting single-tone pruning with low control logic overhead. The proposed FFT processor is implemented on a Xilinx Virtex 5 FPGA. It occupies only 3148 LUTs and 612 kb of memory on the FPGA and computes a 1536-point FFT in fewer than 3092 clock cycles with output-pruned settings.\n
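For intuition, a 1536-point DFT can be assembled from three 512-point FFTs plus a twiddle/3-point recombination stage, and output pruning then means evaluating the recombination only at the wanted bins. The numpy sketch below uses a Cooley-Tukey (common-factor) split rather than the paper's prime-factor hardware variant; it is illustrative only:

    import numpy as np

    def fft1536(x, bins=None):
        # 1536-point DFT via three 512-point FFTs (decimation in time, N = 3*512).
        # `bins`: optional list of output indices to compute (output pruning).
        assert len(x) == 1536
        Y = [np.fft.fft(x[r::3]) for r in range(3)]   # three length-512 sub-FFTs
        k = np.arange(1536) if bins is None else np.asarray(bins)
        W = np.exp(-2j * np.pi * k / 1536.0)          # twiddle factors W_N^k
        return Y[0][k % 512] + W * Y[1][k % 512] + W**2 * Y[2][k % 512]

    x = np.random.randn(1536)
    assert np.allclose(fft1536(x), np.fft.fft(x))      # full output matches numpy
    print(fft1536(x, bins=[0, 7, 100]))                # pruned: only 3 bins recombined

Pruning saves work only in the recombination stage here; in hardware the savings extend to memory accesses and control, which is the paper's focus.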
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n GPU parallel implementation of the approximate K-SVD algorithm using OpenCL.\n \n \n \n \n\n\n \n Irofti, P.; and Dumitrescu, B.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 271-275, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"GPUPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952053,\n  author = {P. Irofti and B. Dumitrescu},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {GPU parallel implementation of the approximate K-SVD algorithm using OpenCL},\n  year = {2014},\n  pages = {271-275},\n  abstract = {Training dictionaries for sparse representations is a time consuming task, due to the large size of the data involved and to the complexity of the training algorithms. We investigate a parallel version of the approximate K-SVD algorithm, where multiple atoms are updated simultaneously, and implement it using OpenCL, for execution on graphics processing units (GPU). This not only allows reducing the execution time with respect to the standard sequential version, but also gives dictionaries with which the training data are better approximated. We present numerical evidence supporting this somewhat surprising conclusion and discuss in detail several implementation choices and difficulties.},\n  keywords = {approximation theory;graphics processing units;parallel programming;singular value decomposition;GPU parallel implementation;approximate K-SVD algorithm;OpenCL;training dictionaries;sparse representations;graphics processing units;Graphics processing units;Dictionaries;Matching pursuit algorithms;Sparse matrices;Kernel;Approximation algorithms;Parallel processing;sparse representation;dictionary design;parallel algorithm;GPU;OpenCL},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923277.pdf},\n}\n\n
\n
\n\n\n
\n Training dictionaries for sparse representations is a time consuming task, due to the large size of the data involved and to the complexity of the training algorithms. We investigate a parallel version of the approximate K-SVD algorithm, where multiple atoms are updated simultaneously, and implement it using OpenCL, for execution on graphics processing units (GPU). This not only allows reducing the execution time with respect to the standard sequential version, but also gives dictionaries with which the training data are better approximated. We present numerical evidence supporting this somewhat surprising conclusion and discuss in detail several implementation choices and difficulties.\n
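The atom update that the paper parallelizes is, in the approximate K-SVD of Rubinstein et al., a single power-iteration-like step on the residual restricted to the signals using that atom. A minimal numpy sketch (sequential, with our own variable names; the sparse-coding stage and the OpenCL kernels are omitted):

    import numpy as np

    def aksvd_atom_update(Y, D, X, j):
        # Approximate K-SVD: refine atom D[:, j] and its coefficient row X[j, :]
        # using only the signals whose sparse code currently includes atom j.
        I = np.flatnonzero(X[j, :])
        if I.size == 0:
            return
        # residual without atom j's contribution, restricted to those signals
        E = Y[:, I] - D @ X[:, I] + np.outer(D[:, j], X[j, I])
        d = E @ X[j, I]                        # new atom direction
        d /= np.linalg.norm(d)
        D[:, j] = d
        X[j, I] = E.T @ d                      # new coefficients for that row

    # toy demo data
    rng = np.random.default_rng(1)
    Y = rng.standard_normal((16, 100))
    D = rng.standard_normal((16, 8)); D /= np.linalg.norm(D, axis=0)
    X = rng.standard_normal((8, 100))
    X[:, ::2] = 0.0                            # pretend sparse coding zeroed entries
    aksvd_atom_update(Y, D, X, 0)

In the GPU version several such updates run concurrently on different atoms, which is what changes the result relative to the strictly sequential sweep.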
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A method for early-splitting of HEVC inter blocks based on decision trees.\n \n \n \n \n\n\n \n Correa, G.; Assuncao, P.; Agostini, L.; and da Silva Cruz , L. A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 276-280, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952054,\n  author = {G. Correa and P. Assuncao and L. Agostini and L. A. {da Silva Cruz}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A method for early-splitting of HEVC inter blocks based on decision trees},\n  year = {2014},\n  pages = {276-280},\n  abstract = {The High Efficiency Video Coding (HEVC) standard provides a large improvement in terms of compression efficiency in comparison to its predecessors, mainly due to the introduction of new coding tools and more flexible data structures. However, since many more options are tested in a Rate-Distortion (R-D) optimization scheme, such improvement is accompanied by a significant increase in the encoding computational complexity. We propose in this paper a novel method for efficient early-splitting decision of inter-predicted Coding Blocks (CB). The method employs a set of decision trees which are trained using information from unconstrained HEVC encoding runs. The resulting early-splitting decision process has an accuracy of 86% with a negligible computational overhead and an average computational complexity decrease of 42% at the cost of a very small Bjontegaard Delta (BD)-rate increase (0.3%).},\n  keywords = {block codes;computational complexity;data compression;data structures;decision trees;optimisation;rate distortion theory;video coding;high efficiency video coding standard;compression efficiency;data structure;rate-distortion optimization scheme;R-D optimization scheme;encoding computational complexity;interpredicted coding block early-splitting decision method;HEVC interpredicted CB early-splitting decision method;decision trees;Bjontegaard Delta rate;BD rate;Encoding;Decision trees;Training;Computational complexity;Image coding;Correlation;inter mode decision;early-splitting;data mining;decision trees;HEVC},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569910441.pdf},\n}\n\n
\n
\n\n\n
\n The High Efficiency Video Coding (HEVC) standard provides a large improvement in terms of compression efficiency in comparison to its predecessors, mainly due to the introduction of new coding tools and more flexible data structures. However, since many more options are tested in a Rate-Distortion (R-D) optimization scheme, such improvement is accompanied by a significant increase in the encoding computational complexity. We propose in this paper a novel method for efficient early-splitting decision of inter-predicted Coding Blocks (CB). The method employs a set of decision trees which are trained using information from unconstrained HEVC encoding runs. The resulting early-splitting decision process has an accuracy of 86% with a negligible computational overhead and an average computational complexity decrease of 42% at the cost of a very small Bjontegaard Delta (BD)-rate increase (0.3%).\n
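As a rough illustration of the offline training idea (not the authors' feature set, which we do not reproduce here), one can train a shallow decision tree on features logged from unconstrained encoder runs and query it per CB; the features and labels below are synthetic:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical per-CB features logged from unconstrained encoding runs, e.g.
    # [R-D cost of the unsplit mode, CB variance, neighbour split depth, QP]
    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((5000, 4))
    y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)  # 1 = "split early"

    tree = DecisionTreeClassifier(max_depth=5)   # shallow tree => negligible overhead
    tree.fit(X_train, y_train)

    def early_split(features):
        # Skip the full R-D evaluation of the unsplit CB when the tree says "split".
        return bool(tree.predict(np.asarray(features).reshape(1, -1))[0])

    print(early_split([1.2, -0.3, 0.0, 0.5]))

A shallow tree is a handful of comparisons at encode time, which is consistent with the negligible overhead the paper reports.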
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Dynamic motion vector refreshing for enhanced error resilience in HEVC.\n \n \n \n \n\n\n \n Carreira, J.; De Silva, V.; Ekmekcioglu, E.; Kondoz, A.; Assuncao, P.; and Faria, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 281-285, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"DynamicPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952055,\n  author = {J. Carreira and V. {De Silva} and E. Ekmekcioglu and A. Kondoz and P. Assuncao and S. Faria},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Dynamic motion vector refreshing for enhanced error resilience in HEVC},\n  year = {2014},\n  pages = {281-285},\n  abstract = {The high level of compression efficiency achieved by HEVC coding techniques decreases the error resilience performance under error prone conditions. This paper addresses the error resiliency of the HEVC standard, focusing on the new motion estimation tools. It is shown that the temporal dependency of motion information is comparatively higher than that in the H.264/AVC standard, causing an increase in the error propagation. Based on this evidence, this paper proposes a method to make intelligent use of temporal motion vector (MV) candidates during the motion estimation process, in order to decrease the temporal dependency, and improve the error resiliency without penalising the rate-distortion performance. The simulation results show that the proposed method improves the error resilience under tested conditions by increasing the video quality by up to 1.7 dB on average, compared to the reference method that always enables temporal MV candidates.},\n  keywords = {data compression;motion estimation;video coding;dynamic motion vector refreshing;enhanced error resiliency;compression efficiency;HEVC coding technique;error prone condition;HEVC standard;motion estimation tool;temporal dependency;motion information;H.264-AVC standard;error propagation;temporal motion vector candidate;temporal MV candidate;motion estimation process;rate-distortion performance;video quality;reference method;Video coding;Standards;Resilience;Loss measurement;Vectors;Robustness;Encoding;HEVC;video coding;error resilience},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925063.pdf},\n}\n\n
\n
\n\n\n
\n The high level of compression efficiency achieved by HEVC coding techniques decreases the error resilience performance under error prone conditions. This paper addresses the error resiliency of the HEVC standard, focusing on the new motion estimation tools. It is shown that the temporal dependency of motion information is comparatively higher than that in the H.264/AVC standard, causing an increase in the error propagation. Based on this evidence, this paper proposes a method to make intelligent use of temporal motion vector (MV) candidates during the motion estimation process, in order to decrease the temporal dependency, and improve the error resiliency without penalising the rate-distortion performance. The simulation results show that the proposed method improves the error resilience under tested conditions by increasing the video quality by up to 1.7 dB on average, compared to the reference method that always enables temporal MV candidates.\n
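As a loose illustration of the refresh idea (not the authors' actual criterion), an encoder could track how long each region has relied on temporal candidates and periodically force spatial-only candidates to cut the propagation chain:

    def use_temporal_mv(age, budget=8):
        # Allow temporal MV candidates only while the dependency chain is short;
        # forcing spatial-only candidates "refreshes" the region. Illustrative
        # rule, not the paper's decision logic.
        return age < budget

    ages = [0] * 4                     # one counter per CTU (toy bookkeeping)
    for frame in range(12):
        for ctu in range(4):
            if use_temporal_mv(ages[ctu]):
                ages[ctu] += 1         # temporal candidate used: chain grows
            else:
                ages[ctu] = 0          # refreshed: error propagation chain is cut
    print(ages)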
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Joint disparity and motion estimation using optical flow for multiview Distributed Video Coding.\n \n \n \n \n\n\n \n Salmistraro, M.; Rakêt, L. L.; Brites, C.; Ascenso, J.; and Forchhammer, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 286-290, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952056,\n  author = {M. Salmistraro and L. L. Rakêt and C. Brites and J. Ascenso and S. Forchhammer},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Joint disparity and motion estimation using optical flow for multiview Distributed Video Coding},\n  year = {2014},\n  pages = {286-290},\n  abstract = {Distributed Video Coding (DVC) is a video coding paradigm where the source statistics are exploited at the decoder based on the availability of Side Information (SI). In a monoview video codec, the SI is generated by exploiting the temporal redundancy of the video, through motion estimation and compensation techniques. In a multiview scenario, the correlation between views can also be exploited to further enhance the overall Rate-Distortion (RD) performance. Thus, to generate SI in a multiview distributed coding scenario, a joint disparity and motion estimation technique is proposed, based on optical flow. The proposed SI generation algorithm allows for RD improvements up to 10% (Bjøntegaard) in bit-rate savings, when compared with block-based SI generation algorithms leveraging temporal and inter-view redundancies.},\n  keywords = {decoding;image sequences;motion compensation;motion estimation;rate distortion theory;statistical analysis;video codecs;video coding;optical flow;multiview distributed video coding;DVC;source statistics;side information;SI;monoview video codec;video temporal redundancy;motion compensation techniques;rate-distortion performance;RD performance;joint disparity-motion estimation technique;bit-rate savings;block-based SI generation algorithms;inter-view redundancy;decoder;Silicon;Video coding;Decoding;Estimation;Interpolation;Joints;Image motion analysis;Distributed Video Coding;Multiview Video;Disparity Estimation;Motion Estimation;Optical Flow},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925235.pdf},\n}\n\n
\n
\n\n\n
\n Distributed Video Coding (DVC) is a video coding paradigm where the source statistics are exploited at the decoder based on the availability of Side Information (SI). In a monoview video codec, the SI is generated by exploiting the temporal redundancy of the video, through motion estimation and compensation techniques. In a multiview scenario, the correlation between views can also be exploited to further enhance the overall Rate-Distortion (RD) performance. Thus, to generate SI in a multiview distributed coding scenario, a joint disparity and motion estimation technique is proposed, based on optical flow. The proposed SI generation algorithm allows for RD improvements up to 10% (Bjøntegaard) in bit-rate savings, when compared with block-based SI generation algorithms leveraging temporal and inter-view redundancies.\n
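A hedged sketch of flow-based side information generation: warp the two decoded key frames toward the missing instant along a precomputed flow field and average them. Flow estimation itself, and the paper's disparity fusion, are omitted; the flow here is taken to be the displacement field from `nxt` to `prev`:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp(frame, flow, t):
        # Backward-warp `frame` by t * flow (flow: HxWx2 array of (dy, dx)).
        H, W = frame.shape
        yy, xx = np.mgrid[0:H, 0:W].astype(float)
        coords = [yy + t * flow[..., 0], xx + t * flow[..., 1]]
        return map_coordinates(frame, coords, order=1, mode='nearest')

    def side_information(prev, nxt, flow):
        # Interpolate the missing (Wyner-Ziv) frame halfway between key frames.
        return 0.5 * (warp(prev, flow, 0.5) + warp(nxt, flow, -0.5))

    prev = np.random.rand(64, 64)
    nxt = np.roll(prev, 2, axis=1)               # content shifted right by 2 px
    flow = np.zeros((64, 64, 2)); flow[..., 1] = -2.0   # known nxt -> prev shift
    si = side_information(prev, nxt, flow)       # ~ the true middle frame

Better side information lowers the "virtual channel" noise seen by the Wyner-Ziv decoder, which is where the reported bit-rate savings come from.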
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Two-dimensional non-separable block-lifting-based M-channel biorthogonal filter banks.\n \n \n \n \n\n\n \n Suzuki, T.; and Kudo, H.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 291-295, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Two-dimensionalPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952057,\n  author = {T. Suzuki and H. Kudo},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Two-dimensional non-separable block-lifting-based M-channel biorthogonal filter banks},\n  year = {2014},\n  pages = {291-295},\n  abstract = {We propose a two-dimensional (2D) non-separable block-lifting structure (NSBL) that is easily formulated from the one-dimensional (1D) separable block-lifting structure (SBL) and 2D non-separable lifting structure (NSL). The NSBL can be regarded as an extension of the NSL because a two-channel NSBL is completely equivalent to an NSL. We apply the NSBL to M-channel (M = 2^n, n ∈ ℕ) biorthogonal filter banks (BOFBs). The NSBL-based BOFBs (NSBL-BOFBs) outperform SBL-based BOFBs (SBL-BOFBs) at lossy-to-lossless coding, whose image quality is scalable from lossless data to highly compressed lossy data, because their rounding error is reduced by merging many rounding operations, i.e., the number of rounding operations in the NSBL is almost half that of the SBL.},\n  keywords = {channel bank filters;two-dimensional nonseparable block-lifting;M-channel biorthogonal filter banks;two-dimensional nonseparable block-lifting structure;one-dimensional separable block-lifting structure;2D nonseparable lifting structure;NSBL;SBL-based BOFB;lossless data;compressed lossy data;Image coding;Yttrium;Discrete wavelet transforms;Transform coding;Merging;Encoding;Biorthogonal filter bank (BOFB);lossy-to-lossless image coding;non-separable block-lifting structure (NSBL)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569907017.pdf},\n}\n\n
\n
\n\n\n
\n We propose a two-dimensional (2D) non-separable block-lifting structure (NSBL) that is easily formulated from the one-dimensional (1D) separable block-lifting structure (SBL) and 2D non-separable lifting structure (NSL). The NSBL can be regarded as an extension of the NSL because a two-channel NSBL is completely equivalent to an NSL. We apply the NSBL to M-channel (M = 2^n, n ∈ ℕ) biorthogonal filter banks (BOFBs). The NSBL-based BOFBs (NSBL-BOFBs) outperform SBL-based BOFBs (SBL-BOFBs) at lossy-to-lossless coding, whose image quality is scalable from lossless data to highly compressed lossy data, because their rounding error is reduced by merging many rounding operations, i.e., the number of rounding operations in the NSBL is almost half that of the SBL.\n
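The reason rounding can be merged without losing losslessness is that each lifting step is exactly invertible as long as the inverse applies the same rounding. A minimal 1D two-channel rounded-lifting sketch (not the paper's M-channel NSBL) makes the mechanism concrete:

    import numpy as np

    def lift_forward(x):
        # Two-channel integer lifting (Haar-like): predict, then rounded update.
        x = np.asarray(x, dtype=int)
        s, d = x[0::2].copy(), x[1::2].copy()
        d -= s                                      # predict: detail = odd - even
        s += np.floor(d / 2 + 0.5).astype(int)      # rounded update: approx = mean
        return s, d

    def lift_inverse(s, d):
        s = s - np.floor(d / 2 + 0.5).astype(int)   # undo update (same rounding!)
        x = np.empty(2 * len(s), dtype=int)
        x[0::2], x[1::2] = s, s + d                 # undo predict
        return x

    x = np.random.randint(0, 256, 64)
    s, d = lift_forward(x)
    assert np.array_equal(lift_inverse(s, d), x)    # lossless: rounding cancels exactly

Every rounding operator adds noise to the lossy reconstruction, so halving the number of rounding points (as the NSBL does relative to the SBL) directly reduces that noise.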
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Efficient quantization parameter estimation in HEVC based on ρ-domain.\n \n \n \n\n\n \n Biatek, T.; Raulet, M.; Travers, J. -.; and Deforges, O.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 296-300, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952058,\n  author = {T. Biatek and M. Raulet and J. -. Travers and O. Deforges},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient quantization parameter estimation in HEVC based on ρ-domain},\n  year = {2014},\n  pages = {296-300},\n  abstract = {This paper proposes a quantization parameter estimation algorithm for HEVC CTU rate control. Several methods were proposed, mostly based on Lagrangian optimization combined with Laplacian distribution for transformed coefficients. These methods are accurate but increase the encoder complexity. This paper provides an innovative reduced complexity algorithm based on a ρ-domain rate model. Indeed, for each CTU, the algorithm predicts encoding parameters based on co-located CTU. By combining it with Laplacian distribution for transformed coefficients, we obtain the dead-zone boundary for quantization and the related quantization parameter. Experiments in the HEVC HM Reference Software show a good accuracy with only a 3% average bitrate error and no PSNR deterioration for random-access configuration.},\n  keywords = {optimisation;parameter estimation;quantisation (signal);quantization parameter estimation algorithm;HEVC CTU rate control;Lagrangian optimization;Laplacian distribution;encoder complexity;innovative reduced complexity algorithm;HEVC HM Reference Software;random-access configuration;ρ-domain rate model;dead-zone boundary;Encoding;Bit rate;Laplace equations;Quantization (signal);Video coding;Equations;Complexity theory;HEVC;Rate-Control;ρ-Domain},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n This paper proposes a quantization parameter estimation algorithm for HEVC CTU rate control. Several methods were proposed, mostly based on Lagrangian optimization combined with Laplacian distribution for transformed coefficients. These methods are accurate but increase the encoder complexity. This paper provides an innovative reduced complexity algorithm based on a ρ-domain rate model. Indeed, for each CTU, the algorithm predicts encoding parameters based on co-located CTU. By combining it with Laplacian distribution for transformed coefficients, we obtain the dead-zone boundary for quantization and the related quantization parameter. Experiments in the HEVC HM Reference Software show a good accuracy with only a 3% average bitrate error and no PSNR deterioration for random-access configuration.\n
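The ρ-domain logic can be illustrated numerically: with the linear rate model R = θ(1 − ρ) and Laplacian-distributed coefficients, a target rate fixes the required fraction of zeros ρ and hence the dead-zone boundary, which the encoder then maps to a quantization parameter. A sketch with invented parameter values:

    import numpy as np

    def deadzone_from_target(R_target, theta, b):
        # rho-domain model: R = theta * (1 - rho), with rho = P(|X| <= t) for a
        # zero-mean Laplacian of scale b, i.e. rho = 1 - exp(-t/b). Solve for t.
        rho = 1.0 - R_target / theta       # required fraction of zeroed coefficients
        return -b * np.log(1.0 - rho)      # dead-zone boundary t

    theta = 6.0      # model slope (bits per nonzero coefficient), fit per sequence
    b     = 4.0      # Laplacian scale, predicted here from the co-located CTU
    t = deadzone_from_target(R_target=1.5, theta=theta, b=b)
    print(t)         # map t to the nearest QP via the codec's quantizer step table

The attraction of this model is that everything is closed-form, so no extra R-D trials are needed per CTU, which is the complexity reduction the paper targets.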
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Source localization and signal reconstruction in a reverberant field using the FDTD method.\n \n \n \n \n\n\n \n Antonello, N.; Van Waterschoot, T.; Moonen, M.; and Naylor, P. A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 301-305, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"SourcePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952059,\n  author = {N. Antonello and T. {Van Waterschoot} and M. Moonen and P. A. Naylor},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Source localization and signal reconstruction in a reverberant field using the FDTD method},\n  year = {2014},\n  pages = {301-305},\n  abstract = {Numerical methods applied to room acoustics are usually employed to predict the sound pressure at certain positions generated by a known source. In this paper the inverse problem is studied: given a number of microphones placed in a room, the sound pressure is known at these positions and this information may be used to perform a localization and signal reconstruction of the sound source. The source is assumed to be spatially sparse meaning it can be modeled as a point source. The finite difference time domain method is used to model the acoustics of a simple two dimensional square room and its matrix formulation is presented. A two step method is proposed. First a convex optimization problem is solved to localize the source while exploiting its spatial sparsity. Once its position is known the source signal can be reconstructed by solving an overdetermined system of linear equations.},\n  keywords = {finite difference time-domain analysis;inverse problems;matrix algebra;optimisation;signal reconstruction;source localization;signal reconstruction;reverberant field;FDTD method;inverse problem;microphones;finite difference time domain method;two dimensional square room;matrix formulation;two step method;convex optimization problem;spatial sparsity;source signal;linear equations;Finite difference methods;Microphones;Time-domain analysis;Equations;Vectors;Mathematical model;Signal reconstruction;Room acoustics;FDTD;source localization;source reconstruction;sparse approximation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922233.pdf},\n}\n\n
\n
\n\n\n
\n Numerical methods applied to room acoustics are usually employed to predict the sound pressure at certain positions generated by a known source. In this paper the inverse problem is studied: given a number of microphones placed in a room, the sound pressure is known at these positions and this information may be used to perform a localization and signal reconstruction of the sound source. The source is assumed to be spatially sparse meaning it can be modeled as a point source. The finite difference time domain method is used to model the acoustics of a simple two dimensional square room and its matrix formulation is presented. A two step method is proposed. First a convex optimization problem is solved to localize the source while exploiting its spatial sparsity. Once its position is known the source signal can be reconstructed by solving an overdetermined system of linear equations.\n
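The forward model that the paper's matrix formulation encapsulates is the standard FDTD update of the scalar wave equation. A minimal 2D sketch with pressure-release (p = 0) boundaries and illustrative grid choices:

    import numpy as np

    c, dx = 343.0, 0.05                  # speed of sound (m/s), grid step (m)
    dt = dx / (c * np.sqrt(2))           # 2D CFL stability limit
    N, steps = 64, 200
    p_prev = np.zeros((N, N)); p = np.zeros((N, N))
    lam2 = (c * dt / dx) ** 2

    src = (20, 20)                       # point source position (spatially sparse)
    for n in range(steps):
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4 * p)
        p_next = 2 * p - p_prev + lam2 * lap
        p_next[src] += np.sin(2 * np.pi * 500 * n * dt)   # soft source injection
        p_next[0, :] = p_next[-1, :] = p_next[:, 0] = p_next[:, -1] = 0.0
        p_prev, p = p, p_next

    print(p[32, 32])                     # a "microphone" sample for the inverse problem

Stacking these linear updates gives the matrix relation between the (sparse) source signal and the microphone samples, which is what the convex localization step inverts.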
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Real-time localization of multiple audio sources in a wireless acoustic sensor network.\n \n \n \n \n\n\n \n Griffin, A.; Alexandridis, A.; Pavlidi, D.; and Mouchtaris, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 306-310, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Real-timePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952060,\n  author = {A. Griffin and A. Alexandridis and D. Pavlidi and A. Mouchtaris},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Real-time localization of multiple audio sources in a wireless acoustic sensor network},\n  year = {2014},\n  pages = {306-310},\n  abstract = {In this work we propose a grid-based method to estimate the location of multiple sources in a wireless acoustic sensor network, where each sensor node contains a microphone array and only transmits direction-of-arrival (DOA) estimates in each time interval, minimizing the transmissions to the central processing node. We present new work on modeling the DOA estimation error in such a scenario. Through extensive, realistic simulations, we show our method outperforms other state-of-the-art methods, in both accuracy and complexity. We present localization results of real recordings in an outdoor cell of a sensor network.},\n  keywords = {acoustic signal processing;direction-of-arrival estimation;microphone arrays;wireless sensor networks;outdoor cell;central processing node;DOA estimation;direction-of-arrival estimation;microphone array;grid-based method;wireless acoustic sensor network;multiple audio sources;real-time localization;Direction-of-arrival estimation;Signal to noise ratio;Estimation error;Arrays;Microphones;Accuracy;Acoustic sensors;acoustic source localization;location estimation;microphone arrays;wireless acoustic sensor networks},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925575.pdf},\n}\n\n
\n
\n\n\n
\n In this work we propose a grid-based method to estimate the location of multiple sources in a wireless acoustic sensor network, where each sensor node contains a microphone array and only transmits direction-of-arrival (DOA) estimates in each time interval, minimizing the transmissions to the central processing node. We present new work on modeling the DOA estimation error in such a scenario. Through extensive, realistic simulations, we show our method outperforms other state-of-the-art methods, in both accuracy and complexity. We present localization results of real recordings in an outdoor cell of a sensor network.\n
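A hedged sketch of the grid-based fusion: each node contributes only a DOA, and every candidate grid point is scored by the wrapped angular mismatch between the reported DOAs and the bearings from the nodes to that point (the paper's error model is not reproduced):

    import numpy as np

    def localize(node_pos, doas, grid):
        # Pick the grid point whose bearings from the nodes best match the DOAs.
        best, best_err = None, np.inf
        for g in grid:
            bearings = np.arctan2(g[1] - node_pos[:, 1], g[0] - node_pos[:, 0])
            err = np.angle(np.exp(1j * (bearings - doas)))   # wrapped angular error
            sse = np.sum(err ** 2)
            if sse < best_err:
                best, best_err = g, sse
        return best

    nodes = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
    src = np.array([2.5, 1.5])
    doas = np.arctan2(src[1] - nodes[:, 1], src[0] - nodes[:, 0])  # ideal DOAs
    xs = np.linspace(0, 4, 41)
    grid = np.array([[x, y] for x in xs for y in xs])
    print(localize(nodes, doas, grid))   # ~ [2.5, 1.5]

Because only one angle per node crosses the network each interval, the transmission load stays far below that of sending raw audio to the fusion center.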
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Audio source separation using multiple deformed references.\n \n \n \n \n\n\n \n Souviraà-Labastie, N.; Olivero, A.; Vincent, E.; and Bimbot, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 311-315, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AudioPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952061,\n  author = {N. {Souviraà-Labastie} and A. Olivero and E. Vincent and F. Bimbot},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Audio source separation using multiple deformed references},\n  year = {2014},\n  pages = {311-315},\n  abstract = {This paper deals with audio source separation guided by multiple audio references. We present a general framework where additional audio references for one or more sources of a given mixture are available. Each audio reference is another mixture which is supposed to contain at least one source similar to one of the target sources. Deformations between the sources of interest and their references are modeled in a general manner. A nonnegative matrix co-factorization algorithm is used which allows sharing of information between the considered mixtures. We run our algorithm on music plus voice mixtures with music and/or voice references. Applied to movies and TV series data, our algorithm improves the signal-to-distortion ratio (SDR) of the sources with the lowest intensity by 9 to 12 decibels with respect to the original mixture.},\n  keywords = {audio signal processing;matrix decomposition;source separation;audio source separation;multiple audio deformed references;nonnegative matrix co-factorization algorithm;music plus voice mixtures;voice references;signal-to-distortion ratio;SDR;information sharing;Speech;Source separation;Motion pictures;Deformable models;TV;Adaptation models;Matrix decomposition;Guided audio source separation;nonnegative matrix co-factorization},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922269.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with audio source separation guided by multiple audio references. We present a general framework where additional audio references for one or more sources of a given mixture are available. Each audio reference is another mixture which is supposed to contain at least one source similar to one of the target sources. Deformations between the sources of interest and their references are modeled in a general manner. A nonnegative matrix co-factorization algorithm is used which allows sharing of information between the considered mixtures. We run our algorithm on music plus voice mixtures with music and/or voice references. Applied to movies and TV series data, our algorithm improves the signal-to-distortion ratio (SDR) of the sources with the lowest intensity by 9 to 12 decibels with respect to the original mixture.\n
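A toy numpy sketch of non-negative matrix co-factorization with Euclidean multiplicative updates: the mixture and the reference share one set of atoms for the common source, so both spectrograms pull on the shared dictionary. The paper's deformation modeling is omitted and all dimensions are invented:

    import numpy as np

    rng = np.random.default_rng(0)
    F, T, K = 64, 50, 6
    V_mix = rng.random((F, T)) + 1e-9      # mixture spectrogram (target + other)
    V_ref = rng.random((F, T)) + 1e-9      # reference containing a similar source

    Ws = rng.random((F, K))                # shared atoms: the common source
    Wo = rng.random((F, K))                # atoms private to the mixture
    Hm, Ho, Hr = rng.random((K, T)), rng.random((K, T)), rng.random((K, T))

    for _ in range(100):                   # Euclidean multiplicative updates
        Vm_hat = Ws @ Hm + Wo @ Ho
        Vr_hat = Ws @ Hr
        Hm *= (Ws.T @ V_mix) / (Ws.T @ Vm_hat + 1e-9)
        Ho *= (Wo.T @ V_mix) / (Wo.T @ Vm_hat + 1e-9)
        Hr *= (Ws.T @ V_ref) / (Ws.T @ Vr_hat + 1e-9)
        Vm_hat = Ws @ Hm + Wo @ Ho
        Vr_hat = Ws @ Hr
        # the shared dictionary sees gradients from BOTH mixtures (the "co" part)
        Ws *= (V_mix @ Hm.T + V_ref @ Hr.T) / (Vm_hat @ Hm.T + Vr_hat @ Hr.T + 1e-9)
        Wo *= (V_mix @ Ho.T) / (Vm_hat @ Ho.T + 1e-9)

    target = (Ws @ Hm) / (Ws @ Hm + Wo @ Ho + 1e-9) * V_mix   # Wiener-like mask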
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n NMF with spectral and temporal continuity criteria for monaural sound source separation.\n \n \n \n \n\n\n \n Becker, J. M.; Sohn, C.; and Rohlfing, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 316-320, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"NMFPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952062,\n  author = {J. M. Becker and C. Sohn and C. Rohlfing},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {NMF with spectral and temporal continuity criteria for monaural sound source separation},\n  year = {2014},\n  pages = {316-320},\n  abstract = {Nonnegative Matrix Factorization (NMF) is a well-suited and widely used method for monaural sound source separation. It has been shown that an additional cost term supporting temporal continuity can improve the separation quality [1]. We extend this model by adding a cost term that penalizes large variations in the spectral dimension. We propose two different cost terms for this purpose and also propose a new cost term for temporal continuity. We evaluate these cost terms on different mixtures of samples of pitched instruments, drum sounds and other acoustical signals. Our results show that penalizing large spectral variations can improve separation quality. The results also show that our alternative temporal continuity cost term leads to better separation results than the temporal continuity cost term proposed in [1].},\n  keywords = {audio signal processing;matrix decomposition;source separation;monaural sound source separation;NMF;nonnegative matrix factorization;temporal continuity criteria;spectral continuity criteria;cost term;separation quality improvement;spectral dimension;pitched instruments;drum sounds;acoustical signals;audio source separation;Source separation;Spectrogram;Vectors;Standards;Cost function;Mathematical model;Speech;audio source separation;nonnegative matrix factorization},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921699.pdf},\n}\n\n
\n
\n\n\n
\n Nonnegative Matrix Factorization (NMF) is a well-suited and widely used method for monaural sound source separation. It has been shown that an additional cost term supporting temporal continuity can improve the separation quality [1]. We extend this model by adding a cost term that penalizes large variations in the spectral dimension. We propose two different cost terms for this purpose and also propose a new cost term for temporal continuity. We evaluate these cost terms on different mixtures of samples of pitched instruments, drum sounds and other acoustical signals. Our results show that penalizing large spectral variations can improve separation quality. The results also show that our alternative temporal continuity cost term leads to better separation results than the temporal continuity cost term proposed in [1].\n
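One plausible form of such a composite criterion (not necessarily the authors' exact normalization) adds squared-difference penalties along time for the activations and along frequency for the atoms:

    import numpy as np

    def nmf_cost(V, W, H, alpha=0.1, beta=0.1):
        # KL-divergence fit + temporal continuity (rows of H) + spectral
        # continuity (columns of W). Illustrative weighting, not the paper's.
        V_hat = W @ H + 1e-12
        kl = np.sum(V * np.log((V + 1e-12) / V_hat) - V + V_hat)
        temporal = np.sum(np.diff(H, axis=1) ** 2)   # penalize jumps between frames
        spectral = np.sum(np.diff(W, axis=0) ** 2)   # penalize jagged spectra
        return kl + alpha * temporal + beta * spectral

    rng = np.random.default_rng(0)
    V = rng.random((32, 40)); W = rng.random((32, 4)); H = rng.random((4, 40))
    print(nmf_cost(V, W, H))

The extra terms only reshape the cost surface; the factorization is still solved with the usual alternating non-negative updates.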
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A binaural hearing aid speech enhancement method maintaining spatial awareness for the user.\n \n \n \n \n\n\n \n Thiemann, J.; Müller, M.; and Van De Par, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 321-325, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952063,\n  author = {J. Thiemann and M. Müller and S. {Van De Par}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A binaural hearing aid speech enhancement method maintaining spatial awareness for the user},\n  year = {2014},\n  pages = {321-325},\n  abstract = {Multi-channel hearing aids can use directional algorithms to enhance speech signals based on their spatial location. In the case where a hearing aid user is fitted with a binaural hearing aid, it is important that the binaural cues are kept intact, such that the user does not lose spatial awareness, the ability to localize sounds, or the benefits of spatial unmasking. Typically, algorithms focus on rendering the source of interest in the correct spatial location, but degrade all other source positions in the auditory scene. In this paper, we present an algorithm that uses a binary mask such that the target signal is enhanced but the background noise remains unmodified except for an attenuation. We also present two variations of the algorithm, and in initial evaluations find that this type of mask-based processing has promising performance.},\n  keywords = {hearing aids;speech enhancement;background noise;target signal;binary mask;speech signals;directional algorithms;spatial awareness;binaural hearing aid speech enhancement method;Microphones;Speech;Noise;Auditory system;Ear;Hearing aids;Noise measurement;Hearing Aids;Spatial Rendering;Speech Enhancement;Beamforming},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922137.pdf},\n}\n\n
\n
\n\n\n
\n Multi-channel hearing aids can use directional algorithms to enhance speech signals based on their spatial location. In the case where a hearing aid user is fitted with a binaural hearing aid, it is important that the binaural cues are kept intact, such that the user does not lose spatial awareness, the ability to localize sounds, or the benefits of spatial unmasking. Typically, algorithms focus on rendering the source of interest in the correct spatial location, but degrade all other source positions in the auditory scene. In this paper, we present an algorithm that uses a binary mask such that the target signal is enhanced but the background noise remains unmodified except for an attenuation. We also present two variations of the algorithm, and in initial evaluations find that this type of mask-based processing has promising performance.\n
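The core of the method fits in one STFT pass: the complement of the binary mask is attenuated rather than zeroed, so the residual scene, and with it the binaural cues, survives. A single-channel sketch (for a binaural aid, apply the same mask to both ear signals); the mask below is a toy stand-in:

    import numpy as np
    from scipy.signal import stft, istft

    def enhance(x, mask, atten_db=12.0, fs=16000):
        # Apply a (precomputed) time-frequency binary mask: keep target bins,
        # attenuate, rather than remove, everything else.
        f, t, X = stft(x, fs=fs, nperseg=512)
        g = 10 ** (-atten_db / 20.0)
        Y = X * np.where(mask, 1.0, g)
        return istft(Y, fs=fs, nperseg=512)[1]

    fs = 16000
    x = np.random.randn(fs)                      # stand-in for one ear's signal
    f, t, X = stft(x, fs=fs, nperseg=512)
    mask = np.abs(X) > np.median(np.abs(X))      # toy target-dominance mask
    y = enhance(x, mask, atten_db=12.0, fs=fs)
    # binaural use: call enhance() on left and right with the SAME mask

Using an identical mask and a pure gain on both ears leaves interaural time and level differences of the residual scene untouched, which is the spatial-awareness argument.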
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-polarized multi-user Massive MIMO: Precoder design and performance analysis.\n \n \n \n \n\n\n \n Park, J.; and Clerckx, B.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 326-330, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-polarizedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952064,\n  author = {J. Park and B. Clerckx},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-polarized multi-user Massive MIMO: Precoder design and performance analysis},\n  year = {2014},\n  pages = {326-330},\n  abstract = {Space limitation and multi-antenna channel acquisition prevent Massive multiple-input multiple-output (MIMO) from being easily deployed. The use of multi-polarized antennas can be one solution to alleviate the first obstacle. Furthermore, the dual structured precoding, in which a preprocessing based on the spatial correlation and a subsequent linear precoding based on the short-term channel state information at the transmitter (CSIT) are concatenated, can reduce the feedback overhead efficiently. To reduce the feedback overhead further, we propose a dual structured multi-user linear precoding, in which the subgrouping method based on co-polarization is additionally applied to the spatially grouped mobile stations (MSs) in the preprocessing stage. By investigating the behavior of the asymptotic performance as a function of the cross-polar discrimination (XPD) parameter, we also propose a new dual structured precoding, in which the XPD, spatial correlation, and CSIT quality are jointly utilized in the precoding/feedback for the multi-polarized multi-user massive MIMO system.},\n  keywords = {antenna arrays;MIMO communication;multiuser channels;precoding;radio transmitters;XPD parameter;cross-polar discrimination parameter;spatially grouped mobile stations;subgrouping method;dual structured multiuser linear precoding;feedback overhead;CSIT;radio transmitter;short-term channel state information;subsequent linear precoding;spatial correlation;multipolarized antennas;multiantenna channel acquisition;space limitation;performance analysis;precoder design;multipolarized multiuser massive MIMO;Antennas;MIMO;Correlation;Covariance matrices;Signal to noise ratio;Interference;Vectors;Multi-polarized Massive MIMO;Dual structured precoding with long-term/short-term CSIT},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569916521.pdf},\n}\n\n
\n
\n\n\n
\n Space limitation and multi-antenna channel acquisition prevent Massive multiple-input multiple-output (MIMO) from being easily deployed. The use of multi-polarized antennas can be one solution to alleviate the first obstacle. Furthermore, the dual structured precoding, in which a preprocessing based on the spatial correlation and a subsequent linear precoding based on the short-term channel state information at the transmitter (CSIT) are concatenated, can reduce the feedback overhead efficiently. To reduce the feedback overhead further, we propose a dual structured multi-user linear precoding, in which the subgrouping method based on co-polarization is additionally applied to the spatially grouped mobile stations (MSs) in the preprocessing stage. By investigating the behavior of the asymptotic performance as a function of the cross-polar discrimination (XPD) parameter, we also propose a new dual structured precoding, in which the XPD, spatial correlation, and CSIT quality are jointly utilized in the precoding/feedback for the multi-polarized multi-user massive MIMO system.\n
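A hedged numpy sketch of the dual structured idea: an outer beamformer built from the long-term transmit covariance, then inner zero-forcing on the reduced effective channel, so short-term CSIT feedback only needs the reduced dimension. Polarization subgrouping and power normalization are omitted:

    import numpy as np

    rng = np.random.default_rng(0)
    M, K, r = 64, 8, 16                      # BS antennas, users, outer-stage rank

    # long-term stage: outer beamformer from the dominant eigenvectors of the
    # (here randomly generated) transmit covariance of the user group
    A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    Rt = A @ A.conj().T / M                  # Hermitian covariance stand-in
    w, U = np.linalg.eigh(Rt)
    B = U[:, -r:]                            # M x r, fixed over many frames

    # short-term stage: inner ZF on the r-dimensional effective channel,
    # so only K x r (not K x M) CSIT needs to be fed back
    H = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
    H_eff = H @ B                            # K x r effective channel
    P_inner = H_eff.conj().T @ np.linalg.inv(H_eff @ H_eff.conj().T)
    P = B @ P_inner                          # M x K overall precoder
    print(np.round(np.abs(H @ P), 3))        # ~ identity: users are separated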
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reduced-rank widely linear precoding in Massive MIMO systems with I/Q imbalance.\n \n \n \n \n\n\n \n Zhang, W.; De Lamare, R. C.; and Chen, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 331-335, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Reduced-rankPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952065,\n  author = {W. Zhang and R. C. {De Lamare} and M. Chen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Reduced-rank widely linear precoding in Massive MIMO systems with I/Q imbalance},\n  year = {2014},\n  pages = {331-335},\n  abstract = {We present reduced-rank widely linear precoding algorithms for Massive MIMO systems with I/Q imbalance (IQI). With a large number of transmit antennas, imperfections in the I/Q branches at the transmitter have a significant impact on the downlink performance. We develop linear precoding techniques using an equivalent real-valued model to mitigate IQI and multiuser interference. In order to reduce the computational complexity required by the matrix inverse, a widely linear reduced-rank precoding strategy based on the Krylov subspace (KS) is devised. Simulation results show that the proposed methods work well under IQI, and the KS precoding algorithm performs almost as well as the full-rank precoder while requiring much lower complexity.},\n  keywords = {computational complexity;matrix inversion;MIMO communication;multiuser channels;precoding;radiofrequency interference;transmitting antennas;reduced-rank widely linear precoding algorithms;massive MIMO systems;I-Q imbalance;transmit antennas;downlink performance;equivalent real-valued model;IQI interference mitigation;multiuser interference mitigation;computational complexity reduction;matrix inverse;Krylov subspace;KS precoding algorithm;MIMO;Complexity theory;Signal to noise ratio;Vectors;Antennas;Downlink;Bit error rate;widely linear precoding;Krylov subspace;I/Q imbalance;Massive MIMO},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923015.pdf},\n}\n\n
\n
\n\n\n
\n We present reduced-rank widely linear precoding algorithms for Massive MIMO systems with I/Q imbalance (IQI). With a large number of transmit antennas, imperfections in the I/Q branches at the transmitter have a significant impact on the downlink performance. We develop linear precoding techniques using an equivalent real-valued model to mitigate IQI and multiuser interference. In order to reduce the computational complexity required by the matrix inverse, a widely linear reduced-rank precoding strategy based on the Krylov subspace (KS) is devised. Simulation results show that the proposed methods work well under IQI, and the KS precoding algorithm performs almost as well as the full-rank precoder while requiring much lower complexity.\n
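The Krylov-subspace trick can be sketched generically: approximate the regularized inverse that the precoder needs inside span{b, Ab, A²b, ...} instead of inverting the full Gram matrix. Illustrative only, not the paper's exact construction:

    import numpy as np

    def krylov_solve(A, b, m=4):
        # Approximate x = A^{-1} b in the Krylov subspace span{b, Ab, ..., A^(m-1)b}
        # via a Galerkin projection onto an orthonormal basis of that subspace.
        K = np.empty((len(b), m), dtype=complex)
        v = b.copy()
        for i in range(m):
            K[:, i] = v
            v = A @ v
        Q, _ = np.linalg.qr(K)                    # orthonormal basis, n x m
        y = np.linalg.solve(Q.conj().T @ A @ Q, Q.conj().T @ b)   # m x m solve only
        return Q @ y                              # rank-m approximation of A^{-1} b

    rng = np.random.default_rng(0)
    Hm = rng.standard_normal((8, 64)) + 1j * rng.standard_normal((8, 64))
    A = Hm @ Hm.conj().T + 0.1 * np.eye(8)        # Gram matrix as in RZF precoding
    b = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    print(np.linalg.norm(krylov_solve(A, b, m=4) - np.linalg.solve(A, b)))

Only an m x m system is solved explicitly, so complexity grows with the subspace rank m rather than with the full user dimension.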
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Flexible coordinated beamforming with lattice reduction for multi-user Massive MIMO systems.\n \n \n \n \n\n\n \n Zu, K.; Song, B.; Haardt, M.; and De Lamare, R. C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 336-340, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"FlexiblePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952066,\n  author = {K. Zu and B. Song and M. Haardt and R. C. {De Lamare}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Flexible coordinated beamforming with lattice reduction for multi-user Massive MIMO systems},\n  year = {2014},\n  pages = {336-340},\n  abstract = {The application of precoding algorithms in multiuser massive multiple-input multiple-output (MU-Massive-MIMO) systems is restricted by the dimensionality constraint that the number of transmit antennas has to be greater than or equal to the total number of receive antennas. In this paper, a lattice reduction (LR)-aided flexible coordinated beamforming (LR-FlexCoBF) algorithm is proposed to overcome the dimensionality constraint in overloaded MU-Massive-MIMO systems. A random user selection scheme is integrated with the proposed LR-FlexCoBF to extend its application to MU-Massive-MIMO systems with arbitrary overloading levels. Simulation results show that significant improvements in terms of bit error rate (BER) and sum-rate performances can be achieved by the proposed LR-FlexCoBF precoding algorithm.},\n  keywords = {antenna arrays;array signal processing;MIMO communication;precoding;receiving antennas;multiuser massive multiple-input multiple-output system;dimensionality constraint;receive antennas;LR-FlexCoBF algorithm;lattice reduction-aided flexible coordinated beamforming algorithm;overloaded MU-massive-MIMO systems;random user selection scheme;arbitrary overloading levels;bit error rate;BER;sum-rate performances;LR-FlexCoBF precoding algorithm;MIMO;Long Term Evolution;Lattices;Radio access networks},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924669.pdf},\n}\n\n
\n
\n\n\n
\n The application of precoding algorithms in multiuser massive multiple-input multiple-output (MU-Massive-MIMO) systems is restricted by the dimensionality constraint that the number of transmit antennas has to be greater than or equal to the total number of receive antennas. In this paper, a lattice reduction (LR)-aided flexible coordinated beamforming (LR-FlexCoBF) algorithm is proposed to overcome the dimensionality constraint in overloaded MU-Massive-MIMO systems. A random user selection scheme is integrated with the proposed LR-FlexCoBF to extend its application to MU-Massive-MIMO systems with arbitrary overloading levels. Simulation results show that significant improvements in terms of bit error rate (BER) and sum-rate performances can be achieved by the proposed LR-FlexCoBF precoding algorithm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Decentralized multi-cell beamforming via large system analysis in correlated channels.\n \n \n \n \n\n\n \n Asgharimoghaddam, H.; Tölli, A.; and Rajatheva, N.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 341-345, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"DecentralizedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952067,\n  author = {H. Asgharimoghaddam and A. Tölli and N. Rajatheva},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Decentralized multi-cell beamforming via large system analysis in correlated channels},\n  year = {2014},\n  pages = {341-345},\n  abstract = {The optimal decentralization of multi-cell minimum power beamforming requires exchange of terms related to instantaneous inter-cell interference (ICI) values or channel state information (CSI) via a backhaul link. This limits the achievable performance in the limited backhaul capacity scenarios, especially when dealing with a fast fading scenario or a large number of users and antennas. In this work, we utilize the results from random matrix theory for developing two algorithms based on uplink-downlink duality and optimization decomposition relying on limited cooperation between nodes to share knowledge about channel statistics. As a result, approximately optimal power allocations are achieved based on statistics of the channels with greatly reduced backhaul information exchange rate. The simulations show that the performance gap due to the approximations is small even when the problem dimensions are relatively small.},\n  keywords = {array signal processing;correlation methods;matrix algebra;optimisation;optimal decentralization;multicell minimum power beamforming;large system analysis;correlated channels;instantaneous intercell interference values;ICI values;channel state information;CSI;backhaul link;random matrix theory;uplink-downlink duality;optimization decomposition;channel statistics;optimal power allocations;backhaul information exchange rate reduction;performance gap;Approximation methods;Interference;Approximation algorithms;Signal to noise ratio;Array signal processing;Correlation;Antennas},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926495.pdf},\n}\n\n
\n
\n\n\n
\n The optimal decentralization of multi-cell minimum power beamforming requires exchange of terms related to instantaneous inter-cell interference (ICI) values or channel state information (CSI) via a backhaul link. This limits the achievable performance in the limited backhaul capacity scenarios, especially when dealing with a fast fading scenario or a large number of users and antennas. In this work, we utilize the results from random matrix theory for developing two algorithms based on uplink-downlink duality and optimization decomposition relying on limited cooperation between nodes to share knowledge about channel statistics. As a result, approximately optimal power allocations are achieved based on statistics of the channels with greatly reduced backhaul information exchange rate. The simulations show that the performance gap due to the approximations is small even when the problem dimensions are relatively small.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Generalised spatial modulation for large-scale MIMO.\n \n \n \n \n\n\n \n Younis, A.; Mesleh, R.; Di Renzo, M.; and Haas, H.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 346-350, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"GeneralisedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952068,\n  author = {A. Younis and R. Mesleh and M. {Di Renzo} and H. Haas},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Generalised spatial modulation for large-scale MIMO},\n  year = {2014},\n  pages = {346-350},\n  abstract = {In this paper, the performance of generalised spatial modulation (GSM) and spatial modulation (SM) is studied assuming channel estimation errors (CSEs) and correlated Rayleigh and Rician fading channels. A new, simple, accurate and general analytical closed-form upper bound for the average bit error ratio (ABER) performance of both systems is derived. The analytical bound is shown to be applicable to correlated and uncorrelated channels, as well as to small and large scale multiple-input multiple-output (MIMO) systems. The results demonstrate that GSM is more suitable for large-scale MIMO systems than SM. The performance gain of GSM over SM is about 5 dB. The results also show that SM is very robust to CSEs. Specifically, the performance degradation of SM in the presence of CSEs is 0.7 dB and 0.3 dB for Rayleigh and Rician fading channels, respectively. Lastly, the findings in this paper underpin the suitability of both GSM and SM for future large-scale MIMO systems.},\n  keywords = {channel estimation;error statistics;MIMO communication;modulation;Rayleigh channels;Rician channels;uncorrelated channels;ABER;average bit error ratio;closed-form upper bound;Rician fading channel;correlated Rayleigh channel;CSE;channel estimation errors;GSM;generalised spatial modulation;large-scale MIMO systems;multiple-input multiple-output systems;GSM;Channel estimation;Modulation;Fading;Rician channels;Transmitting antennas;Generalised spatial modulation (GSM);spatial modulation (SM);multiple-input multiple-output (MIMO);large scale MIMO;channel estimation errors (CSEs)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925899.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, the performance of generalised spatial modulation (GSM) and spatial modulation (SM) is studied assuming channel estimation errors (CSEs) and correlated Rayleigh and Rician fading channels. A new, simple, accurate and general analytical closed-form upper bound for the average bit error ratio (ABER) performance of both systems is derived. The analytical bound is shown to be applicable to correlated and uncorrelated channels, as well as to small and large scale multiple-input multiple-output (MIMO) systems. The results demonstrate that GSM is more suitable for large-scale MIMO systems than SM. The performance gain of GSM over SM is about 5 dB. The results also show that SM is very robust to CSEs. Specifically, the performance degradation of SM in the presence of CSEs is 0.7 dB and 0.3 dB for Rayleigh and Rician fading channels, respectively. Lastly, the findings in this paper underpin the suitability of both GSM and SM for future large-scale MIMO systems.\n
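For intuition, a toy GSM mapper: some bits select which combination of antennas is active and the remaining bits select the symbol transmitted from all active antennas (illustrative parameters; detection and channel effects are omitted):

    import numpy as np
    from itertools import combinations
    from math import log2, floor

    Nt, Na = 4, 2                          # transmit antennas, active antennas
    combos = list(combinations(range(Nt), Na))
    n_combo_bits = floor(log2(len(combos)))        # 6 combos -> 2 usable bits
    qpsk = np.exp(1j * np.pi / 4) * np.array([1, 1j, -1, -1j])

    def gsm_map(bits):
        # bits -> (antenna combination, QPSK symbol); one GSM channel use.
        c = int("".join(map(str, bits[:n_combo_bits])), 2)
        s = int("".join(map(str, bits[n_combo_bits:n_combo_bits + 2])), 2)
        x = np.zeros(Nt, dtype=complex)
        x[list(combos[c])] = qpsk[s]       # same symbol on all active antennas
        return x

    print(gsm_map([1, 0, 1, 1]))           # 2 antenna-index bits + 2 symbol bits

SM is the special case Na = 1; activating antenna combinations (Na > 1) is what lets GSM carry more index bits per channel use on large arrays.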
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interactive games for preservation and promotion of sporting movements.\n \n \n \n \n\n\n \n O'Connor, N. E.; Tisserand, Y.; Chatzitofis, A.; Destelle, F.; Goenetxea, J.; Unzueta, L.; Zarpalas, D.; Daras, P.; Linaza, M.; Moran, K.; and Magnenat Thalmann, N.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 351-355, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"InteractivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952069,\n  author = {N. E. O'Connor and Y. Tisserand and A. Chatzitofis and F. Destelle and J. Goenetxea and L. Unzueta and D. Zarpalas and P. Daras and M. Linaza and K. Moran and N. {Magnenat Thalmann}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Interactive games for preservation and promotion of sporting movements},\n  year = {2014},\n  pages = {351-355},\n  abstract = {In this paper we describe two interactive applications for capturing the motion signatures associated with key skills of traditional sports and games. We first present the case for sport as an important example of intangible cultural heritage. We then explain that sport requires special consideration in terms of digitization for preservation as the key aspects to be digitized are the characteristic movement signatures of such sports. We explain that, given the nature of traditional sporting agencies, this requires low-cost motion capture technology. Furthermore we argue that in order to ensure ongoing preservation, this should be provided via fun interactive gaming scenarios that promote uptake of the sports, particularly among children. We then present two such games that we have developed and illustrate their performance.},\n  keywords = {computer games;history;image motion analysis;sport;virtual reality;interactive games;sporting movement preservation;sporting movement promotion;motion signature capture;intangible cultural heritage;characteristic movement signatures;low-cost motion capture technology;virtual reality;motion comparison;Games;Three-dimensional displays;Cultural differences;Joints;Engines;Animation;Low cost motion capture;motion comparison;virtual reality;digital preservation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921297.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we describe two interactive applications for capturing the motion signatures associated with key skills of traditional sports and games. We first present the case for sport as an important example of intangible cultural heritage. We then explain that sport requires special consideration in terms of digitization for preservation as the key aspects to be digitized are the characteristic movement signatures of such sports. We explain that, given the nature of traditional sporting agencies, this requires low-cost motion capture technology. Furthermore we argue that in order to ensure ongoing preservation, this should be provided via fun interactive gaming scenarios that promote uptake of the sports, particularly among children. We then present two such games that we have developed and illustrate their performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n 3DLive: A multi-modal sensing platform allowing tele-immersive sports applications.\n \n \n \n \n\n\n \n Poussard, B.; Richir, S.; Vatjus-Anttila, J.; Asteriadis, S.; Zarpalas, D.; and Daras, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 356-360, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"3DLive:Paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952070,\n  author = {B. Poussard and S. Richir and J. Vatjus-Anttila and S. Asteriadis and D. Zarpalas and P. Daras},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {3DLive: A multi-modal sensing platform allowing tele-immersive sports applications},\n  year = {2014},\n  pages = {356-360},\n  abstract = {The 3DLive project is developing a user-driven mixed reality platform, intended for augmented sports. Using the latest sensing techniques, 3DLive will allow remote users to share a three-dimensional sports experience, interacting with each other in a mixed reality space. This paper presents the multi-modal sensing technologies used in the platform. 3DLive aims at delivering a high sense of tele-immersion among remote users, regardless of whether they are indoors or outdoors, in the context of augmented sports. In this paper, functional and technical details of the first prototype of the jogging scenario are presented, while a clear separation between indoor and outdoor users is given, since different technologies need to be employed for each case.},\n  keywords = {augmented reality;sport;3DLive;multimodal sensing platform;teleimmersive sports applications;user-driven mixed reality platform;augmented sports;3D sports experience;Sensors;Cameras;Tracking;Real-time systems;Joints;Prototypes;Global Positioning System;Augmented Sports;Motion capture;Tele-Immersion;Activity assessment;3DLive},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921619.pdf},\n}\n\n
@InProceedings{6952071,
  author = {L. Unzueta and J. Goenetxea and M. Rodriguez and M. T. Linaza},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Viewpoint-dependent 3D human body posing for sports legacy recovery from images and video},
  year = {2014},
  pages = {361-365},
  abstract = {In this paper we present a method for 3D human body pose reconstruction from images and video, in the context of sports legacy recovery. The video and image legacy content can include camera motion, several players, considerable partial occlusions, motion blur and image noise, recorded with non-calibrated cameras, which further increases the difficulty of solving the problem of 3D reconstruction from 2D data. Therefore, we propose a semi-automatic approach in which a set of 2D key-points are manually marked in key-frames and then an automatic process estimates the camera calibration parameters, the positions and poses of the players and their body part dimensions. In-between frames are automatically estimated taking into account constraints related to human kinematics and collisions with the environment. Experimental results show that this approach obtains reconstructions that can help to analyze playing techniques and the evolution of sports through time.},
  keywords = {cameras;image reconstruction;pose estimation;sport;video signal processing;viewpoint-dependent 3D human body pose reconstruction;sports legacy recovery;image reconstruction;video legacy content;image legacy content;camera motion;partial occlusions;motion blur;image noise;noncalibrated cameras;2D data;semiautomatic approach;automatic process estimation;camera calibration parameters;human kinematics;playing technique analysis;Three-dimensional displays;Cameras;Calibration;Kinematics;Floors;Solid modeling;TV;Motion capture;human body posing;multibody mechanism fitting;sports preservation and promotion},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925499.pdf},
}

@InProceedings{6952072,
  author = {A. Zhu and H. Snoussi and A. Cherouat},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Articulated human motion tracking with foreground learning},
  year = {2014},
  pages = {366-370},
  abstract = {Tracking the articulated human body is a challenging computer vision problem because of changes in body poses and their appearance. Pictorial structure (PS) models are widely used in 2D human pose estimation. In this work, we extend the PS models for robust 3D pose estimation, which includes two stages: multi-view human body parts detection by foreground learning and pose states updating by annealed particle filter (APF) and detection. Moreover, the image dataset F-PARSE was built for foreground training and the flexible mixture of parts (FMP) model was used for foreground learning. Experimental results demonstrate the effectiveness of our foreground learning-based method.},
  keywords = {computer vision;image motion analysis;learning (artificial intelligence);object detection;object tracking;particle filtering (numerical methods);pose estimation;articulated human motion tracking;computer vision problem;pictorial structure model;body poses;2D human pose estimation;robust 3D pose estimation;PS models;multiview human body part detection;pose states;annealed particle filter;APF;image dataset F-PARSE;flexible mixture of part model;FMP model;foreground training;foreground learning-based method;Tracking;Three-dimensional displays;Estimation;Biological system modeling;Annealing;Vectors;Solid modeling;Annealed particle filter;human motion tracking;foreground learning},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925365.pdf},
}

@InProceedings{6952093,
  author = {F. Destelle and A. Ahmadi and N. E. O'Connor and K. Moran and A. Chatzitofis and D. Zarpalas and P. Daras},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Low-cost accurate skeleton tracking based on fusion of kinect and wearable inertial sensors},
  year = {2014},
  pages = {371-375},
  abstract = {In this paper, we present a novel multi-sensor fusion method to build a human skeleton. We propose to fuse the joint position information obtained from the popular Kinect sensor with more precise estimation of body segment orientations provided by a small number of wearable inertial sensors. The use of inertial sensors can help to address many of the well known limitations of the Kinect sensor. The precise calculation of joint angles potentially allows the quantification of movement errors in technique training, thus facilitating the use of the low-cost Kinect sensor for accurate biomechanical purposes, e.g. the improved human skeleton could be used in visual feedback-guided motor learning. We compare our system to the gold standard Vicon optical motion capture system, proving that the fused skeleton achieves a very high level of accuracy.},
  keywords = {biomechanics;bone;motion estimation;orthopaedics;sensor fusion;gold standard Vicon optical motion capture system;visual feedback-guided motor learning;biomechanical purposes;technique training;movement errors;body segment orientations;Kinect sensor;joint position information;human skeleton;multisensor fusion;wearable inertial sensors;low-cost accurate skeleton tracking;Joints;Knee;Bones;Sensor fusion;Biomechanics;Kinect;Inertial sensor;Motion capture;Skeleton tracking;Multi-sensor fusion},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925399.pdf},
}

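The fusion step sketched in this abstract can be pictured as a toy forward-kinematics computation: the Kinect supplies a parent joint position, an inertial sensor supplies the bone orientation, and the child joint is re-placed one bone length along the rotated bone axis. The function and the numbers below are invented for illustration only; the actual pipeline also calibrates the sensor-to-body frames.

import numpy as np

def place_child_joint(parent_pos, R_imu, bone_axis_local, bone_length):
    # Rotate the bone axis from the sensor frame into the world frame
    # and step one bone length away from the parent joint.
    d = R_imu @ bone_axis_local
    return parent_pos + bone_length * d / np.linalg.norm(d)

# Toy example: place a knee 0.45 m below a hip reported by the Kinect,
# assuming an identity thigh-IMU orientation.
hip = np.array([0.0, 1.0, 0.2])
knee = place_child_joint(hip, np.eye(3), np.array([0.0, -1.0, 0.0]), 0.45)
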
@InProceedings{6952094,
  author = {S. Koike},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Normalized Recursive Least Moduli algorithm with p-modulus of error and q-norm of filter input},
  year = {2014},
  pages = {376-380},
  abstract = {This paper proposes a new adaptation algorithm named Normalized Recursive Least Moduli (NRLM) algorithm which employs “p-modulus” of error and “q-norm” of filter input. The p-modulus and q-norm are generalizations of the modulus and norm used in complex-domain adaptive filters. The NRLM algorithm with p-modulus and q-norm makes adaptive filters fast convergent and robust against two types of impulse noise: one is found in observation noise and another at filter input. We develop theoretical analysis of the algorithm for calculating filter convergence. Through experiments with simulations and theoretical calculations, the effectiveness of the proposed algorithm is demonstrated. We also find that the filter convergence does not critically depend on the value of p or q, allowing use of p = q = infinity, which makes it easiest to calculate the p-modulus and q-norm. The theoretical convergence is in good agreement with the simulation results, which validates the analysis.},
  keywords = {adaptive filters;recursive estimation;filter convergence;observation noise;complex domain adaptive filters;filter input;q-norm;p-modulus;NRLM algorithm;normalized recursive least moduli algorithm;Adaptive filters;Noise;Filtering algorithms;Convergence;Algorithm design and analysis;Signal processing algorithms;Robustness;Adaptive filter;recursive least estimation;impulse noise;modulus;norm},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569902585.pdf},
}

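The abstract does not spell out its "p-modulus", but the remark that p = q = infinity is the easiest case to compute suggests the natural reading sketched below: an l_p combination of the real and imaginary parts, which reduces to the ordinary modulus at p = 2 and to max(|Re|, |Im|) as p grows. This is our assumption for illustration, not the paper's definition.

import numpy as np

def p_modulus(e, p):
    # Assumed generalization of |e| for complex e (an assumption, see above):
    # p = 2 recovers the usual modulus, p = inf gives max(|Re e|, |Im e|),
    # the cheap-to-compute case noted in the abstract.
    if np.isinf(p):
        return max(abs(e.real), abs(e.imag))
    return (abs(e.real) ** p + abs(e.imag) ** p) ** (1.0 / p)

e = 3.0 - 4.0j
assert np.isclose(p_modulus(e, 2), 5.0)   # ordinary modulus
assert p_modulus(e, np.inf) == 4.0        # max of |3| and |-4|
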
@InProceedings{6952095,
  author = {V. D. Bernardinis and R. Fa and M. Carli and A. K. Nandi},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Dual-layer network representation exploiting information characterization},
  year = {2014},
  pages = {381-385},
  abstract = {In this paper, a logical dual-layer representation approach is proposed to facilitate the analysis of directed and weighted complex networks. Unlike the single logical layer structure, which was widely used for the directed and weighted flow graph, the proposed approach replaces the single layer with a dual-layer structure, which introduces a provider layer and a requester layer. The new structure provides the characterization of the nodes by the information which they provide to and request from the network. Its features are explained and its implementation and visualization are also detailed. We also design two clustering methods with different strategies, which provide the analysis from different points of view. The effectiveness of the proposed approach is demonstrated using a simplified example. By comparing the graph layout with the conventional directed graph, the new dual-layer representation reveals deeper insight into the complex networks and provides more opportunities for versatile clustering analysis.},
  keywords = {directed graphs;pattern clustering;graph layout;clustering methods;requester layer;provider layer;directed flow graph;weighted complex networks;information characterization;logical dual-layer network representation approach;Complex networks;Periodic structures;Blogs;Visualization;Layout;Flow graphs;Clustering methods;Information;dual-layer;characterization;clustering},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569910909.pdf},
}

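One plausible way to realize the provider/requester split described in the abstract is to give every node two logical copies and route each weighted edge from the source's provider copy to the target's requester copy; layer-wise node strengths then separate what a node supplies from what it demands. A minimal networkx sketch of this reading (the construction details are ours, not the paper's code):

import networkx as nx

def dual_layer(flow_edges):
    # Each node v appears twice: (v, 'P') on the provider layer and
    # (v, 'R') on the requester layer. An original edge u -> v with
    # weight w links u's provider copy to v's requester copy.
    G = nx.DiGraph()
    for u, v, w in flow_edges:
        G.add_edge((u, 'P'), (v, 'R'), weight=w)
    return G

G = dual_layer([('a', 'b', 3.0), ('b', 'c', 1.0), ('c', 'a', 2.0)])
provided = G.out_degree(('a', 'P'), weight='weight')   # what 'a' provides (3.0)
requested = G.in_degree(('a', 'R'), weight='weight')   # what 'a' requests (2.0)
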
@InProceedings{6952096,
  author = {S. Han and T. Fingscheidt},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Improving scalar quantization for correlated processes using adaptive codebooks only at the receiver},
  year = {2014},
  pages = {386-390},
  abstract = {Lloyd-Max quantization (LMQ) is a widely used scalar non-uniform quantization approach targeting the minimum mean squared error (MMSE). Once designed, the quantizer codebook is fixed over time and does not take advantage of possible correlations in the input signals. Exploiting correlation in scalar quantization could be achieved by predictive quantization, however, at the price of a higher bit error sensitivity. In order to improve the Lloyd-Max quantizer performance for correlated processes without encoder-sided prediction, a novel scalar decoding approach utilizing the correlation of input signals is proposed in this paper. Based on previously received samples, the current sample can be predicted a priori. Thereafter, a quantization codebook adapted over time will be generated according to the prediction error probability density function. Compared to the standard LMQ, distinct improvement is achieved with our receiver in error-free and error-prone transmission conditions, both with hard-decision and soft-decision decoding.},
  keywords = {adaptive codes;least mean squares methods;quantisation (signal);correlated process;adaptive codebooks;Lloyd-Max quantization;LMQ;scalar nonuniform quantization approach;minimum mean squared error;MMSE;quantizer codebook;predictive quantization;scalar decoding approach;prediction error probability density function;soft-decision decoding;hard-decision decoding;Decoding;Quantization (signal);Receivers;Standards;Indexes;Correlation;High definition video;Lloyd-Max quantization;correlated process;predictive quantization;probability density function;soft-decision decoding},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921321.pdf},
}

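For readers unfamiliar with the baseline, a Lloyd-Max codebook is trained by alternating the two MMSE optimality conditions: decision thresholds midway between neighboring levels, and levels at the centroids of their cells. A minimal numpy sketch of that classical training loop (the paper's receiver-side codebook adaptation is not reproduced here):

import numpy as np

def lloyd_max(samples, levels=8, iters=100, tol=1e-9):
    # Start from uniform quantiles of the training data.
    codebook = np.quantile(samples, (np.arange(levels) + 0.5) / levels)
    for _ in range(iters):
        # Nearest-neighbor condition: thresholds midway between levels.
        thresholds = 0.5 * (codebook[:-1] + codebook[1:])
        cells = np.digitize(samples, thresholds)
        # Centroid condition: move each level to the mean of its cell.
        new = np.array([samples[cells == i].mean() if np.any(cells == i)
                        else codebook[i] for i in range(levels)])
        if np.max(np.abs(new - codebook)) < tol:
            break
        codebook = new
    return codebook

rng = np.random.default_rng(0)
cb = lloyd_max(rng.normal(size=100_000), levels=4)
# Approaches the classic 4-level Gaussian codebook, roughly (-1.51, -0.45, 0.45, 1.51).
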
@InProceedings{6952097,
  author = {J. Karlsson and J. Li and P. Stoica},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Filter design with hard spectral constraints},
  year = {2014},
  pages = {391-395},
  abstract = {Filter design is a fundamental problem in signal processing and important in many applications. In this paper we consider a communication application with spectral constraints, using filter designs that can be solved globally via convex optimization. Tradeoffs are discussed in order to determine which design is the most appropriate, and for these applications, finite impulse response filters appear to be more suitable than infinite impulse response filters since they allow for more flexible objective functions, shorter transients, and faster filter implementations.},
  keywords = {convex programming;filtering theory;FIR filters;signal processing;filter design;spectral constraints;convex optimization;finite impulse response filters;Finite impulse response filters;OFDM;IIR filters;Linear programming;Convex functions;Time-frequency analysis;Frequency response;OFDM;filter design;convex optimization},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921693.pdf},
}

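The kind of design the abstract alludes to, minimum passband ripple for a linear-phase FIR filter under a hard stopband bound, is convex in the cosine coefficients and can be prototyped in a few lines with cvxpy. The band edges and specifications below are arbitrary placeholders, not the paper's:

import numpy as np
import cvxpy as cp

M = 20                                          # 2*M + 1 = 41 taps, Type-I FIR
w = np.linspace(0, np.pi, 400)
C = np.cos(np.outer(w, np.arange(M + 1)))       # amplitude response A(w) = C @ a
pb, sb = w <= 0.25 * np.pi, w >= 0.45 * np.pi   # passband / stopband grids

a = cp.Variable(M + 1)
ripple = cp.max(cp.abs(C[pb] @ a - 1))          # passband deviation from unity
problem = cp.Problem(cp.Minimize(ripple),
                     [cp.abs(C[sb] @ a) <= 10 ** (-60 / 20)])  # hard -60 dB stopband
problem.solve()

# Symmetric impulse response recovered from the cosine coefficients.
h = np.concatenate([a.value[:0:-1] / 2, [a.value[0]], a.value[1:] / 2])
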
@InProceedings{6952098,
  author = {J. Munir and A. Mezghani and I. Slim and J. A. Nossek},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Chromatic dispersion compensation using complex-valued all-pass filter},
  year = {2014},
  pages = {396-400},
  abstract = {We propose a new optimization framework to compensate chromatic dispersion by a complex-valued infinite impulse response (IIR) all-pass filter. The design of the IIR all-pass filter is based on minimizing the mean square error (MSE) in group-delay and phase cost functions. The necessary conditions are derived and incorporated into a multi-step optimization framework to ensure the stability of the resulting IIR filter. It is shown that the IIR filter achieves similar or slightly better performance compared to its finite impulse response (FIR) counterpart. Moreover, IIR filtering requires significantly fewer taps than FIR filtering to compensate the same CD channel.},
  keywords = {all-pass filters;compensation;IIR filters;mean square error methods;minimisation;optical fibre communication;optical fibre dispersion;chromatic dispersion compensation;complex-valued infinite impulse response all-pass filter;IIR filter;mean square error minimization;MSE;group-delay;phase cost functions;multistep optimization framework;finite impulse response filter;FIR filter;CD channel compensation;optical communication;Equalizers;Finite impulse response filters;Delays;Optimization;Bit error rate;Optical noise;Signal to noise ratio},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924013.pdf},
}

@InProceedings{6952099,
  author = {S. Zazo and J. Zazo and M. Sánchez-Fernández},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {A control theoretic approach to solve a constrained uplink power dynamic game},
  year = {2014},
  pages = {401-405},
  abstract = {This paper addresses an uplink power control dynamic game where we assume that each user battery represents the system state that changes with time following a discrete-time version of a differential game. To overcome the complexity of the analysis of a dynamic game approach we focus on the concept of Dynamic Potential Games, showing that the game can be solved as an equivalent Multivariate Optimum Control Problem. The solution of this problem is quite interesting because different users split the activity in time, avoiding higher interferences and providing a long term fairness.},
  keywords = {differential games;interference;constrained uplink power control dynamic game approach;discrete-time version;multivariate optimum control problem;Games;Batteries;Equations;Mathematical model;Uplink;Power control;Economics},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925163.pdf},
}

@InProceedings{6952100,
  author = {R. T. Fontes and M. Eisencraft},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Noise filtering in bandlimited digital chaos-based communication systems},
  year = {2014},
  pages = {406-410},
  abstract = {In recent years, many chaos-based communication schemes have been proposed. However, their performance in non-ideal scenarios must be further investigated. In this work, the performance of a bandlimited binary communication system based on chaotic synchronization is evaluated considering a white Gaussian noise channel. As a way to improve the signal to noise ratio in the receiver, and thus the bit error rate, we propose to filter the out-of-band noise in the receiver. Numerical simulations show the advantages of using such a scheme.},
  keywords = {AWGN channels;chaotic communication;error statistics;filtering theory;synchronisation;noise filtering;out-of-band noise;bit error rate;signal to noise ratio;white Gaussian noise channel;chaotic synchronization;bandlimited binary communication system;chaos-based communication schemes;Chaotic communication;Bit error rate;Tuning;Noise;Communication systems;Receivers;Chaos-based communication;bandlimited channels;additive noise;synchronization},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925483.pdf},
}

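The receiver-side idea is simple to demonstrate: the information-bearing signal is band-limited, so noise outside that band can be filtered away without touching the signal. The sketch below uses low-pass-filtered noise as a stand-in for the band-limited chaotic waveform; the filter order and bands are arbitrary choices, not the paper's setup.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0
b, a = butter(6, 10 / (fs / 2))                 # 10 Hz receiver low-pass
rng = np.random.default_rng(0)
t = np.arange(0, 20, 1 / fs)
s = filtfilt(b, a, rng.normal(size=t.size))     # band-limited 'transmit' signal
r = s + 0.5 * rng.normal(size=t.size)           # AWGN channel

def snr_db(x, ref):
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((x - ref) ** 2))

# Low-pass filtering the received signal strips the out-of-band noise:
# snr_db(filtfilt(b, a, r), s) exceeds snr_db(r, s) by several dB.
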
@InProceedings{6952101,
  author = {L. Nasraoui and L. {Najjar Atallah} and M. Siala},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {A simply-differential low-complexity primary synchronization scheme for 3GPP LTE systems},
  year = {2014},
  pages = {411-415},
  abstract = {In this paper, downlink primary synchronization for LTE systems is investigated, including time synchronization and sector identification. The proposed scheme exploits the Primary Synchronization Signal, which is generated from known Zadoff-Chu sequences. Unlike the conventional schemes, in which time synchronization is performed first and the demodulated OFDM symbols are then cross-correlated with the known Zadoff-Chu sequences for sector identification, the proposed scheme achieves both tasks simultaneously. To this aim, the received signal is differentially auto-correlated and compensated with a frequency offset whose value depends on the used Zadoff-Chu sequence. The same metric allows detecting both the symbol timing and the sector identifier. Simulation results, carried out in additive white Gaussian noise and Rayleigh multipath channels, show the efficiency and reliability of the proposed primary synchronization scheme. We note that, compared to former methods, the proposed one not only leads to performance enhancement but also realizes a considerable complexity reduction.},
  keywords = {3G mobile communication;AWGN;Long Term Evolution;multipath channels;OFDM modulation;Rayleigh channels;synchronisation;3GPP LTE systems;simply-differential low-complexity primary synchronization;downlink primary synchronization;time synchronization;sector identification;primary synchronization signal;Zadoff-Chu sequences;demodulated OFDM symbols;cross-correlation;symbol timing;additive white Gaussian noise;Rayleigh multipath channels;Synchronization;OFDM;Correlation;Multipath channels;Robustness;Benchmark testing;3GPP LTE;OFDM;time synchronization;sector search;Zadoff-Chu sequences},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925485.pdf},
}

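For context, a root-u Zadoff-Chu sequence of odd length N is x_u(n) = exp(-j*pi*u*n*(n+1)/N); its constant amplitude and ideal periodic autocorrelation are what make the LTE PSS (built from length-63 sequences with roots 25, 29 and 34, one per sector identity) convenient for joint timing and sector detection. A quick numerical check of the autocorrelation property:

import numpy as np

def zadoff_chu(u, N):
    # Root-u Zadoff-Chu sequence of odd length N.
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

zc = zadoff_chu(25, 63)          # one of the three LTE PSS root indices
# Ideal periodic autocorrelation: N at lag 0, essentially zero elsewhere.
r = np.fft.ifft(np.fft.fft(zc) * np.conj(np.fft.fft(zc)))
assert np.isclose(abs(r[0]), 63) and np.max(np.abs(r[1:])) < 1e-9
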
@InProceedings{6952102,
  author = {I. Sahnoun and I. Kammoun and M. Siala},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Optimal energy allocation scheme for throughput enhancement in cooperative cognitive network},
  year = {2014},
  pages = {416-420},
  abstract = {In this paper, a cognitive radio scenario is proposed, where secondary users are allowed to communicate concurrently with primary users provided that they do not create harmful interference to the licensed users. Here, we aim to improve the throughput of the unlicensed system. To this end, we propose a selective relay cooperative scheme to assist the secondary transmission. Moreover, adaptive modulation is used in order to compensate the throughput loss due to the relaying. The main contribution of this work is to combine a selection scheme where only one “best” relay is chosen with an energy allocation scheme for source and relay nodes to maximize the achievable throughput under the system constraints. A variety of simulation results reveals that our proposed energy allocation method combined with adaptive modulation offers better performance compared with the classical cooperation scheme where energy resources are equally distributed over all nodes.},
  keywords = {cognitive radio;cooperative communication;relay networks (telecommunication);telecommunication power management;energy resources;classical cooperation scheme;adaptive modulation;selective relay cooperative scheme;unlicensed system;primary users;secondary users;cooperative cognitive network;throughput enhancement;optimal energy allocation scheme;Relays;Throughput;Resource management;Optimization;Interference;Signal to noise ratio;Base stations;Cognitive network;cooperation;adaptive modulation;interference cost constraint;Amplify and forward;energy optimization},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926507.pdf},
}

@InProceedings{6952103,
  author = {C. McGuire and S. Weiss},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Multi-radio network optimisation using Bayesian belief propagation},
  year = {2014},
  pages = {421-425},
  abstract = {In this paper we show how 5 GHz and “TV White Space” wireless networks can be combined to provide fixed access for a rural community. Using multiple technologies allows the advantages of each to be combined to overcome individual limitations when assigning stations between networks. Specifically, we want to maximise throughput under the constraint of satisfying both the desired individual station data rate and the transmit power within regulatory limits. For this optimisation, we employ Pearl's algorithm, a Bayesian belief propagation implementation, which is informed by statistics drawn from network trials on the Isle of Tiree with 100 households. The method confirms results obtained with an earlier deterministic approach.},
  keywords = {Bayes methods;belief networks;optimisation;radio networks;multiradio network optimisation;Bayesian belief propagation implementation;TV white space wireless network;rural community;throughput maximisation;Pearl algorithm;deterministic approach;Isle of Tiree;frequency 5 GHz;Radio access networks;Bandwidth;Throughput;Power demand;Base stations;Propagation losses;Optimization;heterogeneous networks;network optimisation;Bayesian belief propagation;white space communications;rural broadband access},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569927209.pdf},
}

@InProceedings{6952104,
  author = {L. Galleani},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Characterizing changes in the noise statistics of GNSS space clocks with the dynamic Allan variance},
  year = {2014},
  pages = {426-430},
  abstract = {The dynamic Allan variance (DAVAR) is a tool for the characterization of precise clocks. Monitoring anomalies of precise clocks is essential, especially when they are employed onboard the satellites of a global navigation satellite system (GNSS). When an anomaly occurs, the DAVAR changes with time, its shape depending on the type of anomaly that occurred. We obtain the analytic DAVAR for a change of variance in the clock noise, an anomaly with critical effects on the clock performance. This result is helpful when the clock health is monitored by observing the DAVAR.},
  keywords = {clocks;satellite navigation;noise statistics;GNSS space clocks;dynamic Allan variance;DAVAR;global navigation satellite system;Noise;Global Positioning System;Satellites;Time-frequency analysis;Atomic clocks;Dynamic Allan variance;GNSS clocks;clock noise;clock anomaly;change of variance},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924103.pdf},
}

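As a reminder of the tool itself: the Allan variance at averaging time tau = m*tau0 is half the mean squared difference of consecutive m-sample frequency averages, and the dynamic Allan variance simply re-evaluates it on a window sliding along the data, so a change of variance in the clock noise appears as a step in the track. A minimal sketch with arbitrary window and step sizes:

import numpy as np

def allan_var(y, m):
    # Allan variance of fractional-frequency data y at averaging factor m.
    K = len(y) // m
    yb = y[:K * m].reshape(K, m).mean(axis=1)     # m-sample averages
    return 0.5 * np.mean(np.diff(yb) ** 2)

def davar(y, m, window, step):
    # Dynamic Allan variance: Allan variance on a sliding window.
    return np.array([allan_var(y[i:i + window], m)
                     for i in range(0, len(y) - window + 1, step)])

rng = np.random.default_rng(0)
y = rng.normal(scale=1e-12, size=20000)           # white-FM clock noise
y[10000:] *= 3.0                                  # change-of-variance anomaly
track = davar(y, m=10, window=2000, step=500)
# 'track' steps up by roughly a factor of 9 once the window covers the anomaly.
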
@InProceedings{6952105,
  author = {M. Meller},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Automatic optimization of adaptive notch filter's frequency tracking},
  year = {2014},
  pages = {431-435},
  abstract = {Estimation of instantaneous frequency of narrowband complex sinusoids is often performed using lightweight algorithms called adaptive notch filters. However, to reach high performance, these algorithms require careful tuning. The paper proposes a novel self-tuning layer for a recently introduced adaptive notch filtering algorithm. Analysis shows that, under Gaussian random-walk type assumptions, the resulting solution converges in mean to the optimal frequency estimator. A simplified one degree of freedom version of the filter, recommended for practical applications, is also proposed. Finally, a comparison of performance with six other state of the art schemes is performed. It confirms the improved tracking accuracy of the proposed scheme.},
  keywords = {adaptive filters;adaptive signal processing;frequency estimation;Gaussian processes;notch filters;random processes;adaptive notch filter frequency tracking automatic optimization;instantaneous frequency estimation;narrowband complex sinusoids;self-tuning layer;Gaussian random-walk type assumptions;optimal frequency estimator;degree of freedom;Signal processing algorithms;Algorithm design and analysis;Frequency estimation;Tuning;Accuracy;Noise measurement;adaptive notch filtering;adaptive signal processing;frequency tracking},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569905539.pdf},
}

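Not the paper's algorithm, but a useful mental model of the tracking task: steer a running frequency estimate by the phase error between the observed sample-to-sample rotation and the predicted one. The loop gain mu below plays exactly the role of the hand-tuned parameter that a self-tuning layer is meant to remove.

import numpy as np

def track_frequency(x, mu=0.05, w0=0.0):
    # First-order loop: advance the frequency estimate (rad/sample) by the
    # residual rotation between consecutive samples.
    w, est = w0, np.empty(len(x) - 1)
    for n in range(1, len(x)):
        err = np.angle(x[n] * np.conj(x[n - 1]) * np.exp(-1j * w))
        w += mu * err
        est[n - 1] = w
    return est

rng = np.random.default_rng(0)
k = np.arange(4000)
w_true = 0.5 + 3e-4 * k                           # slowly chirping frequency
x = np.exp(1j * np.cumsum(w_true)) + 0.1 * (rng.normal(size=k.size)
                                            + 1j * rng.normal(size=k.size))
w_hat = track_frequency(x)                        # follows w_true with a small lag
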
@InProceedings{6952106,
  author = {S. Zhao and J. Pomárico-Franquiz and Y. S. Shmaliy},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {An approach to nonlinear state estimation using extended FIR filtering},
  year = {2014},
  pages = {436-440},
  abstract = {A new technique called extended finite impulse response (EFIR) filtering is developed for nonlinear state estimation in discrete-time state space. The EFIR filter belongs to a family of unbiased FIR filters which completely ignore the noise statistics. An optimal averaging horizon of Nopt points required by the EFIR filter can be determined via measurements with much smaller efforts and cost than for the noise statistics. These properties of EFIR filtering are distinctive advantages over the extended Kalman filter (EKF). The price paid is an Nopt - 1 times longer operation, which, however, can be reduced to that of the EKF by using parallel computing. Based on extensive simulations of diverse nonlinear models, we show that EFIR filtering is more accurate and more robust than the EKF under unknown noise statistics and model uncertainties.},
  keywords = {FIR filters;Kalman filters;nonlinear estimation;nonlinear filters;state estimation;nonlinear state estimation;extended FIR filtering;extended finite impulse response filtering;discrete time state space;optimal averaging horizon;unbiased FIR filters;parallel computing;unknown noise statistics;diverse nonlinear models;model uncertainties;extended Kalman filter;Noise;Hidden Markov models;Kalman filters;Noise measurement;Estimation error;Vectors;State-space methods},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569909755.pdf},
}

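The unbiased-FIR idea is easiest to see in its batch linear form: least-squares fit the state at the start of an N-point horizon, with no noise covariances anywhere, and propagate it to the current step; the paper's EFIR filter extends this to nonlinear models. A sketch for a constant-velocity model (matrices and horizon chosen arbitrarily):

import numpy as np

def ufir_batch(y, F, H, N):
    # Map the state at the start of the horizon to the N stacked
    # measurements, fit it by least squares, then propagate to the end.
    C = np.vstack([H @ np.linalg.matrix_power(F, i) for i in range(N)])
    x_start, *_ = np.linalg.lstsq(C, y[-N:], rcond=None)
    return np.linalg.matrix_power(F, N - 1) @ x_start

F = np.array([[1.0, 1.0], [0.0, 1.0]])            # constant-velocity model
H = np.array([[1.0, 0.0]])                        # position-only measurements
rng = np.random.default_rng(0)
k = np.arange(200)
y = (2.0 + 0.3 * k) + rng.normal(scale=1.0, size=k.size)
x_hat = ufir_batch(y, F, H, N=50)                 # [position, velocity] at k = 199
# Larger N suppresses more noise; the optimal horizon N_opt balances this
# against model mismatch and is the only parameter the filter needs.
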
@InProceedings{6952107,
  author = {S. Zhao and Y. S. Shmaliy and F. Liu},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Minimum variance unbiased FIR state estimation of discrete time-variant models},
  year = {2014},
  pages = {441-445},
  abstract = {State estimation and tracking often require optimal or unbiased estimators. In this paper, we propose a new minimum variance unbiased (MVU) finite impulse response (FIR) filter which minimizes the estimation error variance in the unbiased FIR (UFIR) filter. The relationship between the filter gains of the MVU FIR, UFIR and optimal FIR (OFIR) filters is found analytically. Simulations using a polynomial state-space model have shown that errors in the MVU FIR filter are intermediate between the UFIR and OFIR filters, and the MVU FIR filter exhibits a better denoising effect than the UFIR estimates. It is also shown that the performance of the MVU FIR filter strongly depends on the averaging interval of N points: for small N, the MVU FIR filter approaches the UFIR filter and, if N is large, it becomes optimal.},
  keywords = {FIR filters;signal denoising;minimum variance unbiased FIR state estimation;discrete time-variant models;MVU finite impulse response filter;polynomial state-space model;MVU FIR filter;signal denoising;Finite impulse response filters;Noise;State-space methods;State estimation;Vectors;Estimation error},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569912073.pdf},
}

@InProceedings{6952108,
  author = {F. R. Holzinger and M. Benedikt},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Online instantaneous frequency estimation utilizing empirical mode decomposition and hermite splines},
  year = {2014},
  pages = {446-450},
  abstract = {Most of the available frequency estimation methods are restricted to signals without a bias or a carrier signal. For offline analysis of a signal an existing carrier may be determined and eliminated. However, in the case of online computations the carrier is not accurately known a priori in general. An error in the carrier signal directly affects the accuracy of the subsequent instantaneous frequency estimation approach. This article focuses on the online instantaneous frequency estimation of non-stationary signals based on the empirical mode decomposition scheme. In particular, Hermite spline interpolation of samples for empirical mode decomposition is addressed. Hermite splines enable the definition of enhanced boundary conditions and lead to an effective online instantaneous frequency estimation approach. Throughout the article algorithmic details are examined by a theoretical example.},
  keywords = {interpolation;signal processing;splines (mathematics);frequency estimation methods;signal application;carrier signal;offline analysis;online computations;empirical mode decomposition scheme;hermite spline interpolation;online instantaneous frequency estimation approach;Splines (mathematics);Frequency estimation;Boundary conditions;Interpolation;Time-frequency analysis;Estimation;Market research;EMD;online;spline interpolation;boundary condition},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569913247.pdf},
}

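Two ingredients discussed in the abstract are easy to reproduce with scipy: a Hermite-spline envelope through the local maxima (here with the knot slopes simply set to zero, one choice among the slope and boundary conventions the paper studies) and an instantaneous-frequency estimate from the analytic signal of a mono-component signal.

import numpy as np
from scipy.signal import hilbert, argrelmax
from scipy.interpolate import CubicHermiteSpline

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * (5 * t + 2 * t ** 2))      # chirp with IF = 5 + 4 t Hz

# Upper envelope through the local maxima: an EMD sifting ingredient.
im = argrelmax(x)[0]
env = CubicHermiteSpline(t[im], x[im], np.zeros(im.size))(t)

# Instantaneous frequency from the analytic signal.
phase = np.unwrap(np.angle(hilbert(x)))
f_inst = np.diff(phase) * fs / (2 * np.pi)        # approx. 5 + 4 t Hz
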
@InProceedings{6952109,
  author = {Z. Chen and R. Molina and A. K. Katsaggelos},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Recovery of correlated sparse signals from under-sampled measurements},
  year = {2014},
  pages = {451-455},
  abstract = {In this paper we consider the problem of recovering temporally smooth or correlated sparse signals from a set of undersampled measurements. We propose two algorithmic solutions that exploit the signal temporal properties to improve the reconstruction accuracy. The effectiveness of the proposed algorithms is corroborated with experimental results.},
  keywords = {compressed sensing;signal reconstruction;correlated sparse signal recovery;under-sampled measurements;signal temporal properties;reconstruction accuracy;Greedy algorithms;Bayes methods;Signal to noise ratio;Noise measurement;Cost function;Image reconstruction;Sparse signal recovery;multiple measurement;greedy algorithm;convex relaxation method},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923577.pdf},
}

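A standard single-snapshot baseline for this problem is iterative soft-thresholding (ISTA) for the LASSO; the paper's two algorithms start from this kind of sparse solver and additionally exploit the correlation across snapshots. A minimal sketch:

import numpy as np

def ista(A, y, lam, iters=500):
    # Solve min_x 0.5 * ||y - A x||^2 + lam * ||x||_1 by proximal gradient.
    L = np.linalg.norm(A, 2) ** 2                  # gradient Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x + A.T @ (y - A @ x) / L              # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                               # m < n: under-sampled
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = ista(A, y, lam=0.01)                       # recovers the support of x_true
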
In this paper we consider the problem of recovering temporally smooth or correlated sparse signals from a set of undersampled measurements. We propose two algorithmic solutions that exploit the signal temporal properties to improve the reconstruction accuracy. The effectiveness of the proposed algorithms is corroborated with experimental results.
Iterative approach to estimate the parameters of a TVAR process corrupted by a MA noise. Ijima, H.; Diversi, R.; and Grivel, E. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 456-460, Sep. 2014.
@InProceedings{6952110,\n  author = {H. Ijima and R. Diversi and E. Grivel},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Iterative approach to estimate the parameters of a TVAR process corrupted by a MA noise},\n  year = {2014},\n  pages = {456-460},\n  abstract = {A great deal of interest has been paid to the time-varying autoregressive (TVAR) parameter tracking, but few papers deal with this issue when noisy observations are available. Recently, this problem was addressed for a TVAR process disturbed by an additive zero-mean white noise, by using deterministic regression methods. In this paper, we focus our attention on the case of an additive colored measurement noise modeled by a moving average process. More particularly, we propose to estimate the TVAR parameters by using a variant of the improved least-squares (ILS) methods, initially introduced by Zheng to estimate the AR parameters from a signal embedded in a white noise. Simulation studies illustrate the advantages and the limits of the approach.},\n  keywords = {autoregressive processes;iterative methods;least squares approximations;moving average processes;signal processing;AR parameters;improved ILS methods;improved least-squares methods;moving average process;additive colored measurement noise;deterministic regression methods;additive zero-mean white noise;time-varying autoregressive parameter tracking;MA noise;TVAR process;iterative approach;Noise;Abstracts;Indexes;Noise measurement;Kalman filters;Time-varying autoregressive model;unbiased parameter estimation;colored noise;moving average process;deterministic regression approach},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923795.pdf},\n}\n\n
A great deal of interest has been paid to time-varying autoregressive (TVAR) parameter tracking, but few papers deal with this issue when only noisy observations are available. Recently, this problem was addressed for a TVAR process disturbed by an additive zero-mean white noise, by using deterministic regression methods. In this paper, we focus our attention on the case of an additive colored measurement noise modeled by a moving average process. More specifically, we propose to estimate the TVAR parameters by using a variant of the improved least-squares (ILS) methods, initially introduced by Zheng to estimate the AR parameters from a signal embedded in white noise. Simulation studies illustrate the advantages and the limits of the approach.
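To make the observation model concrete, the following sketch simulates a TVAR(2) process observed in additive MA(1) colored noise; all coefficient values are illustrative, and the ILS-based estimator itself is not reproduced here.

# Sketch of the observation model considered here: a TVAR(2) process y(n)
# observed in additive MA(1) colored noise (all values illustrative).
import numpy as np

rng = np.random.default_rng(0)
N = 2000
e = rng.standard_normal(N)          # TVAR driving noise
v = rng.standard_normal(N)          # measurement noise source

# Slowly time-varying AR(2) coefficients (stable for these values)
a1 = -1.5 + 0.2 * np.sin(2 * np.pi * np.arange(N) / N)
a2 = 0.7 * np.ones(N)

y = np.zeros(N)
for n in range(2, N):
    y[n] = -a1[n] * y[n - 1] - a2[n] * y[n - 2] + e[n]

b1 = 0.8                            # MA(1) coefficient of the colored noise
w = v + b1 * np.concatenate(([0.0], v[:-1]))
z = y + w                           # noisy observations available in practice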
Sparse blind deconvolution based on scale invariant smoothed ℓ0-norm. Nose-Filho, K.; Jutten, C.; and Romano, J. M. T. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 461-465, Sep. 2014.
@InProceedings{6952111,\n  author = {K. Nose-Filho and C. Jutten and J. M. T. Romano},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse blind deconvolution based on scale invariant smoothed ℓ0-norm},\n  year = {2014},\n  pages = {461-465},\n  abstract = {In this work, we explore the problem of blind deconvolution in the context of sparse signals. We show that the ℓ0-norm works as a contrast function, if the length of the impulse response of the system is smaller than the shortest distance between two spikes of the input signal. Demonstrating this sufficient condition is our basic theoretical result. However, one of the problems of dealing with the ℓ0-norm in optimization problems is the requirement of exhaustive or combinatorial search methods, since it is a non continuous function. In order to propose an alternative for that, Mohimani et al. (2009) proposed a smoothed and continuous version of the ℓ0-norm. Here, we propose a modification of this criterion in order to make it scale-invariant and, finally, we derive a gradient-based algorithm for the modified criterion. Results with synthetic data suggests that the imposed conditions are sufficient but not strictly necessary.},\n  keywords = {blind source separation;combinatorial mathematics;deconvolution;gradient methods;scale invariant smoothed ℓ0-norm;synthetic data;gradient-based algorithm;noncontinuous function;combinatorial search methods;impulse response;sparse signals;sparse blind deconvolution;Deconvolution;Convolution;Correlation;Signal to noise ratio;Entropy;Vectors;AWGN;Blind Deconvolution;Smoothed ℓ0-norm;Sparse Signals},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
In this work, we explore the problem of blind deconvolution in the context of sparse signals. We show that the ℓ0-norm works as a contrast function if the length of the impulse response of the system is smaller than the shortest distance between two spikes of the input signal. Demonstrating this sufficient condition is our basic theoretical result. However, one of the problems of dealing with the ℓ0-norm in optimization problems is the requirement of exhaustive or combinatorial search methods, since it is a non-continuous function. As an alternative, Mohimani et al. (2009) proposed a smoothed and continuous version of the ℓ0-norm. Here, we propose a modification of this criterion in order to make it scale-invariant and, finally, we derive a gradient-based algorithm for the modified criterion. Results with synthetic data suggest that the imposed conditions are sufficient but not strictly necessary.
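The smoothed ℓ0 measure of Mohimani et al. replaces the count of nonzeros by a sum of Gaussians; below is a minimal sketch of that measure and of one plausible scale-invariant modification, tying σ to the RMS amplitude. The normalization choice is an assumption, not necessarily the paper's exact criterion.

# Smoothed l0 measure: F_sigma(x) = n - sum_i exp(-x_i^2 / (2 sigma^2));
# a scale-invariant variant ties sigma to the signal scale.
import numpy as np

def smoothed_l0(x, sigma):
    return len(x) - np.sum(np.exp(-x**2 / (2.0 * sigma**2)))

def smoothed_l0_scale_invariant(x, rho=0.1):
    # Tie sigma to the RMS amplitude so that scaling x leaves the measure
    # unchanged (one plausible choice; the paper's criterion may differ).
    sigma = rho * np.sqrt(np.mean(x**2)) + 1e-12
    return smoothed_l0(x, sigma)

x = np.zeros(100); x[[3, 40, 77]] = [1.0, -2.0, 0.5]
# Both calls return (almost exactly) the same value: the measure is
# invariant to rescaling of x, up to the small guard term on sigma.
print(smoothed_l0_scale_invariant(x), smoothed_l0_scale_invariant(10 * x))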
Performances theoretical model-based optimization for incipient fault detection with KL Divergence. Youssef, A.; Delpha, C.; and Diallo, D. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 466-470, Sep. 2014.
@InProceedings{6952112,\n  author = {A. Youssef and C. Delpha and D. Diallo},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Performances theoretical model-based optimization for incipient fault detection with KL Divergence},\n  year = {2014},\n  pages = {466-470},\n  abstract = {Sensible and reliable incipient fault detection methods are major concerns in industrial processes. The Kullback Leibler Divergence (KLD) has proven to be particularly efficient. However, the performance of the technique is highly dependent on the detection threshold and the Signal to Noise Ratio (SNR). In this paper, we develop an analytical model of the fault detection performances (False Alarm Probability and Miss Detection Probability) based on the KLD including the noisy environment characteristics. Thanks to this model, an optimization procedure is applied to set the optimal fault detection threshold depending on the SNR and the fault severity.},\n  keywords = {fault diagnosis;optimisation;principal component analysis;signal detection;optimization;performance modeling;miss detection probability;false alarm probability;detection threshold;Kullback Leibler Divergence;KL divergence;incipient fault detection;Fault detection;Monitoring;Cost function;Signal to noise ratio;Noise measurement;Fault detection;performance modeling;Optimization;Kullback-Leibler Divergence;Principal Component Analysis},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924767.pdf},\n}\n\n
Sensitive and reliable incipient fault detection methods are a major concern in industrial processes. The Kullback-Leibler Divergence (KLD) has proven to be particularly efficient. However, the performance of the technique is highly dependent on the detection threshold and the Signal to Noise Ratio (SNR). In this paper, we develop an analytical model of the fault detection performances (False Alarm Probability and Miss Detection Probability) based on the KLD, including the noisy environment characteristics. Thanks to this model, an optimization procedure is applied to set the optimal fault detection threshold depending on the SNR and the fault severity.
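For univariate Gaussian data the KLD has a closed form, which makes the threshold test easy to state; the sketch below uses an illustrative threshold rather than the optimized one derived in the paper.

# Closed-form KL divergence between two univariate Gaussians, used as a
# fault indicator (the threshold is illustrative, not the paper's value).
import numpy as np

def kld_gauss(mu0, var0, mu1, var1):
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1)**2) / var1 - 1.0)

rng = np.random.default_rng(1)
ref = rng.normal(0.0, 1.0, 5000)      # healthy reference data
test = rng.normal(0.2, 1.0, 5000)     # data with a small (incipient) mean shift

d = kld_gauss(ref.mean(), ref.var(), test.mean(), test.var())
threshold = 0.01                      # trades false alarms against misses
print("fault detected" if d > threshold else "no fault", d)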
Instantaneous frequency estimation by group delay attractors and instantaneous frequency attractors. Pei, S.; and Huang, S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 471-475, Sep. 2014.
@InProceedings{6952133,\n  author = {S. Pei and S. Huang},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Instantaneous frequency estimation by group delay attractors and instantaneous frequency attractors},\n  year = {2014},\n  pages = {471-475},\n  abstract = {Instantaneous frequency attractors (IFAs), obtained from the phase of a time-frequency representation, have been introduced for instantaneous frequency (IF) estimation. In this paper, another kind of attractors called group delay attractors (GDAs) are proposed to improve the IFA-based method. The GDAs can reveal IFs which cannot be estimated from the IFAs. Simulation results show that the IF estimation method based on both the GDAs and IFAs outperforms the well-known estimation method, i.e. ridge detection. Also, it is shown that the proposed method creates much less spurious IFs than the IFA-based method in noisy environments.},\n  keywords = {frequency estimation;signal representation;time-frequency analysis;instantaneous frequency estimation;group delay attractors;instantaneous frequency attractors;time-frequency representation;IF estimation;GDAs;IF estimation method;ridge detection;noisy environments;Time-frequency analysis;Estimation;Chirp;Frequency estimation;Delays;Noise measurement;Kernel;Instantaneous frequency estimation;local group delay;local instantaneous frequency;time-frequency representation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924975.pdf},\n}\n\n
Instantaneous frequency attractors (IFAs), obtained from the phase of a time-frequency representation, have been introduced for instantaneous frequency (IF) estimation. In this paper, another kind of attractor, called group delay attractors (GDAs), is proposed to improve the IFA-based method. The GDAs can reveal IFs which cannot be estimated from the IFAs. Simulation results show that the IF estimation method based on both the GDAs and IFAs outperforms the well-known estimation method of ridge detection. It is also shown that the proposed method creates far fewer spurious IFs than the IFA-based method in noisy environments.
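A minimal sketch of the quantity underlying IFAs, namely local IF estimates obtained from the phase of a time-frequency representation, is given below using frame-to-frame STFT phase differences; the attractor construction itself (iterating the IF map to its fixed points) is omitted.

# Sketch: local instantaneous-frequency estimates from the phase difference
# of consecutive STFT frames (phase-vocoder style); the attractor step is
# not reproduced here.
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * (100 * t + 50 * t**2))   # linear chirp, 100 -> 200 Hz

hop = 16
f, tt, Z = stft(x, fs=fs, nperseg=128, noverlap=128 - hop)
dphi = np.angle(Z[:, 1:] * np.conj(Z[:, :-1]))           # frame-to-frame phase advance
expected = 2 * np.pi * f[:, None] * hop / fs             # advance of each bin frequency
dev = np.angle(np.exp(1j * (dphi - expected)))           # wrapped deviation
inst_freq = f[:, None] + dev * fs / (2 * np.pi * hop)    # local IF estimate per bin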
A new approach to spectral estimation from irregular sampling. Bonacci, D.; and Lacaze, B. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 476-480, Sep. 2014.
@InProceedings{6952134,\n  author = {D. Bonacci and B. Lacaze},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A new approach to spectral estimation from irregular sampling},\n  year = {2014},\n  pages = {476-480},\n  abstract = {This article addresses the problem of signal reconstruction, spectral estimation and linear filtering directly from irregularly-spaced samples of a continuous signal (or autocorrelation function in the case of random signals) when signal spectrum is assumed to be bounded. The number 2L of samples is assumed to be large enough so that the variation of the spectrum on intervals of width π/L is small. Reconstruction formulas are based on PNS (Periodic Nonuniform Sampling) schemes. They allow for reconstruction schemes not requiring regular resampling and suppress two stages in classical computations. The presented method can also be easily generalized to spectra in symmetric frequency bands (bandpass signals).},\n  keywords = {correlation methods;estimation theory;filtering theory;signal reconstruction;signal sampling;spectral estimation;irregular sampling;signal reconstruction problem;linear filtering;autocorrelation function;signal spectrum;PNS schemes;periodic nonuniform sampling schemes;symmetric frequency bands;bandpass signals;Fourier transforms;Mirrors;Accuracy;Estimation;Interpolation;Nonuniform sampling;Signal processing;Periodic Nonuniform Sampling;Sampling theory;Signal reconstruction;Nonuniform filtering;Analytic signal},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924989.pdf},\n}\n\n
This article addresses the problem of signal reconstruction, spectral estimation and linear filtering directly from irregularly-spaced samples of a continuous signal (or autocorrelation function in the case of random signals) when the signal spectrum is assumed to be bounded. The number 2L of samples is assumed to be large enough so that the variation of the spectrum on intervals of width π/L is small. Reconstruction formulas are based on PNS (Periodic Nonuniform Sampling) schemes. They allow for reconstruction schemes that do not require regular resampling and eliminate two stages of the classical computations. The presented method can also be easily generalized to spectra in symmetric frequency bands (bandpass signals).
Uncovering harmonic content via skewness maximization - a Fourier analysis. Ovacıklı, A. K.; Pääjärvi, P.; LeBlanc, J. P.; and Carlson, J. E. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 481-485, Sep. 2014.
@InProceedings{6952135,\n  author = {A. K. Ovacıklı and P. Pääjärvi and J. P. LeBlanc and J. E. Carlson},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Uncovering harmonic content via skewness maximization - a Fourier analysis},\n  year = {2014},\n  pages = {481-485},\n  abstract = {Blind adaptation with appropriate objective function results in enhancement of signal of interest. Skewness is chosen as a measure of impulsiveness for blind adaptation to enhance impacting sources arising from defective rolling bearings. Such impacting sources can be modelled with harmonically related sinusoids which leads to discovery of harmonic content with unknown fundamental frequency by skewness maximization. Interfering components that do not possess harmonic relation are simultaneously suppressed with proposed method. An experimental example on rolling bearing fault detection is given to illustrate the ability of skewness maximization in uncovering harmonic content.},\n  keywords = {adaptive filters;Fourier analysis;harmonic analysis;signal processing;adaptive filters;defective rolling bearings;fundamental frequency;blind adaptation;Fourier analysis;skewness maximization;harmonic content;Harmonic analysis;Rolling bearings;Power system harmonics;Vibrations;Noise;Deconvolution;Adaptive filters;harmonic analysis;higher order statistics;rolling element bearings},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925133.pdf},\n}\n\n
Blind adaptation with an appropriate objective function results in enhancement of the signal of interest. Skewness is chosen as a measure of impulsiveness for blind adaptation to enhance impacting sources arising from defective rolling bearings. Such impacting sources can be modelled with harmonically related sinusoids, which leads to the discovery of harmonic content with unknown fundamental frequency by skewness maximization. Interfering components that do not possess this harmonic relation are simultaneously suppressed with the proposed method. An experimental example on rolling bearing fault detection is given to illustrate the ability of skewness maximization to uncover harmonic content.
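The following batch gradient-ascent sketch adapts an FIR filter to maximize the skewness of its output, the mechanism described above; filter length, step size, and the toy impact signal are illustrative choices, not the paper's setup.

# Batch gradient-ascent sketch: adapt an FIR filter w to maximize the
# skewness S = E[y^3] / E[y^2]^{3/2} of its output y = X w.
import numpy as np

def skewness_ascent(x, L=32, iters=200, mu=0.1):
    # Rows of X are the past-L regressor vectors (a convolution matrix).
    X = np.array([x[n - L:n][::-1] for n in range(L, len(x))])
    w = np.zeros(L); w[0] = 1.0                  # start from a pass-through filter
    for _ in range(iters):
        y = X @ w
        m2, m3 = np.mean(y**2), np.mean(y**3)
        dm2 = 2.0 * X.T @ y / len(y)             # gradient of E[y^2]
        dm3 = 3.0 * X.T @ (y**2) / len(y)        # gradient of E[y^3]
        grad = dm3 / m2**1.5 - 1.5 * m3 * dm2 / m2**2.5
        w += mu * grad
        w /= np.linalg.norm(w)                   # skewness is scale-invariant
    return w

rng = np.random.default_rng(2)
impacts = (rng.random(4000) < 0.01) * 1.0        # sparse, positive impacts
x = np.convolve(impacts, [1.0, -0.6, 0.3], mode="same") + 0.05 * rng.standard_normal(4000)
w = skewness_ascent(x)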
Evaluation of non-linear combinations of rescaled reassigned spectrograms. Hansson-Sandsten, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 486-490, Sep. 2014.
@InProceedings{6952136,\n  author = {M. Hansson-Sandsten},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Evaluation of non-linear combinations of rescaled reassigned spectrograms},\n  year = {2014},\n  pages = {486-490},\n  abstract = {The reassignment technique is used to increase the localization for signals that have closely located time-frequency components. For Gaussian components the reassignment based on an optimal (matched) window spectrogram will result in a single point where all mass is located. For non-optimal windows, the reassignment procedure can be optimally rescaled to fulfill the single point mass location. Non-linear combinations of spectrograms for different window lengths have previously been suggested, [1], and in this paper an evaluation is made of the performance for different non-linear combinations of optimally rescaled reassigned spectrograms.},\n  keywords = {signal processing;optimally rescaled reassigned spectrograms;reassignment procedure;optimal window spectrogram;Gaussian components;signals localization;reassignment technique;rescaled reassigned spectrograms;nonlinear combinations evaluation;Spectrogram;Entropy;Time-frequency analysis;Standards;Signal to noise ratio;time-frequency;reassignment;localization;Hermite function},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925281.pdf},\n}\n\n
The reassignment technique is used to increase the localization for signals that have closely located time-frequency components. For Gaussian components, the reassignment based on an optimal (matched) window spectrogram will result in a single point where all mass is located. For non-optimal windows, the reassignment procedure can be optimally rescaled to fulfill the single point mass location. Non-linear combinations of spectrograms for different window lengths have previously been suggested [1], and in this paper an evaluation is made of the performance of different non-linear combinations of optimally rescaled reassigned spectrograms.
High resolution sparse estimation of exponentially decaying two-dimensional signals. Adalbjörnsson, S. I.; Swärd, J.; and Jakobsson, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 491-495, Sep. 2014.
@InProceedings{6952137,\n  author = {S. I. Adalbjörnsson and J. Swärd and A. Jakobsson},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {High resolution sparse estimation of exponentially decaying two-dimensional signals},\n  year = {2014},\n  pages = {491-495},\n  abstract = {In this work, we consider the problem of high-resolution estimation of the parameters detailing a two-dimensional (2-D) signal consisting of an unknown number of exponentially decaying sinusoidal components. Interpreting the estimation problem as a block (or group) sparse representation problem allows the decoupling of the 2-D data structure into a sum of outer-products of 1-D damped sinusoidal signals with unknown damping and frequency. The resulting non-zero blocks will represent each of the 1-D damped sinusoids, which may then be used as non-parametric estimates of the corresponding 1-D signals; this implies that the sought 2-D modes may be estimated using a sequence of 1-D optimization problems. The resulting sparse representation problem is solved using an iterative ADMM-based algorithm, after which the damping and frequency parameter can be estimated by a sequence of simple 1-D optimization problems.},\n  keywords = {optimisation;parameter estimation;signal representation;signal resolution;high resolution sparse estimation;exponentially decaying two-dimensional signals;2D signal;block sparse representation problem;outer-products;1D damped sinusoidal signals;unknown damping;nonzero blocks;nonparametric estimates;1D optimization problems;iterative ADMM-based algorithm;frequency parameter;2D data structure decoupling;parameter estimation;Damping;Estimation;Frequency estimation;Signal to noise ratio;Nuclear magnetic resonance;Minimization;Dictionaries;Sparse signal modeling;Spectral analysis;Sparse reconstruction;Parameter estimation;ADMM},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925345.pdf},\n}\n\n
In this work, we consider the problem of high-resolution estimation of the parameters detailing a two-dimensional (2-D) signal consisting of an unknown number of exponentially decaying sinusoidal components. Interpreting the estimation problem as a block (or group) sparse representation problem allows the decoupling of the 2-D data structure into a sum of outer-products of 1-D damped sinusoidal signals with unknown damping and frequency. The resulting non-zero blocks will represent each of the 1-D damped sinusoids, which may then be used as non-parametric estimates of the corresponding 1-D signals; this implies that the sought 2-D modes may be estimated using a sequence of 1-D optimization problems. The resulting sparse representation problem is solved using an iterative ADMM-based algorithm, after which the damping and frequency parameter can be estimated by a sequence of simple 1-D optimization problems.
Smooth 2-D frequency estimation using covariance fitting. Swärd, J.; Brynolfsson, J.; Jakobsson, A.; and Hansson-Sandsten, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 496-500, Sep. 2014.
@InProceedings{6952138,\n  author = {J. Swärd and J. Brynolfsson and A. Jakobsson and M. Hansson-Sandsten},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Smooth 2-D frequency estimation using covariance fitting},\n  year = {2014},\n  pages = {496-500},\n  abstract = {In this paper, we introduce a non-parametric 2-D spectral estimator for smooth spectra, allowing for irregularly sampled measurements. The estimate is formed by assuming that the spectrum is smooth and will vary slowly over the frequency grids, such that the spectral density inside any given rectangle in the spectral grid may be approximated well as a plane. Using this framework, the 2-D spectrum is estimated by finding the solution to a convex covariance fitting problem, which has an analytic solution. Numerical simulations indicate the achievable performance gain as compared to the Blackman-Tukey estimator.},\n  keywords = {frequency estimation;signal sampling;smooth 2D frequency estimation;convex covariance fitting problem;numerical simulations;irregular sampling;Transforms;Frequency estimation;Estimation;Data models;Cats;Measurement uncertainty;2-D frequency estimation;smooth spectrum;irregular sampling},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925615.pdf},\n}\n\n
In this paper, we introduce a non-parametric 2-D spectral estimator for smooth spectra, allowing for irregularly sampled measurements. The estimate is formed by assuming that the spectrum is smooth and will vary slowly over the frequency grids, such that the spectral density inside any given rectangle in the spectral grid may be approximated well as a plane. Using this framework, the 2-D spectrum is estimated by finding the solution to a convex covariance fitting problem, which has an analytic solution. Numerical simulations indicate the achievable performance gain as compared to the Blackman-Tukey estimator.
Wiener filtering in the windowed DFT domain. Bensaid, S.; and Slock, D. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 501-505, Sep. 2014.
@InProceedings{6952139,\n  author = {S. Bensaid and D. Slock},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Wiener filtering in the windowed DFT domain},\n  year = {2014},\n  pages = {501-505},\n  abstract = {We focus on the use of windows in the frequency domain processing of data for the purpose of Wiener filtering. Classical frequency domain asymptotics replace linear convolution by circulant convolution, leading to approximation errors. The introduction of windows can lead to slightly more complex frequency domain techniques, replacing diagonal matrices by banded matrices, but with controlled approximation error. Other work observed this recently, proposing general banded matrices in the frequency domain for filtering. Here, we emphasize the design of a window to optimize the banded approximation, and more importantly, we show that the whole banded matrix is in fact still parametrized by a diagonal matrix, which facilitates estimation. We propose here both some non-parametric and parametric approaches for estimating the diagonal spectral parts and revisit in particular the effect of the window on frequency domain Recursive Least-Squares (RLS) adaptive filtering.},\n  keywords = {adaptive filters;convolution;discrete Fourier transforms;least squares approximations;recursive estimation;Wiener filters;frequency domain recursive least squares adaptive filtering;banded matrix;controlled approximation error;approximation errors;circulant convolution;linear convolution;classical frequency domain asymptotics;frequency domain processing;windowed DFT domain;Wiener filtering;Frequency-domain analysis;Discrete Fourier transforms;Vectors;Approximation methods;Speech;Complexity theory;frequency domain filtering;DFT;FFT;window;periodogram;recursive least-squares;adaptive filtering},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569927395.pdf},\n}\n\n
We focus on the use of windows in the frequency domain processing of data for the purpose of Wiener filtering. Classical frequency domain asymptotics replace linear convolution by circulant convolution, leading to approximation errors. The introduction of windows can lead to slightly more complex frequency domain techniques, replacing diagonal matrices by banded matrices, but with controlled approximation error. Other work observed this recently, proposing general banded matrices in the frequency domain for filtering. Here, we emphasize the design of a window to optimize the banded approximation, and more importantly, we show that the whole banded matrix is in fact still parametrized by a diagonal matrix, which facilitates estimation. We propose here both some non-parametric and parametric approaches for estimating the diagonal spectral parts and revisit in particular the effect of the window on frequency domain Recursive Least-Squares (RLS) adaptive filtering.
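A minimal sketch of the diagonal (per-bin) baseline that the banded refinement improves on: a windowed-DFT Wiener gain applied frame by frame with overlap-add. The noise PSD is estimated from a separate noise draw so that both PSDs share the same scaling; signal, SNR, and window are illustrative.

# Diagonal per-bin Wiener gain in a windowed DFT (STFT/overlap-add) frame;
# the banded off-diagonal correction analyzed in the paper is not shown.
import numpy as np
from scipy.signal import stft, istft

fs = 8000
rng = np.random.default_rng(3)
s = np.sin(2 * np.pi * 440 * np.arange(2 * fs) / fs)      # desired signal
x = s + 0.5 * rng.standard_normal(len(s))                 # noisy observation

f, t, X = stft(x, fs=fs, window="hann", nperseg=256)
_, _, Nn = stft(0.5 * rng.standard_normal(len(s)), fs=fs, window="hann", nperseg=256)
Snn = np.mean(np.abs(Nn)**2, axis=1)                      # noise PSD (same scaling)
Sxx = np.maximum(np.mean(np.abs(X)**2, axis=1), 1e-12)    # observed PSD
gain = np.maximum(1.0 - Snn / Sxx, 0.0)                   # per-bin Wiener gain
_, s_hat = istft(gain[:, None] * X, fs=fs, window="hann", nperseg=256)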
Recognition of acoustic events using deep neural networks. Gencoglu, O.; Virtanen, T.; and Huttunen, H. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 506-510, Sep. 2014.
@InProceedings{6952140,\n  author = {O. Gencoglu and T. Virtanen and H. Huttunen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Recognition of acoustic events using deep neural networks},\n  year = {2014},\n  pages = {506-510},\n  abstract = {This paper proposes the use of a deep neural network for the recognition of isolated acoustic events such as footsteps, baby crying, motorcycle, rain etc. For an acoustic event classification task containing 61 distinct classes, classification accuracy of the neural network classifier (60.3%) excels that of the conventional Gaussian mixture model based hidden Markov model classifier (54.8%). In addition, an unsupervised layer-wise pretraining followed by standard backpropagation training of a deep network (known as a deep belief network) results in further increase of 2-4% in classification accuracy. Effects of implementation parameters such as types of features and number of adjacent frames as additional features are found to be significant on classification accuracy.},\n  keywords = {acoustic signal processing;backpropagation;Gaussian processes;hidden Markov models;mixture models;neural nets;signal classification;unsupervised learning;acoustic event recognition;deep neural networks;acoustic event classification task;neural network classifier;Gaussian mixture model;hidden Markov model classifier;unsupervised layer-wise pretraining;standard backpropagation training;deep belief network;adjacent frames;Acoustics;Accuracy;Training;Artificial neural networks;Hidden Markov models;Feature extraction;acoustic event classification;artificial neural networks;deep belief networks;deep neural networks;pattern classification},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924623.pdf},\n}\n\n
This paper proposes the use of a deep neural network for the recognition of isolated acoustic events such as footsteps, a baby crying, a motorcycle, and rain. For an acoustic event classification task containing 61 distinct classes, the classification accuracy of the neural network classifier (60.3%) exceeds that of the conventional Gaussian mixture model based hidden Markov model classifier (54.8%). In addition, an unsupervised layer-wise pretraining followed by standard backpropagation training of a deep network (known as a deep belief network) results in a further increase of 2-4% in classification accuracy. The effects of implementation parameters, such as the types of features and the number of adjacent frames used as additional features, are found to be significant for classification accuracy.
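Not the authors' pretrained deep belief network, but a minimal sketch of the classification setup: an MLP over feature frames with adjacent-frame context stacking. The features below are random stand-ins for real MFCCs, and all sizes are illustrative.

# Minimal sketch (not the paper's DBN): an MLP classifier over MFCC-like
# frames with +/- `context` adjacent frames stacked as extra features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n_frames, n_mfcc, n_classes, context = 5000, 13, 61, 2

feats = rng.standard_normal((n_frames, n_mfcc))           # stand-in MFCC frames
labels = rng.integers(0, n_classes, n_frames)

# Stack adjacent frames around each frame, as the paper varies.
pad = np.pad(feats, ((context, context), (0, 0)), mode="edge")
stacked = np.hstack([pad[i:i + n_frames] for i in range(2 * context + 1)])

clf = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=50)
clf.fit(stacked, labels)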
ECG analysis using consensus clustering. Lourenço, A.; Carreiras, C.; Bulò, S. R.; and Fred, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 511-515, Sep. 2014.
@InProceedings{6952141,\n  author = {A. Lourenço and C. Carreiras and S. R. Bulò and A. Fred},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {ECG analysis using consensus clustering},\n  year = {2014},\n  pages = {511-515},\n  abstract = {Biosignals analysis has become widespread, upstaging their typical use in clinical settings. Electrocardiography (ECG) plays a central role in patient monitoring as a diagnosis tool in today's medicine and as an emerging biometric trait. In this paper we adopt a consensus clustering approach for the unsupervised analysis of an ECG-based biometric records. This type of analysis highlights natural groups within the population under investigation, which can be correlated with ground truth information in order to gain more insights about the data. Preliminary results are promising, for meaningful clusters are extracted from the population under analysis.},\n  keywords = {electrocardiography;medical signal processing;patient monitoring;pattern clustering;ECG analysis;biosignal analysis;electrocardiography;patient monitoring;diagnosis tool;biometric trait;consensus clustering approach;ECG-based biometric records;extracted clusters;Electrocardiography;Clustering algorithms;Feature extraction;Heart beat;Heart rate variability;Partitioning algorithms;ECG analysis;ECG-based biometrics;consensus clustering;evidence accumulation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926755.pdf},\n}\n\n
Biosignal analysis has become widespread, extending beyond its typical use in clinical settings. Electrocardiography (ECG) plays a central role in patient monitoring, as a diagnosis tool in today's medicine, and as an emerging biometric trait. In this paper we adopt a consensus clustering approach for the unsupervised analysis of ECG-based biometric records. This type of analysis highlights natural groups within the population under investigation, which can be correlated with ground truth information in order to gain more insights about the data. Preliminary results are promising, as meaningful clusters are extracted from the population under analysis.
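A minimal evidence-accumulation sketch of consensus clustering: many base partitions are combined into a co-association matrix, which is then clustered. The data here is synthetic rather than heartbeat features, and ensemble sizes are illustrative.

# Evidence-accumulation consensus clustering: k-means ensemble ->
# co-association matrix -> hierarchical consensus partition.
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 2.0, 4.0)])

n = len(X)
coassoc = np.zeros((n, n))
for run in range(30):                                  # clustering ensemble
    k = int(rng.integers(5, 15))                       # random granularity
    labels = KMeans(n_clusters=k, n_init=5, random_state=run).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= 30.0

# Consensus partition: average-linkage on the co-association "distance".
d = squareform(1.0 - coassoc, checks=False)
consensus = fcluster(linkage(d, method="average"), t=3, criterion="maxclust")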
On the information-theoretic limits of graphical model selection for Gaussian time series. Hannak, G.; Jung, A.; and Goertz, N. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 516-520, Sep. 2014.
@InProceedings{6952142,\n  author = {G. Hannak and A. Jung and N. Goertz},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On the information-theoretic limits of graphical model selection for Gaussian time series},\n  year = {2014},\n  pages = {516-520},\n  abstract = {We consider the problem of inferring the conditional independence graph (CIG) of a multivariate stationary discrete-time Gaussian random process based on a finite length observation. Using information-theoretic methods, we derive a lower bound on the error probability of any learning scheme for the underlying process CIG. This bound, in turn, yields a minimum required sample-size which is necessary for any algorithm regardless of its computational complexity, to reliably select the true underlying CIG. Furthermore, by analysis of a simple selection scheme, we show that the information-theoretic limits can be achieved for a subclass of processes having sparse CIG. We do not assume a parametric model for the observed process, but require it to have a sufficiently smooth spectral density matrix (SDM).},\n  keywords = {computational complexity;Gaussian processes;graph theory;information theory;matrix algebra;time series;information-theoretic limits;graphical model selection;Gaussian time series;conditional independence graph;CIG;multivariate stationary discrete-time Gaussian random process;finite length observation;error probability;learning scheme;computational complexity;smooth spectral density matrix;SDM;Graphical models;Time series analysis;Covariance matrices;Reliability;Vectors;Indexes;Correlation;CIG;Fano-inequality;stationary time series},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924169.pdf},\n}\n\n
We consider the problem of inferring the conditional independence graph (CIG) of a multivariate stationary discrete-time Gaussian random process based on a finite length observation. Using information-theoretic methods, we derive a lower bound on the error probability of any learning scheme for the underlying process CIG. This bound, in turn, yields a minimum required sample size which is necessary for any algorithm, regardless of its computational complexity, to reliably select the true underlying CIG. Furthermore, by analysis of a simple selection scheme, we show that the information-theoretic limits can be achieved for a subclass of processes having sparse CIG. We do not assume a parametric model for the observed process, but require it to have a sufficiently smooth spectral density matrix (SDM).
A frequency method for blind separation of an anechoic mixture. Ouedraogo, W. S. B.; Nicolas, B.; Oudompheng, B.; Mars, J. I.; and Jutten, C. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 521-525, Sep. 2014.
@InProceedings{6952143,\n  author = {W. S. B. Ouedraogo and B. Nicolas and B. Oudompheng and J. I. Mars and C. Jutten},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A frequency method for blind separation of an anechoic mixture},\n  year = {2014},\n  pages = {521-525},\n  abstract = {This paper presents a new frequency method for blind separation of mixtures of scaled and delayed versions of sources. This kind of problem can occur in air and underwater acoustics. By assuming the mutual independence of the sources, we make use of the power spectral densities and the cross power spectral densities of mixed data to estimate the sources, the mixing coefficients, and the relative delays between a reference sensor and the other sensors. Simulations on synthetic data of sound radiated by a ship show the effectiveness of the proposed method.},\n  keywords = {acoustic signal processing;blind source separation;frequency-domain analysis;underwater sound;frequency method;blind separation;anechoic mixture;mutual independence;cross power spectral densities;underwater acoustics;air acoustics;Delays;Fourier transforms;Estimation;Time-frequency analysis;Noise;Equations;Frequency estimation;Blind source separation;Anechoic mixture;Relative delays;Power spectral density;Ship noise;Underwater acoustics},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923327.pdf},\n}\n\n
This paper presents a new frequency method for blind separation of mixtures of scaled and delayed versions of sources. This kind of problem can occur in air and underwater acoustics. By assuming the mutual independence of the sources, we make use of the power spectral densities and the cross power spectral densities of mixed data to estimate the sources, the mixing coefficients, and the relative delays between a reference sensor and the other sensors. Simulations on synthetic data of sound radiated by a ship show the effectiveness of the proposed method.
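One ingredient of such a method, estimating a relative delay from the phase slope of the cross power spectral density, can be sketched as follows; this is a generic estimator under idealized conditions, not the paper's full separation algorithm.

# Sketch: relative delay between a reference sensor and a second sensor
# from the phase slope of their cross power spectral density.
import numpy as np
from scipy.signal import csd

fs = 1000
rng = np.random.default_rng(6)
s = rng.standard_normal(10000)
delay = 7                                              # samples (illustrative)
x_ref = s + 0.05 * rng.standard_normal(len(s))
x2 = np.roll(s, delay) + 0.05 * rng.standard_normal(len(s))

f, Pxy = csd(x_ref, x2, fs=fs, nperseg=512)
phase = np.unwrap(np.angle(Pxy))
slope = np.polyfit(2 * np.pi * f[1:], phase[1:], 1)[0]  # d(phase)/d(omega) = -tau
print("estimated delay (samples):", -slope * fs)        # expect about 7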
Sparse representation and least squares-based classification in face recognition. Iliadis, M.; Spinoulas, L.; Berahas, A. S.; Wang, H.; and Katsaggelos, A. K. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 526-530, Sep. 2014.
@InProceedings{6952144,\n  author = {M. Iliadis and L. Spinoulas and A. S. Berahas and H. Wang and A. K. Katsaggelos},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse representation and least squares-based classification in face recognition},\n  year = {2014},\n  pages = {526-530},\n  abstract = {In this paper we present a novel approach to face recognition. We propose an adaptation and extension to the state-of-the-art methods in face recognition, such as sparse representation-based classification and its extensions. Effectively, our method combines the sparsity-based approaches with additional least-squares steps and exhibits robustness to outliers achieving significant performance improvement with little additional cost. This approach also mitigates the need for a large number of training images since it proves robust to varying number of training samples.},\n  keywords = {face recognition;image classification;image representation;least squares approximations;least squares-based classification;sparse representation;face recognition;Training;Face recognition;Databases;Face;Dictionaries;Vectors;Robustness;Face recognition;sparse representation;classification},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569927015.pdf},\n}\n\n
In this paper we present a novel approach to face recognition. We propose an adaptation and extension to the state-of-the-art methods in face recognition, such as sparse representation-based classification and its extensions. Effectively, our method combines the sparsity-based approaches with additional least-squares steps and exhibits robustness to outliers, achieving significant performance improvement with little additional cost. This approach also mitigates the need for a large number of training images, since it proves robust to a varying number of training samples.
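A minimal sketch of the sparse-representation-based classification baseline the paper builds on: code the test sample over the stacked training dictionary with an ℓ1 solver and pick the class with the smallest reconstruction residual. The data is synthetic, and the paper's additional least-squares steps are omitted.

# SRC baseline: l1-code the test sample over all training columns, then
# classify by per-class reconstruction residual.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n_classes, per_class, dim = 5, 10, 120
D = rng.standard_normal((dim, n_classes * per_class))   # columns = training faces
D /= np.linalg.norm(D, axis=0)
y = D[:, 23] + 0.01 * rng.standard_normal(dim)          # test sample from class 2

a = Lasso(alpha=0.01, max_iter=5000).fit(D, y).coef_
residuals = [np.linalg.norm(y - D[:, c*per_class:(c+1)*per_class]
                            @ a[c*per_class:(c+1)*per_class]) for c in range(n_classes)]
print("predicted class:", int(np.argmin(residuals)))    # expect 2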
Compression of microarray images using a binary tree decomposition. Matos, L. M. O.; Neves, A. J. R.; and Pinho, A. J. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 531-535, Sep. 2014.
@InProceedings{6952145,\n  author = {L. M. O. Matos and A. J. R. Neves and A. J. Pinho},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Compression of microarray images using a binary tree decomposition},\n  year = {2014},\n  pages = {531-535},\n  abstract = {This paper proposes a lossless compression method for microarray images, based on a hierarchical organization of the intensity levels followed by finite-context modeling. A similar approach was recently applied to medical images with success. The goal of this work was to further extend, adapt and evaluate this approach to the special case of microarray images. We performed simulations on seven different data sets (total of 254 images). On average, the proposed method attained ~ 9% better results when compared to the best compression standard (JPEG-LS).},\n  keywords = {data compression;image coding;lab-on-a-chip;medical image processing;trees (mathematics);DNA microarray imaging;lossless compression method;microarray images;hierarchical organization;intensity levels;finite-context modeling;medical images;JPEG-LS;binary tree decomposition;Image coding;Context;Decoding;Binary trees;Codecs;Standards organizations;Binary tree decomposition;microarray images;lossless compression},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925195.pdf},\n}\n\n
This paper proposes a lossless compression method for microarray images, based on a hierarchical organization of the intensity levels followed by finite-context modeling. A similar approach was recently applied to medical images with success. The goal of this work was to further extend, adapt and evaluate this approach to the special case of microarray images. We performed simulations on seven different data sets (total of 254 images). On average, the proposed method attained ~9% better results when compared to the best compression standard (JPEG-LS).
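A minimal sketch of the finite-context-modeling ingredient: an adaptive code-length estimate for one binary plane using a small causal pixel context. The 3-pixel context and Laplace smoothing are illustrative; the hierarchical intensity decomposition and the actual arithmetic coder are omitted.

# Adaptive finite-context model: accumulate -log2 p(bit | causal context)
# as an estimate of the coded size of a binary plane.
import numpy as np

def context_code_length(plane):
    h, w = plane.shape
    counts = np.ones((8, 2))                     # Laplace-smoothed counters
    bits = 0.0
    for i in range(1, h):
        for j in range(1, w):
            ctx = plane[i, j-1] << 2 | plane[i-1, j] << 1 | plane[i-1, j-1]
            p = counts[ctx, plane[i, j]] / counts[ctx].sum()
            bits += -np.log2(p)
            counts[ctx, plane[i, j]] += 1        # adaptive update
    return bits

rng = np.random.default_rng(8)
plane = (rng.random((64, 64)) < 0.2).astype(int)
print("bits/pixel:", context_code_length(plane) / plane.size)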
Efficient joint multiscale decomposition for color stereo image coding. Dhifallah, O.; Kaaniche, M.; and Benazza-Benyahia, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 536-540, Sep. 2014.
@InProceedings{6952146,\n  author = {O. Dhifallah and M. Kaaniche and A. Benazza-Benyahia},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient joint multiscale decomposition for color stereo image coding},\n  year = {2014},\n  pages = {536-540},\n  abstract = {With the recent advances in stereoscopic display technologies, the demand for developing efficient stereo image coding techniques has increased. While most of the existing approaches have been proposed and studied in the case of monochrome stereo images, we are interested in this paper in compressing color stereo data. More precisely, we design a multiscale decomposition, based on the concept of vector lifting scheme, that jointly exploits the cross-view and inter-color channels redundancies. Moreover, our decomposition is well adapted to the contents of these data. Experimental results performed on natural stereo images show the benefits which can be drawn from the proposed coding method.},\n  keywords = {data compression;image coding;image colour analysis;stereo image processing;three-dimensional displays;efficient joint multiscale decomposition;stereoscopic display technology;color stereo image coding techniques;monochrome stereo images;color stereo data compression;vector lifting scheme;inter-color channel redundancy;cross-view channel redundancy;natural stereo images;Image color analysis;Image coding;Encoding;Silicon;Stereo vision;Correlation;Redundancy;Color stereoscopic image;disparity estimation;compression;joint coding schemes;lifting scheme},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925317.pdf},\n}\n\n
With the recent advances in stereoscopic display technologies, the demand for developing efficient stereo image coding techniques has increased. While most of the existing approaches have been proposed and studied in the case of monochrome stereo images, in this paper we are interested in compressing color stereo data. More precisely, we design a multiscale decomposition, based on the concept of the vector lifting scheme, that jointly exploits the cross-view and inter-color channel redundancies. Moreover, our decomposition is well adapted to the contents of these data. Experimental results performed on natural stereo images show the benefits which can be drawn from the proposed coding method.
Coding mode decision algorithm for binary descriptor coding. Monteiro, P.; and Ascenso, J. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 541-545, Sep. 2014.
@InProceedings{6952147,\n  author = {P. Monteiro and J. Ascenso},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Coding mode decision algorithm for binary descriptor coding},\n  year = {2014},\n  pages = {541-545},\n  abstract = {In visual sensor networks, local feature descriptors can be computed at the sensing nodes, which work collaboratively on the data obtained to make an efficient visual analysis. In fact, with a minimal amount of computational effort, the detection and extraction of local features, such as binary descriptors, can provide a reliable and compact image representation. In this paper, it is proposed to extract and code binary descriptors to meet the energy and bandwidth constraints at each sensing node. The major contribution is a binary descriptor coding technique that exploits the correlation using two different coding modes: Intra, which exploits the correlation between the elements that compose a descriptor; and Inter, which exploits the correlation between descriptors of the same image. The experimental results show bitrate savings up to 35% without any impact in the performance efficiency of the image retrieval task.},\n  keywords = {feature extraction;image coding;image fusion;image representation;image retrieval;coding mode decision algorithm;visual sensor networks;local feature descriptors;sensing nodes;visual analysis;computational effort;local feature extraction;local feature detection;compact image representation;binary descriptor extraction;energy constraint;bandwidth constraint;sensing node;binary descriptor coding technique;coding mode;image retrieval task;Encoding;Visualization;Bit rate;Correlation;Image coding;Feature extraction;Sensors;Local features;feature coding;binary descriptors;mode decision;visual sensor networks},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569927389.pdf},\n}\n\n
In visual sensor networks, local feature descriptors can be computed at the sensing nodes, which work collaboratively on the data obtained to make an efficient visual analysis. In fact, with a minimal amount of computational effort, the detection and extraction of local features, such as binary descriptors, can provide a reliable and compact image representation. In this paper, it is proposed to extract and code binary descriptors to meet the energy and bandwidth constraints at each sensing node. The major contribution is a binary descriptor coding technique that exploits the correlation using two different coding modes: Intra, which exploits the correlation between the elements that compose a descriptor; and Inter, which exploits the correlation between descriptors of the same image. The experimental results show bitrate savings up to 35% without any impact in the performance efficiency of the image retrieval task.
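A toy sketch of the Inter idea: predict each binary descriptor from the previous one in the same image and code only the XOR residual. The residual's empirical entropy stands in for the coded rate here; the paper's actual mode decision and entropy coder are more elaborate.

# Inter-style coding of binary descriptors: XOR residuals against the
# previous descriptor are far more compressible than the raw bits.
import numpy as np

def entropy_bits(bits):
    p = np.clip(bits.mean(), 1e-9, 1 - 1e-9)     # P(bit = 1)
    return len(bits) * (-p*np.log2(p) - (1-p)*np.log2(1-p))

rng = np.random.default_rng(9)
base = rng.integers(0, 2, 256)
descs = [base.copy()]
for _ in range(99):                              # correlated descriptors:
    d = descs[-1].copy()                         # each differs from the
    flip = rng.integers(0, 256, 12); d[flip] ^= 1  # previous in a few bits
    descs.append(d)
descs = np.array(descs)

intra_rate = sum(entropy_bits(d) for d in descs)             # code each alone
residuals = descs[1:] ^ descs[:-1]                           # inter prediction
inter_rate = entropy_bits(descs[0]) + sum(entropy_bits(r) for r in residuals)
print(intra_rate, inter_rate)                                # inter is smaller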
Design of optimized contourlet filters for improved coding gain. Gehrke, T.; and Greiner, T. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 546-550, Sep. 2014.
@InProceedings{6952148,\n  author = {T. Gehrke and T. Greiner},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Design of optimized contourlet filters for improved coding gain},\n  year = {2014},\n  pages = {546-550},\n  abstract = {The separable wavelet transform has limited directional sensitivity and is suboptimal for compression of textured images. A finer directional resolution and better coding results can be achieved by contourlet transform. So far, directional filters based on design criteria that are unspecific to image compression were used for contourlet transform. We propose directional filters that are optimized specifically for image coding. Thereto, a filter design method that is based on maximization of coding gain was developed. Directional filters were designed for all images of two standard test image databases and compared experimentally to standard filters. In most cases the newly designed filters performed better than standard filters.},\n  keywords = {data compression;filtering theory;image coding;image resolution;image texture;wavelet transforms;optimized contourlet filter design;improved image coding gain;separable wavelet transform;limited directional sensitivity;textured image compression;directional resolution;contourlet transform;directional filters;design criteria;filter design method;standard test image databases;Image coding;Standards;Wavelet transforms;PSNR;Image reconstruction;Optimization;Contourlets;image coding;filter design},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925579.pdf},\n}\n\n
The separable wavelet transform has limited directional sensitivity and is suboptimal for compression of textured images. A finer directional resolution and better coding results can be achieved by the contourlet transform. So far, directional filters based on design criteria that are unspecific to image compression were used for the contourlet transform. We propose directional filters that are optimized specifically for image coding. To this end, a filter design method based on maximization of the coding gain was developed. Directional filters were designed for all images of two standard test image databases and compared experimentally to standard filters. In most cases the newly designed filters performed better than the standard filters.
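The coding gain being maximized is, in the standard transform-coding sense, the ratio of the arithmetic to the geometric mean of the subband variances; a short sketch of that figure, assuming equally sized orthonormal subbands, follows.

# Standard transform coding gain: arithmetic mean of subband variances
# over their geometric mean, in dB (equal-size subbands assumed).
import numpy as np

def coding_gain_db(subbands):
    var = np.array([np.var(s) for s in subbands])
    return 10 * np.log10(var.mean() / np.exp(np.mean(np.log(var))))

rng = np.random.default_rng(10)
subbands = [rng.standard_normal(1000) * s for s in (4.0, 1.0, 0.5, 0.25)]
print("coding gain:", coding_gain_db(subbands), "dB")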
Image quality assessment based on detail differences. Di Claudio, E. D.; and Jacovitti, G. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 551-555, Sep. 2014.
@InProceedings{6952149,\n  author = {E. D. {Di Claudio} and G. Jacovitti},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Image quality assessment based on detail differences},\n  year = {2014},\n  pages = {551-555},\n  abstract = {This paper presents a novel Full Reference method for image quality assessment based on two indices measuring respectively detail loss and spurious detail addition. These indices define a two dimensional (2D) state in a Virtual Cognitive State (VCS) space. The quality estimation is obtained as a 2D function of the VCS, empirically determined via polynomial fitting of DMOS values of training images. The method provides at the same time highly accurate DMOS estimates, and a quantitative account of the causes of quality degradation.},\n  keywords = {estimation theory;image processing;polynomial approximation;image quality assessment;detail differences;full reference method;two dimensional state;virtual cognitive state space;VCS space;quality estimation;2D function;polynomial fitting;DMOS values;training images;DMOS estimates;Image quality;Fitting;Transform coding;Loss measurement;Noise;Databases;Image quality assessment;gradient tensor;virtual cognitive space;VICOM;detail analysis},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925553.pdf},\n}\n\n
This paper presents a novel Full Reference method for image quality assessment based on two indices measuring, respectively, detail loss and spurious detail addition. These indices define a two-dimensional (2D) state in a Virtual Cognitive State (VCS) space. The quality estimation is obtained as a 2D function of the VCS, empirically determined via polynomial fitting of DMOS values of training images. The method simultaneously provides highly accurate DMOS estimates and a quantitative account of the causes of quality degradation.
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Joint SIC and multi-relay selection algorithms for cooperative DS-CDMA systems.\n \n \n \n \n\n\n \n Gu, J.; and de Lamare , R. C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 556-560, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952150,\n  author = {J. Gu and R. C. {de Lamare}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Joint sic and multi-relay selection algorithms for cooperative DS-CDMA systems},\n  year = {2014},\n  pages = {556-560},\n  abstract = {In this work, we propose a cross-layer design strategy based on a joint successive interference cancellation (SIC) detection technique and a multi-relay selection algorithm for the uplink of cooperative direct-sequence code-division multiple access (DS-CDMA) systems. We devise a low-cost greedy list-based SIC (GL-SIC) strategy with RAKE receivers as the front-end that can approach the maximum likelihood detector performance. We also present a low-complexity multi-relay selection algorithm based on greedy techniques that can approach the performance of an exhaustive search. Simulations show an excellent bit error rate performance of the proposed detection and relay selection algorithms as compared to existing techniques.},\n  keywords = {code division multiple access;cooperative communication;error statistics;greedy algorithms;interference suppression;maximum likelihood detection;radio receivers;relay networks (telecommunication);spread spectrum communication;bit error rate;exhaustive search;greedy techniques;maximum likelihood detector;RAKE receivers;GL-SIC;greedy list-based SIC;cross-layer design;cooperative DS-CDMA systems;direct-sequence code-division multiple access systems;multirelay selection algorithms;joint successive interference cancellation detection technique;Relays;Interference;Signal to noise ratio;Vectors;Detectors;Silicon carbide;Multiaccess communication;DS-CDMA;cooperative systems;relay selection;greedy algorithms;SIC detection},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569918983.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we propose a cross-layer design strategy based on a joint successive interference cancellation (SIC) detection technique and a multi-relay selection algorithm for the uplink of cooperative direct-sequence code-division multiple access (DS-CDMA) systems. We devise a low-cost greedy list-based SIC (GL-SIC) strategy with RAKE receivers as the front-end that can approach the maximum likelihood detector performance. We also present a low-complexity multi-relay selection algorithm based on greedy techniques that can approach the performance of an exhaustive search. Simulations show an excellent bit error rate performance of the proposed detection and relay selection algorithms as compared to existing techniques.\n
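For readers unfamiliar with SIC, the sketch below shows the generic principle on a linear model y = Hs + n: detect the strongest stream, slice it to the nearest constellation point, subtract its contribution, and repeat. This is a plain norm-ordered SIC for illustration only; the paper's greedy list-based GL-SIC with RAKE front-ends is more elaborate.

```python
import numpy as np

def sic_detect(y, H, constellation):
    """Generic successive interference cancellation for y = H s + n:
    matched-filter the strongest remaining column, slice, and cancel."""
    y = np.asarray(y, dtype=complex).copy()
    H = np.asarray(H, dtype=complex)
    constellation = np.asarray(constellation, dtype=complex)
    K = H.shape[1]
    s_hat = np.zeros(K, dtype=complex)
    remaining = list(range(K))
    for _ in range(K):
        k = max(remaining, key=lambda j: np.linalg.norm(H[:, j]))
        z = H[:, k].conj() @ y / np.linalg.norm(H[:, k])**2   # matched filter
        s_hat[k] = constellation[np.argmin(np.abs(constellation - z))]
        y -= H[:, k] * s_hat[k]                               # cancel stream
        remaining.remove(k)
    return s_hat
```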
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Enhancing the MIMO channel capacity in Manhattan-like scenarios.\n \n \n \n \n\n\n \n Sousa, I.; Queluz, M. P.; and Rodrigues, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 561-565, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"EnhancingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952151,\n  author = {I. Sousa and M. P. Queluz and A. Rodrigues},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Enhancing the MIMO channel capacity in Manhattan-like scenarios},\n  year = {2014},\n  pages = {561-565},\n  abstract = {In this paper the channel capacity of Multiple Input Multiple Output (MIMO) wireless systems within a Manhattan-like scenario is studied. Three Base Stations (BSs) placement models are proposed in this work, so as to enhance the channel capacity of the wireless system. The evaluation of the proposed BSs arrangements is performed using a simulator with realistic underlying models (test scenario, radio propagation and mobility models). Simulation results show that all the proposed placement models have a superior performance when compared with the traditional BSs placement model. In particular, one of the proposed BSs dispositions requires the use of less BSs, which means greener communications and less hardware costs.},\n  keywords = {channel capacity;MIMO communication;MIMO channel capacity;Manhattan-like scenarios;multiple input multiple output wireless systems;base station placement models;BS placement model;MIMO;Channel capacity;Wireless communication;Arrays;Biological system modeling;Antenna arrays;Buildings;Channel Capacity;Manhattan-like Scenarios;Radio Resource Management;Wireless Networks},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569951083.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, the channel capacity of Multiple Input Multiple Output (MIMO) wireless systems within a Manhattan-like scenario is studied. Three Base Station (BS) placement models are proposed so as to enhance the channel capacity of the wireless system. The proposed BS arrangements are evaluated using a simulator with realistic underlying models (test scenario, radio propagation, and mobility models). Simulation results show that all the proposed placement models outperform the traditional BS placement model. In particular, one of the proposed BS dispositions requires fewer BSs, which means greener communications and lower hardware costs.\n
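The figure of merit in this study is the MIMO channel capacity. As a reference point, the sketch below Monte-Carlo-averages the classical log-det capacity of an i.i.d. Rayleigh channel with equal power allocation; the paper instead obtains the channel from a realistic ray-based simulator, so this shows only the surrounding formula.

```python
import numpy as np

def mimo_capacity_bps_hz(nt=4, nr=4, snr_db=10.0, trials=1000, seed=0):
    """Monte-Carlo ergodic capacity of an i.i.d. Rayleigh MIMO channel:
    C = E[log2 det(I + (SNR/nt) H H^H)], in bit/s/Hz."""
    rng = np.random.default_rng(seed)
    snr = 10.0**(snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2.0)
        total += np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T)).real
    return total / trials

print(mimo_capacity_bps_hz())
```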
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Channel simulation with large-scale time evolution in irregular cellular networks.\n \n \n \n \n\n\n \n Kayili, L.; and Sousa, E.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 566-570, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ChannelPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952152,\n  author = {L. Kayili and E. Sousa},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Channel simulation with large-scale time evolution in irregular cellular networks},\n  year = {2014},\n  pages = {566-570},\n  abstract = {We consider a cellular network where base stations with widely different power capabilities (power subclasses) are deployed in a highly inhomogeneous or irregular pattern-referred to in this work as an irregular cellular network. A simulation framework with slow scale time variation appropriate for irregular networks is proposed. A relevant resource allocation framework as well as shadowing and path loss models are discussed. Finally, the time evolution methodology is detailed. It is believed that the proposed simulation framework will be important in the evaluation of slowly adaptive algorithms such as those studied as part of 3GPP LTE Self Organizing Networks (SON).},\n  keywords = {3G mobile communication;cellular radio;resource allocation;channel simulation;large scale time evolution;irregular cellular networks;resource allocation framework;time evolution methodology;3GPP LTE self organizing networks;Shadow mapping;Adaptation models;Resource management;Long Term Evolution;Fading;Heuristic algorithms;Wireless communication;autonomous cellular networks;irregular cellular networks;wireless channel simulation;self organizing networks (SON);wireless network dynamics},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569951089.pdf},\n}\n\n
\n
\n\n\n
\n We consider a cellular network where base stations with widely different power capabilities (power subclasses) are deployed in a highly inhomogeneous or irregular pattern, referred to in this work as an irregular cellular network. A simulation framework with slow-scale time variation appropriate for irregular networks is proposed. A relevant resource allocation framework as well as shadowing and path loss models are discussed. Finally, the time evolution methodology is detailed. The proposed simulation framework is expected to be important in the evaluation of slowly adaptive algorithms such as those studied as part of 3GPP LTE Self Organizing Networks (SON).\n
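Slow-scale time evolution of shadowing is commonly modeled as a first-order autoregressive (Gudmundson-type) process whose correlation decays with distance traveled. The sketch below is one plausible reading of such large-scale evolution; the decorrelation distance and standard deviation are illustrative, not the paper's values.

```python
import numpy as np

def shadowing_track(n_steps, sigma_db=8.0, d_corr=50.0, v=1.0, dt=1.0, seed=0):
    """Slowly evolving log-normal shadowing (in dB) along a trajectory:
    AR(1) process with step correlation rho = exp(-v*dt/d_corr)."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-v * dt / d_corr)
    s = np.empty(n_steps)
    s[0] = sigma_db * rng.standard_normal()
    for k in range(1, n_steps):
        s[k] = rho * s[k - 1] + sigma_db * np.sqrt(1.0 - rho**2) * rng.standard_normal()
    return s
```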
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Advanced interference reduction in NC-OFDM based Cognitive Radio with Cancellation Carriers.\n \n \n \n \n\n\n \n Kryszkiewicz, P.; and Bogucka, H.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 571-575, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AdvancedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952173,\n  author = {P. Kryszkiewicz and H. Bogucka},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Advanced interference reduction in NC-OFDM based Cognitive Radio with Cancellation Carriers},\n  year = {2014},\n  pages = {571-575},\n  abstract = {Reduction of the out-of-band (OOB) emission is essential for Cognitive Radio (CR) systems to enable coexistence with licensed (primary) systems operating in the adjacent frequency bands. This paper proposes an algorithm for the Non Contiguous Orthogonal Frequency Division Multiplexing (NC-OFDM)-based CR, to reduce the interference caused by both OOB radiation and by non-ideal frequency selectivity of a primary user (PU) receiver. It is based on a concept to use a set of subcarriers called Cancellation Carriers (CCs). By being aware of the PU's carrier frequency, the observed interference power can by decreased by about 10 dB in comparison with the standard OOB-power minimizing algorithms.},\n  keywords = {cognitive radio;interference suppression;OFDM modulation;interference power;primary user receiver;nonideal frequency selectivity;OOB radiation;noncontiguous orthogonal frequency division multiplexing;adjacent frequency bands;OOB emission reduction;out-of-band emission reduction;cancellation carriers;cognitive radio;NC-OFDM;interference reduction;Interference;OFDM;Receivers;Vectors;Standards;Minimization;Cognitive radio;cognitive radio;enhanced OFDM;Out-of-Band radiation;cancellation carriers;NC-OFDM},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921575.pdf},\n}\n\n
\n
\n\n\n
\n Reduction of the out-of-band (OOB) emission is essential for Cognitive Radio (CR) systems to enable coexistence with licensed (primary) systems operating in adjacent frequency bands. This paper proposes an algorithm for Non-Contiguous Orthogonal Frequency Division Multiplexing (NC-OFDM)-based CR that reduces the interference caused both by OOB radiation and by the non-ideal frequency selectivity of the primary user (PU) receiver. It is based on the concept of using a set of subcarriers called Cancellation Carriers (CCs). By being aware of the PU's carrier frequency, the observed interference power can be decreased by about 10 dB in comparison with standard OOB-power minimizing algorithms.\n
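The cancellation-carrier idea admits a compact least-squares formulation: choose the CC amplitudes so that the composite OFDM spectrum, sampled at optimization frequencies inside the protected band, is driven toward zero. The sketch below uses the rectangular-pulse (Dirichlet-kernel) spectrum model of textbook active interference cancellation; the paper's algorithm additionally accounts for the PU receiver's frequency selectivity, which is not modeled here.

```python
import numpy as np

def cc_weights(data_idx, x_data, cc_idx, opt_freqs, n_fft=256):
    """Least-squares cancellation-carrier amplitudes that null the OFDM
    spectrum at the given normalized optimization frequencies (textbook
    AIC formulation; names and spectrum model are assumptions)."""
    n = np.arange(n_fft)

    def kernel(k, f):
        # Spectrum of subcarrier k of a rectangular OFDM symbol at freq f
        return np.exp(2j * np.pi * (k / n_fft - f) * n).sum() / n_fft

    A = np.array([[kernel(k, f) for k in cc_idx] for f in opt_freqs])
    b = -np.array([sum(x * kernel(k, f) for k, x in zip(data_idx, x_data))
                   for f in opt_freqs])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w  # complex amplitudes to load onto the cancellation carriers
```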
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A multi-threshold feedback scheme for cognitive radio networks based on opportunistic beamforming.\n \n \n \n \n\n\n \n Massaoudi, A.; Sellami, N.; and Siala, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 576-580, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952174,\n  author = {A. Massaoudi and N. Sellami and M. Siala},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A multi-threshold feedback scheme for cognitive radio networks based on opportunistic beamforming},\n  year = {2014},\n  pages = {576-580},\n  abstract = {Cognitive radio is a promising technique for efficient spectrum utilization in wireless systems. In multi-user Multiple- Input Multiple-Output (MIMO) system, a large amount of feedback information has to be used to achieve multi-user diversity. In this paper, in order to reduce the feedback amount and hence the wasted energy, we propose a novel scheduling scheme of secondary users (SUs) for an underlay cognitive radio network. Our scheme is based on opportunistic beamforming and employs multiple feedback thresholds. The lowest threshold is chosen to insure a predefined allowed scheduling outage probability. A scheduling outage event occurs if at least one transmit beam has no feedback information. The other thresholds are chosen in order to reduce the number of SUs feeding back their maximum signal to interference plus noise ratio (SINR) (and hence the wasted energy) and the delay due to the number of attempts. We show via simulations that a significant gain in terms of energy is obtained at the price of a reasonable delay.},\n  keywords = {cognitive radio;MIMO communication;probability;radio networks;radio spectrum management;radiofrequency interference;multithreshold feedback scheme;opportunistic beamforming;spectrum utilization;wireless systems;multiuser multiple-input multiple-output system;MIMO system;multiuser diversity;secondary users;cognitive radio network;multiple feedback thresholds;outage probability;SU;signal to interference plus noise ratio;SINR;Interference;Signal to noise ratio;Cognitive radio;Array signal processing;Transmitting antennas;Receiving antennas;Scheduling;Cognitive radio;opportunistic beamforming;secondary users scheduling;multi-threshold feedback scheme},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925441.pdf},\n}\n\n
\n
\n\n\n
\n Cognitive radio is a promising technique for efficient spectrum utilization in wireless systems. In a multi-user Multiple-Input Multiple-Output (MIMO) system, a large amount of feedback information has to be used to achieve multi-user diversity. In this paper, in order to reduce the feedback amount and hence the wasted energy, we propose a novel scheduling scheme of secondary users (SUs) for an underlay cognitive radio network. Our scheme is based on opportunistic beamforming and employs multiple feedback thresholds. The lowest threshold is chosen to ensure a predefined allowed scheduling outage probability. A scheduling outage event occurs if at least one transmit beam has no feedback information. The other thresholds are chosen to reduce the number of SUs feeding back their maximum signal to interference plus noise ratio (SINR), and hence the wasted energy, as well as the delay due to the number of attempts. We show via simulations that a significant gain in terms of energy is obtained at the price of a reasonable delay.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-power active interference cancellation for OFDM spectrum sculpting.\n \n \n \n \n\n\n \n Schmidt, J. F.; Romero, D.; and López-Valcarce, R.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 581-585, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Low-powerPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952175,\n  author = {J. F. Schmidt and D. Romero and R. López-Valcarce},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Low-power active interference cancellation for OFDM spectrum sculpting},\n  year = {2014},\n  pages = {581-585},\n  abstract = {We present a low-power design for Active Interference Cancellation (AIC) sculpting of the OFDM spectrum, based on sparse design concepts. Optimal AIC designs compute cancellation weights based on contributions from all data subcarriers. Thus, as the number of subcarriers grows, power consumption becomes a concern, and suboptimal solutions that avoid involving all subcarriers are of interest. In this context, we present novel sparse AIC designs based on a zero-norm minimization of the matrix defining the cancellation weights. These designs drastically reduce the number of operations per symbol, and thus the power consumption, while allowing to tune the loss with respect the optimal design. They can be efficiently obtained and significantly outperform usual thresholding or sparsity-inducing ℓ1-norm minimization approaches.},\n  keywords = {interference suppression;minimisation;OFDM modulation;low-power active interference cancellation;OFDM spectrum sculpting;cancellation weights;data subcarriers;power consumption;suboptimal solutions;sparse AIC designs;zero-norm minimization;OFDM;Minimization;Power demand;Sparse matrices;Interference cancellation;Optimized production technology;Algorithm design and analysis},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569912103.pdf},\n}\n\n
\n
\n\n\n
\n We present a low-power design for Active Interference Cancellation (AIC) sculpting of the OFDM spectrum, based on sparse design concepts. Optimal AIC designs compute cancellation weights based on contributions from all data subcarriers. Thus, as the number of subcarriers grows, power consumption becomes a concern, and suboptimal solutions that avoid involving all subcarriers are of interest. In this context, we present novel sparse AIC designs based on a zero-norm minimization of the matrix defining the cancellation weights. These designs drastically reduce the number of operations per symbol, and thus the power consumption, while allowing the loss with respect to the optimal design to be tuned. They can be obtained efficiently and significantly outperform the usual thresholding or sparsity-inducing ℓ1-norm minimization approaches.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Characterization of SDR/CR front-ends for improved digital signal processing algorithms.\n \n \n \n \n\n\n \n Ribeiro, D. C.; Cruz, P. M.; and Carvalho, N. B.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 586-590, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"CharacterizationPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952176,\n  author = {D. C. Ribeiro and P. M. Cruz and N. B. Carvalho},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Characterization of SDR/CR front-ends for improved digital signal processing algorithms},\n  year = {2014},\n  pages = {586-590},\n  abstract = {This paper will demonstrate the importance of performing unified mixed-signal measurement and characterization procedures. It will be shown that a joint analog and digital analysis has a strong impact on future (radio-frequency) RF components and devices. This is mostly due to the fact that today's circuits and systems are evolving in the way of integration into a single module, which makes the separation of analog and digital analysis impossible to be done. Some details about mixed-signal instrumentation are introduced by showing representative laboratory measurement arrangements. They allow to obtain correlated information of analog and digital portions of mixed-signal systems, which are essential to retrieve the commonly named transfer functions. This information will make possible to produce better designs of the entire radio front-end, as well as, the implementation of optimized digital signal processing (DSP) algorithms to compensate the analog impairments.},\n  keywords = {signal processing;transfer functions;digital signal processing algorithm;SDR-CR front-ends;unified mixed-signal measurement;joint analog and digital analysis;RF components;mixed-signal instrumentation;transfer functions;Radio frequency;Instruments;System-on-chip;Digital signal processing;Radio transmitters;Receivers;Frequency measurement;Digital signal processing;linear measurements;mixed-signal instrumentation;nonlinear characterization},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925543.pdf},\n}\n\n
\n
\n\n\n
\n This paper demonstrates the importance of performing unified mixed-signal measurement and characterization procedures. It is shown that a joint analog and digital analysis has a strong impact on future radio-frequency (RF) components and devices. This is mostly because today's circuits and systems are evolving toward integration into a single module, which makes a separate analog and digital analysis impossible. Some details about mixed-signal instrumentation are introduced by showing representative laboratory measurement arrangements. These make it possible to obtain correlated information on the analog and digital portions of mixed-signal systems, which is essential to retrieve the commonly named transfer functions. This information makes it possible to produce better designs of the entire radio front-end, as well as to implement optimized digital signal processing (DSP) algorithms that compensate the analog impairments.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n DSP-based suppression of spurious emissions at RX band in carrier aggregation FDD transceivers.\n \n \n \n \n\n\n \n Kiayani, A.; Abdelaziz, M.; Anttila, L.; Lehtinen, V.; and Valkama, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 591-595, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"DSP-basedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952177,\n  author = {A. Kiayani and M. Abdelaziz and L. Anttila and V. Lehtinen and M. Valkama},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {DSP-based suppression of spurious emissions at RX band in carrier aggregation FDD transceivers},\n  year = {2014},\n  pages = {591-595},\n  abstract = {In frequency division duplex transceivers employing non-contiguous carrier aggregation (CA) transmission, achieving sufficient isolation between transmit and receive chains using radio frequency filtering alone is increasingly difficult. Particularly challenging problem in this context is spurious intermodulation (IM) components due to nonlinear power amplifier (PA), which may easily overlap the receiver band. With realistic duplex filters, the residual spurious IM at RX band can be several dBs stronger than the thermal noise floor, leading to own receiver desensitization. In this paper, we carry out detailed signal modeling of spurious emissions due to wideband PAs at the third-order IM band. Stemming from this modeling, and using the known transmit data, we present an efficient nonlinear digital identification and cancellation technique to suppress the unwanted IM components at RX band. The proposed technique is verified with computer simulations, showing excellent calibration properties, hence relaxing filtering and duplexing distance requirements in spectrally-agile CA transceivers.},\n  keywords = {digital signal processing chips;filtering theory;frequency division multiplexing;intermodulation;radio transceivers;radiofrequency integrated circuits;radiofrequency power amplifiers;wideband amplifiers;DSP-based suppression;spurious emission suppression;RX band;carrier aggregation FDD transceivers;frequency division duplex transceivers;noncontiguous carrier aggregation transmission;CA transmission;receive chains;transmit chains;radio frequency filtering;spurious intermodulation components;nonlinear power amplifier;residual spurious IM;realistic duplex filters;receiver band;thermal noise floor;receiver desensitization;signal modeling;wideband PAs;third-order IM band;nonlinear digital identification;digital cancellation technique;unwanted IM component suppression;computer simulations;relaxing filtering;duplexing distance;spectrally-agile CA transceivers;calibration properties;Transceivers;Abstracts;Cancellation;carrier aggregation;duplexer isolation;frequency division duplexing;intermodulation;LTE-Advanced;power amplifier;spurious emissions},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569914303.pdf},\n}\n\n
\n
\n\n\n
\n In frequency division duplex transceivers employing non-contiguous carrier aggregation (CA) transmission, achieving sufficient isolation between transmit and receive chains using radio frequency filtering alone is increasingly difficult. A particularly challenging problem in this context is spurious intermodulation (IM) components due to the nonlinear power amplifier (PA), which may easily overlap the receiver band. With realistic duplex filters, the residual spurious IM at the RX band can be several dB stronger than the thermal noise floor, leading to desensitization of the transceiver's own receiver. In this paper, we carry out detailed signal modeling of spurious emissions due to wideband PAs at the third-order IM band. Stemming from this modeling, and using the known transmit data, we present an efficient nonlinear digital identification and cancellation technique to suppress the unwanted IM components at the RX band. The proposed technique is verified with computer simulations, showing excellent calibration properties and hence relaxing the filtering and duplexing distance requirements in spectrally-agile CA transceivers.\n
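The identify-and-cancel idea can be illustrated with a memoryless third-order PA model: regenerate the IM3 basis x|x|² from the known transmit baseband, fit its complex gain by least squares, and subtract. The single-band, single-tap sketch below omits the memory effects, filtering, and non-contiguous-carrier bookkeeping that the actual technique must handle.

```python
import numpy as np

def cancel_im3(x, y_obs):
    """Estimate and subtract third-order intermodulation regenerated from
    the known transmit baseband x. The IM3 basis is x*|x|^2; its complex
    gain is fitted by least squares to the observed signal y_obs."""
    basis = x * np.abs(x)**2
    a3 = (basis.conj() @ y_obs) / (basis.conj() @ basis)  # one-tap LS fit
    return y_obs - a3 * basis                             # cleaned signal
```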
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Wideband spectrum sensing for Cognitive Radio.\n \n \n \n \n\n\n \n Vieira, J. M. N.; Tomé, A. M.; and Malafaia, D.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 596-600, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"WidebandPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952178,\n  author = {J. M. N. Vieira and A. M. Tomé and D. Malafaia},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Wideband spectrum sensing for Cognitive Radio},\n  year = {2014},\n  pages = {596-600},\n  abstract = {In this work we propose a wideband spectrum sensing system based on hybrid filter banks. The polyphase implementation of the digital counterpart of the filter bank can be modified to include a parallelized version of discrete Fourier transform algorithm (FFT) avoiding this way any sampling rate expanders. In this work we show how to incorporate the FFT block in the structure in order to estimate the wideband frequency contents of the signal. The proposed structure is particularly suitable for FPGA based implementations.},\n  keywords = {channel bank filters;cognitive radio;discrete Fourier transforms;signal detection;wideband spectrum sensing system;hybrid filter banks;discrete Fourier transform algorithm;FFT;FPGA based implementations;cognitive radio;Sensors;Cognitive radio;Field programmable gate arrays;Filter banks;Real-time systems;Wideband;Hybrid Filter Banks;FFT;Spectrum Sensing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924297.pdf},\n}\n\n
\n
\n\n\n
\n In this work we propose a wideband spectrum sensing system based on hybrid filter banks. The polyphase implementation of the digital counterpart of the filter bank can be modified to include a parallelized discrete Fourier transform (FFT), thereby avoiding any sampling-rate expanders. We show how to incorporate the FFT block in the structure in order to estimate the wideband frequency contents of the signal. The proposed structure is particularly suitable for FPGA-based implementations.\n
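A maximally decimated polyphase DFT filter bank makes the "FFT inside the structure" concrete: the prototype filter is split into M polyphase branches, each branch filters one input phase at the low rate, and a DFT across the branch outputs separates the M channels. The sketch below is a textbook digital channelizer (commutator ordering and phase conventions simplified); the paper's hybrid analog/digital bank adds an analog front-end stage not modeled here.

```python
import numpy as np

def polyphase_channelizer(x, h, M):
    """Textbook maximally decimated M-channel DFT filter bank:
    branch m filters input phase x[m::M] with polyphase filter h[m::M];
    an (I)FFT across the branches separates the channels, so no
    sample-rate expanders appear anywhere in the structure."""
    pad = lambda v: np.concatenate([np.asarray(v), np.zeros((-len(v)) % M)])
    hp = pad(h).reshape(-1, M).T          # hp[m] = h[m::M]
    xp = pad(x).reshape(-1, M).T          # xp[m] = x[m::M]
    branches = np.array([np.convolve(xp[m], hp[m]) for m in range(M)])
    return np.fft.ifft(branches, axis=0) * M   # row k = channel k output
```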
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Recent advances in software-defined radars: Chirped impulses.\n \n \n \n \n\n\n \n Muñoz-Ferreras, J. -.; Arnedo, I.; Lujambio, A.; Chudzik, M.; Laso, M. -. G.; Gómez-García, R.; and Madanayake, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 601-605, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"RecentPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952179,\n  author = {J. -. Muñoz-Ferreras and I. Arnedo and A. Lujambio and M. Chudzik and M. -. G. Laso and R. Gómez-García and A. Madanayake},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Recent advances in software-defined radars: Chirped impulses},\n  year = {2014},\n  pages = {601-605},\n  abstract = {The software-defined radio (SDR) paradigm can be applied to radars. Novel radio-frequency (RF) chains and architectures can lead to enhanced radar schemes. After a brief review of SDR-based schemes, this work concentrates on the relevant topic of impulse-radio ultra-wideband (IR-UWB) radars. By emitting extremely-narrow impulses in time domain, these systems can achieve a great range resolution. However, one drawback is their difficulty to control their narrow waveform. On the other hand, because of its many advantages, the chirped waveform has been extensively used in radars and has become the standard employed signal. Here, for the first time, the chirped waveform is exploited in SDR-inspired IR-UWB radars, thus bringing together the benefits of both worlds. The key element in this radar architecture is a passive device shaped by smoothly-chirped coupled lines (SCCL) to produce the chirped signal. Through a developed circuit, very-narrow chirped pulses have been generated and measured.},\n  keywords = {chirp modulation;cognitive radio;software radio;ultra wideband communication;ultra wideband radar;smoothly chirped coupled lines;radar architecture;IR-UWB radars;chirped waveform;impulse radio ultrawideband radars;radiofrequency chains;software defined radio;chirped impulses;software defined radars;Chirp;Receivers;Generators;Bandwidth;Time-domain analysis;Doppler radar;Chirped waveform;cognitive radars;impulse-radio ultra-wideband radar;smoothly-chirped coupled lines;software defined radio (SDR)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569907155.pdf},\n}\n\n
\n
\n\n\n
\n The software-defined radio (SDR) paradigm can be applied to radars. Novel radio-frequency (RF) chains and architectures can lead to enhanced radar schemes. After a brief review of SDR-based schemes, this work concentrates on the relevant topic of impulse-radio ultra-wideband (IR-UWB) radars. By emitting extremely narrow impulses in the time domain, these systems can achieve a fine range resolution. However, one drawback is the difficulty of controlling their narrow waveform. On the other hand, because of its many advantages, the chirped waveform has been used extensively in radars and has become the standard signal. Here, for the first time, the chirped waveform is exploited in SDR-inspired IR-UWB radars, thus bringing together the benefits of both worlds. The key element in this radar architecture is a passive device shaped by smoothly-chirped coupled lines (SCCL) to produce the chirped signal. Very narrow chirped pulses have been generated and measured with a developed circuit.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Audio concept classification with Hierarchical Deep Neural Networks.\n \n \n \n \n\n\n \n Ravanelli, M.; Elizalde, B.; Ni, K.; and Friedland, G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 606-610, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AudioPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952180,\n  author = {M. Ravanelli and B. Elizalde and K. Ni and G. Friedland},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Audio concept classification with Hierarchical Deep Neural Networks},\n  year = {2014},\n  pages = {606-610},\n  abstract = {Audio-based multimedia retrieval tasks may identify semantic information in audio streams, i.e., audio concepts (such as music, laughter, or a revving engine). Conventional Gaussian-Mixture-Models have had some success in classifying a reduced set of audio concepts. However, multi-class classification can benefit from context window analysis and the discriminating power of deeper architectures. Although deep learning has shown promise in various applications such as speech and object recognition, it has not yet met the expectations for other fields such as audio concept classification. This paper explores, for the first time, the potential of deep learning in classifying audio concepts on User-Generated Content videos. The proposed system is comprised of two cascaded neural networks in a hierarchical configuration to analyze the short- and long-term context information. Our system outperforms a GMM approach by a relative 54%, a Neural Network by 33%, and a Deep Neural Network by 12% on the TRECVID-MED database.},\n  keywords = {audio signal processing;audio streaming;content-based retrieval;Gaussian processes;learning (artificial intelligence);mixture models;multimedia databases;neural nets;signal classification;TRECVID-MED database;context information;cascaded neural networks;user-generated content videos;object recognition;speech recognition;context window analysis;multiclass classification;GMM;Gaussian mixture models;audio streams;semantic information;audio-based multimedia retrieval;hierarchical deep neural networks;audio concept classification;Artificial neural networks;Context;Videos;Neurons;Speech;Speech recognition;deep neural networks;audio concepts classification;TRECVID},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922435.pdf},\n}\n\n
\n
\n\n\n
\n Audio-based multimedia retrieval tasks may identify semantic information in audio streams, i.e., audio concepts (such as music, laughter, or a revving engine). Conventional Gaussian Mixture Models have had some success in classifying a reduced set of audio concepts. However, multi-class classification can benefit from context window analysis and the discriminating power of deeper architectures. Although deep learning has shown promise in applications such as speech and object recognition, it has not yet met expectations in other fields such as audio concept classification. This paper explores, for the first time, the potential of deep learning in classifying audio concepts on User-Generated Content videos. The proposed system comprises two cascaded neural networks in a hierarchical configuration to analyze the short- and long-term context information. Our system outperforms a GMM approach by a relative 54%, a Neural Network by 33%, and a Deep Neural Network by 12% on the TRECVID-MED database.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Unsupervised learning and refinement of rhythmic patterns for beat and downbeat tracking.\n \n \n \n \n\n\n \n Krebs, F.; Korzeniowski, F.; Grachten, M.; and Widmer, G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 611-615, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"UnsupervisedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952181,\n  author = {F. Krebs and F. Korzeniowski and M. Grachten and G. Widmer},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Unsupervised learning and refinement of rhythmic patterns for beat and downbeat tracking},\n  year = {2014},\n  pages = {611-615},\n  abstract = {In this paper, we propose a method of extracting rhythmic patterns from audio recordings to be used for training a probabilistic model for beat and downbeat extraction. The method comprises two stages: clustering and refinement. It is able to take advantage of any available annotations that are related to the metrical structure (e.g., beats, tempo, downbeats, dance style). Our evaluation on the Ballroom dataset showed that our unsupervised method achieves results comparable to those of a supervised model. On another dataset, the proposed method performs as well as one of two reference systems in the beat tracking task, and achieves better results in downbeat tracking.},\n  keywords = {audio recording;hidden Markov models;learning (artificial intelligence);extracting rhythmic patterns;beat tracking;probabilistic model;Ballroom dataset;refinement;clustering;audio recordings;downbeat tracking;unsupervised learning;Hidden Markov models;Viterbi algorithm;Training;Measurement;Computational modeling;Maximum likelihood decoding;Rhythm;Hidden Markov model;Viterbi training;beat tracking;downbeat tracking;clustering},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925187.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a method of extracting rhythmic patterns from audio recordings to be used for training a probabilistic model for beat and downbeat extraction. The method comprises two stages: clustering and refinement. It is able to take advantage of any available annotations that are related to the metrical structure (e.g., beats, tempo, downbeats, dance style). Our evaluation on the Ballroom dataset showed that our unsupervised method achieves results comparable to those of a supervised model. On another dataset, the proposed method performs as well as one of two reference systems in the beat tracking task, and achieves better results in downbeat tracking.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Speech-music discrimination: A deep learning perspective.\n \n \n \n \n\n\n \n Pikrakis, A.; and Theodoridis, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 616-620, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Speech-musicPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952182,\n  author = {A. Pikrakis and S. Theodoridis},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Speech-music discrimination: A deep learning perspective},\n  year = {2014},\n  pages = {616-620},\n  abstract = {This paper is a study of the problem of speech-music discrimination from a deep learning perspective. We experiment with two feature extraction schemes and investigate how network depth and RBM size affect the classification performance on publicly available datasets and on large amounts of audio data from video-sharing sites, without placing restrictions on the recording conditions. The main building block of our deep networks is the Restricted Boltzmann Machine (RBM) with binary, stochastic units. The stack of RBMs is pre-trained in a layer-wise mode and, subsequently, a fine-tuning stage trains the deep network as a whole with back-propagation. The proposed approach indicates that deep architectures can serve as strong classifiers for the broad binary problem of speech vs music, with satisfactory generalization performance.},\n  keywords = {audio signal processing;backpropagation;Boltzmann machines;feature extraction;speech processing;speech-music discrimination;deep learning perspective;feature extraction;restricted Boltzmann machine;RBM;audio data;backpropagation;Speech;Training;Feature extraction;Vectors;Computer architecture;Associative memory;Testing;Deep learning;Speech-Music Discrimination},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925289.pdf},\n}\n\n
\n
\n\n\n
\n This paper is a study of the problem of speech-music discrimination from a deep learning perspective. We experiment with two feature extraction schemes and investigate how network depth and RBM size affect the classification performance on publicly available datasets and on large amounts of audio data from video-sharing sites, without placing restrictions on the recording conditions. The main building block of our deep networks is the Restricted Boltzmann Machine (RBM) with binary, stochastic units. The stack of RBMs is pre-trained in a layer-wise mode and, subsequently, a fine-tuning stage trains the deep network as a whole with back-propagation. The proposed approach indicates that deep architectures can serve as strong classifiers for the broad binary problem of speech vs music, with satisfactory generalization performance.\n
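The building block mentioned above, a binary RBM trained layer-wise, is typically fitted with contrastive divergence. A minimal CD-1 step is sketched below under standard textbook conventions (binary stochastic units, a single training vector); the learning rate and initialization choices are assumptions, not the paper's settings.

```python
import numpy as np

def cd1_step(v0, W, b, c, lr=0.01, rng=np.random.default_rng(0)):
    """One contrastive-divergence (CD-1) update for a binary RBM with
    visible bias b, hidden bias c: sample h ~ p(h|v0), reconstruct
    v1 ~ p(v|h), and move the parameters toward the data statistics
    and away from the reconstruction's."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    ph0 = sigmoid(v0 @ W + c)                  # p(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + b)                # reconstruction p(v=1 | h0)
    ph1 = sigmoid(pv1 @ W + c)
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b += lr * (v0 - pv1)
    c += lr * (ph0 - ph1)
    return W, b, c
```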
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Blind audio source separation of stereo mixtures using Bayesian Non-negative Matrix Factorization.\n \n \n \n \n\n\n \n Mirzaei, S.; Van Hamme, H.; and Norouzi, Y.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 621-625, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"BlindPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952183,\n  author = {S. Mirzaei and H. {Van Hamme} and Y. Norouzi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Blind audio source separation of stereo mixtures using Bayesian Non-negative Matrix Factorization},\n  year = {2014},\n  pages = {621-625},\n  abstract = {In this paper, a novel approach is proposed for estimating the number of sources and for source separation in convolutive audio stereo mixtures. First, an angular spectrum-based method is applied to count and locate the sources. A nonlinear GCC-PHAT metric is exploited for this purpose. The estimated channel coefficients are then utilized to obtain a primary estimate of the source spectrograms through binary masking. Afterwards, the individual spectrograms are decomposed using a Bayesian NMF approach. This way, the number of components required for modeling each source is inferred based on data. These factors are then utilized as initial values for the EM algorithm which maximizes the joint likelihood of the 2-channel data to extract the individual source signals. It is shown that this initialization scheme can greatly improve the performance of the source separation over random initialization. The experiments are performed on synthetic mixtures of speech and music signals.},\n  keywords = {audio signal processing;blind source separation;matrix decomposition;blind audio source separation;Bayesian nonnegative matrix factorization;convolutive audio stereo mixtures;angular spectrum-based method;nonlinear GCC-PHAT metric;channel coefficients;source spectrograms;binary masking;Bayesian NMF;speech signals;music signals;Source separation;Speech;Bayes methods;Spectrogram;Channel estimation;Multiple signal classification;Blind Source Separation (BSS);Bayesian Non-negative Matrix Factorization(NMF);Marginal Maximum Likelihood (MML);Expectation-Maximization (EM)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926657.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a novel approach is proposed for estimating the number of sources and for source separation in convolutive audio stereo mixtures. First, an angular spectrum-based method is applied to count and locate the sources. A nonlinear GCC-PHAT metric is exploited for this purpose. The estimated channel coefficients are then utilized to obtain a primary estimate of the source spectrograms through binary masking. Afterwards, the individual spectrograms are decomposed using a Bayesian NMF approach. This way, the number of components required for modeling each source is inferred based on data. These factors are then utilized as initial values for the EM algorithm which maximizes the joint likelihood of the 2-channel data to extract the individual source signals. It is shown that this initialization scheme can greatly improve the performance of the source separation over random initialization. The experiments are performed on synthetic mixtures of speech and music signals.\n
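Plain GCC-PHAT, the starting point of the angular-spectrum stage, whitens the cross-power spectrum to unit magnitude so that only phase (i.e., delay) information remains. The sketch below estimates a single time difference of arrival between two channels; the nonlinear metric variant used in the paper is not reproduced.

```python
import numpy as np

def gcc_phat_delay(x1, x2, fs, max_tau=None):
    """GCC-PHAT time-delay estimate between two channels: whiten the
    cross-spectrum and pick the lag of the correlation peak (sign
    convention: delay of x1 relative to x2, implementation-dependent)."""
    n = 2 * max(len(x1), len(x2))              # even FFT length
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    S = X1 * np.conj(X2)
    cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n)
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))   # center zero lag
    lags = np.arange(-n // 2, n // 2) / fs
    if max_tau is not None:                    # restrict to physical lags
        keep = np.abs(lags) <= max_tau
        cc, lags = cc[keep], lags[keep]
    return lags[np.argmax(np.abs(cc))]
```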
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Controlling the convergence rate to help parameter estimation in a PLCA-based model.\n \n \n \n \n\n\n \n Fuentes, B.; Badeau, R.; and Richard, G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 626-630, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ControllingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952184,\n  author = {B. Fuentes and R. Badeau and G. Richard},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Controlling the convergence rate to help parameter estimation in a PLCA-based model},\n  year = {2014},\n  pages = {626-630},\n  abstract = {Probabilistic Latent Component Analysis (PLCA) is a tool similar to Non-negative Matrix Factorization (NMF), which is used to model non-negative data such as non-negative time-frequency representations of audio. In this paper, we put forward a trick to help the corresponding parameter estimation algorithm to converge toward more meaningful solutions, based on the new concept of brakes. The idea is to control the convergence rate of the parameters of a PLCA-based model within the estimation algorithm: the parameters which are known to be properly initialized are braked in order to stay close to their initial values, whereas the other ones keep a regular convergence rate. This is an effective way to better account for a relevant initialization. In this paper, these brakes are implemented in the framework of PLCA, and they are tested in an application of multipitch estimation. Results show that the use of brakes can significantly influence the decomposition and thus the performance, making them a powerful tool to boost any kind of PLCA-based algorithm.},\n  keywords = {audio signal processing;matrix decomposition;probability;convergence rate controlling;PLCA-based model;probabilistic latent component analysis;nonnegative matrix factorization;NMF;nonnegative data model;audio nonnegative time-frequency representation;parameter estimation algorithm;estimation algorithm;regular convergence rate;multipitch estimation;Abstracts;Estimation;Shape;PLCA;NMF;EM algorithm;multipitch estimation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924561.pdf},\n}\n\n
\n
\n\n\n
\n Probabilistic Latent Component Analysis (PLCA) is a tool similar to Non-negative Matrix Factorization (NMF), which is used to model non-negative data such as non-negative time-frequency representations of audio. In this paper, we put forward a trick to help the corresponding parameter estimation algorithm to converge toward more meaningful solutions, based on the new concept of brakes. The idea is to control the convergence rate of the parameters of a PLCA-based model within the estimation algorithm: the parameters which are known to be properly initialized are braked in order to stay close to their initial values, whereas the other ones keep a regular convergence rate. This is an effective way to better account for a relevant initialization. In this paper, these brakes are implemented in the framework of PLCA, and they are tested in an application of multipitch estimation. Results show that the use of brakes can significantly influence the decomposition and thus the performance, making them a powerful tool to boost any kind of PLCA-based algorithm.\n
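One plausible reading of the brakes is a damped EM update: a well-initialized parameter is pulled only partially toward its new EM estimate, while un-braked parameters take the full step. The sketch below interpolates linearly and renormalizes; this is an assumption about the update rule for illustration, not the paper's exact braking scheme.

```python
import numpy as np

def braked_update(theta_old, theta_em, beta):
    """Damped ('braked') EM update for a distribution-valued parameter:
    beta close to 1 nearly freezes a well-initialized parameter,
    beta = 0 recovers the plain EM step."""
    theta = beta * np.asarray(theta_old) + (1.0 - beta) * np.asarray(theta_em)
    return theta / theta.sum()   # keep it a valid probability distribution
```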
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Exploring superframe co-occurrence for acoustic event recognition.\n \n \n \n \n\n\n \n Phan, H.; and Mertins, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 631-635, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ExploringPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952185,\n  author = {H. Phan and A. Mertins},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Exploring superframe co-occurrence for acoustic event recognition},\n  year = {2014},\n  pages = {631-635},\n  abstract = {We introduce in this paper a concept of using acoustic superframes, a mid-level representation which can overcome the drawbacks of both global and simple frame-level representations for acoustic events. Through superframe-level recognition, we explore the phenomenon of superframe co-occurrence across different event categories and propose an efficient classification scheme that takes advantage of this feature sharing to improve the event-wise recognition power. We empirically show that our recognition system results in 2.7% classification error rate on the ITC-Irst database. This state-of-the-art performance demonstrates the efficiency of this proposed approach. Furthermore, we argue that this presentation can pretty much facilitate the event detection task compared to its counterparts, e.g. global and simple frame-level representations.},\n  keywords = {acoustic signal detection;acoustic signal processing;signal classification;signal representation;superframe cooccurrence;acoustic event recognition;midlevel representation;global frame-level representations;simple frame-level representations;superframe-level recognition;classification scheme;event-wise recognition power improvement;feature sharing;classification error rate;ITC-Irst database;event detection task;Acoustics;Histograms;Kernel;Event detection;Testing;Vectors;Databases;Acoustic event recognition;superframe;histogram;co-occurrence},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569911345.pdf},\n}\n\n
\n
\n\n\n
\n We introduce in this paper the concept of acoustic superframes, a mid-level representation which can overcome the drawbacks of both global and simple frame-level representations for acoustic events. Through superframe-level recognition, we explore the phenomenon of superframe co-occurrence across different event categories and propose an efficient classification scheme that takes advantage of this feature sharing to improve the event-wise recognition power. We show empirically that our recognition system yields a 2.7% classification error rate on the ITC-Irst database. This state-of-the-art performance demonstrates the efficiency of the proposed approach. Furthermore, we argue that this representation can greatly facilitate the event detection task compared to its counterparts, e.g. global and simple frame-level representations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n ILD preservation in the multichannel wiener filter for binaural hearing aid applications.\n \n \n \n \n\n\n \n Costa, M. H.; and Naylor, P. A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 636-640, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ILDPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952186,\n  author = {M. H. Costa and P. A. Naylor},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {ILD preservation in the multichannel wiener filter for binaural hearing aid applications},\n  year = {2014},\n  pages = {636-640},\n  abstract = {This work presents a new method for noise reduction in binaural hearing aid applications that preserves the interaural level difference. A bounded symmetrical approximation of the logarithm is employed to estimate the interaural level difference, resulting in identical values for symmetrical (left/right) frontal angles. It proposes a new cost function to be used in association with the multichannel Wiener filter technique to provide a trade-off between noise reduction and distortion of the localization cues. Simulations of a binaural setup and comparisons with a previously developed technique show that the new method gives a signal to noise ratio improvement of up to 9.6 dB better than the baseline technique, for the same maximum-tolerated binaural-cue distortion.},\n  keywords = {approximation theory;distortion;hearing aids;Wiener filters;maximum-tolerated binaural-cue distortion;localization cue distortion;symmetrical frontal angles;bounded symmetrical approximation;interaural level difference;noise reduction;multichannel Wiener filter;ILD preservation;binaural hearing aid applications;Approximation methods;Speech;Noise;Cost function;Wiener filters;Hearing aids;Hearing-aids;noise reduction;binaural;wiener filter;speech processing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569912187.pdf},\n}\n\n
\n
\n\n\n
\n This work presents a new method for noise reduction in binaural hearing aid applications that preserves the interaural level difference. A bounded symmetrical approximation of the logarithm is employed to estimate the interaural level difference, resulting in identical values for symmetrical (left/right) frontal angles. A new cost function is proposed to be used with the multichannel Wiener filter technique, providing a trade-off between noise reduction and distortion of the localization cues. Simulations of a binaural setup and comparisons with a previously developed technique show that the new method improves the signal-to-noise ratio by up to 9.6 dB over the baseline technique, for the same maximum tolerated binaural-cue distortion.\n
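The baseline this method builds on is the per-frequency multichannel Wiener filter, shown below in its speech-distortion-weighted form with the speech covariance estimated by subtraction. The sketch deliberately omits the ILD-preserving cost term that is the paper's actual contribution.

```python
import numpy as np

def mwf_weights(Ry, Rn, ref=0, mu=1.0):
    """Per-frequency speech-distortion-weighted multichannel Wiener
    filter for one reference microphone: w = (Rs + mu*Rn)^{-1} Rs e_ref,
    with the speech covariance estimated as Rs = Ry - Rn (an estimate,
    not guaranteed positive semidefinite in practice)."""
    Rs = Ry - Rn
    e = np.zeros(Ry.shape[0]); e[ref] = 1.0
    return np.linalg.solve(Rs + mu * Rn, Rs @ e)
```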
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Merging extremum seeking and self-optimizing narrowband interference canceller - overdetermined case.\n \n \n \n \n\n\n \n Meller, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 641-645, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"MergingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952187,\n  author = {M. Meller},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Merging extremum seeking and self-optimizing narrowband interference canceller - overdetermined case},\n  year = {2014},\n  pages = {641-645},\n  abstract = {Active cancellation systems rely on destructive interference to achieve rejection of unwanted disturbances entering the system of interest. Typical practical applications of this method employ a simple single input, single output arrangement. However, when a spatial wavefield (e.g. acoustic noise or vibration) needs to be controlled, multichannel active cancellation systems arise naturally. Among these, the so-called overdetermined control configuration, which employs more measurement outputs than control inputs, is often found to provide superior performance. The paper proposes an extension of the recently introduced control scheme, called self-optimizing narrowband interference canceller (SONIC), to the overdetermined case. The extension employs a novel variant of the extremum-seeking adaptation loop which uses random, rather than sinusoidal, probing signals. This modification simplifies design of the controller and improves its convergence. Simulations, performed using a realistic model of the plant, demonstrate improved properties of the new controller.},\n  keywords = {interference suppression;extremum seeking adaptation loop merging;self-optimizing narrowband interference canceller;destructive interference;unwanted disturbance rejection;single input single output arrangement;spatial wavefield;multichannel active cancellation systems;overdetermined control configuration;measurement outputs;control inputs;SONIC;sinusoidal probing signals;Narrowband;Interference;Noise;Cost function;Robustness;Control systems;Vectors;extremum seeking;disturbance rejection;adaptive control;active noise control},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917515.pdf},\n}\n\n
\n
\n\n\n
\n Active cancellation systems rely on destructive interference to achieve rejection of unwanted disturbances entering the system of interest. Typical practical applications of this method employ a simple single input, single output arrangement. However, when a spatial wavefield (e.g. acoustic noise or vibration) needs to be controlled, multichannel active cancellation systems arise naturally. Among these, the so-called overdetermined control configuration, which employs more measurement outputs than control inputs, is often found to provide superior performance. The paper proposes an extension of the recently introduced control scheme, called self-optimizing narrowband interference canceller (SONIC), to the overdetermined case. The extension employs a novel variant of the extremum-seeking adaptation loop which uses random, rather than sinusoidal, probing signals. This modification simplifies design of the controller and improves its convergence. Simulations, performed using a realistic model of the plant, demonstrate improved properties of the new controller.\n
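Random-probing extremum seeking is closely related to simultaneous-perturbation stochastic approximation (SPSA): all controller parameters are perturbed at once with a random ±δ probe, and a gradient estimate is formed from two cost evaluations. The sketch below shows one such descent step; treating the paper's adaptation loop as SPSA-like is an interpretation, not its literal form.

```python
import numpy as np

def spsa_step(theta, cost, gain=0.1, delta=0.01, rng=np.random.default_rng(0)):
    """One random-probing extremum-seeking step: perturb all parameters
    with a random +/-delta probe, estimate the gradient from two cost
    evaluations, and take a descent step."""
    probe = delta * rng.choice([-1.0, 1.0], size=theta.shape)
    grad = (cost(theta + probe) - cost(theta - probe)) / (2.0 * probe)
    return theta - gain * grad
```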
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A psychoacoustic model with Partial Spectral Flatness Measure for tonality estimation.\n \n \n \n \n\n\n \n Taghipour, A.; Jaikumar, M. C.; and Edler, B.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 646-650, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952188,\n  author = {A. Taghipour and M. C. Jaikumar and B. Edler},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A psychoacoustic model with Partial Spectral Flatness Measure for tonality estimation},\n  year = {2014},\n  pages = {646-650},\n  abstract = {Psychoacoustic studies show that the strength of masking is, among others, dependent on the tonality of the masker: the effect of noise maskers is stronger than that of tone maskers. Recently, a Partial Spectral Flatness Measure (PSFM) was introduced for tonality estimation in a psychoacoustic model for perceptual audio coding. The model consists of an Infinite Impulse Response (IIR) filterbank which considers the spreading effect of individual local maskers in simultaneous masking. An optimized (with respect to audio quality and computational efficiency) PSFM is now compared to a similar psychoacoustic model with prediction based tonality estimation in medium (48 kbit/s) and low (32 kbit/s) bit rate conditions (mono) via subjective quality tests. 15 expert listeners participated in the subjective tests. The results are depicted and discussed. Additionally, we conducted the subjective tests with 15 non-expert consumers whose results are also shown and compared to those of the experts.},\n  keywords = {acoustic signal processing;audio coding;channel bank filters;estimation theory;hearing;IIR filters;IIR filterbank;infinite impulse response filterbank;perceptual audio coding;tone maskers;noise maskers;masking strength;tonality estimation;PSFM;partial spectral flatness measure;psychoacoustic model;bit rate 48 kbit/s;bit rate 32 kbit/s;Abstracts;Phase change materials;Transforms;Noise;Silicon;Perceptual Model;Psychoacoustic Model;Perceptual Audio Coding;Spectral Flatness;Tonality Estimation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569918015.pdf},\n}\n\n
\n
\n\n\n
\n Psychoacoustic studies show that the strength of masking depends, among other factors, on the tonality of the masker: the effect of noise maskers is stronger than that of tone maskers. Recently, a Partial Spectral Flatness Measure (PSFM) was introduced for tonality estimation in a psychoacoustic model for perceptual audio coding. The model consists of an Infinite Impulse Response (IIR) filterbank which considers the spreading effect of individual local maskers in simultaneous masking. An optimized (with respect to audio quality and computational efficiency) PSFM is now compared to a similar psychoacoustic model with prediction-based tonality estimation in medium (48 kbit/s) and low (32 kbit/s) bit rate conditions (mono) via subjective quality tests. Fifteen expert listeners participated in the subjective tests. The results are depicted and discussed. Additionally, we conducted the subjective tests with 15 non-expert consumers, whose results are also shown and compared to those of the experts.\n
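A spectral flatness measure restricted to a sub-band can be sketched as below; a plain FFT stands in for the paper's IIR filterbank, so this is an illustrative approximation rather than the PSFM as defined by the authors.

```python
import numpy as np

def partial_sfm(x, fs, band, n_fft=1024):
    # Spectral flatness over a sub-band: ratio of geometric to arithmetic
    # mean of the power spectrum. Near 1 -> noise-like, near 0 -> tonal.
    x = x[:n_fft] * np.hanning(n_fft)
    pxx = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(n_fft, 1.0 / fs)
    p = pxx[(f >= band[0]) & (f <= band[1])] + 1e-12
    return np.exp(np.mean(np.log(p))) / np.mean(p)

fs = 16000
t = np.arange(fs // 10) / fs
print(partial_sfm(np.sin(2 * np.pi * 1000 * t), fs, (500.0, 1500.0)))  # ~0, tonal
print(partial_sfm(np.random.default_rng(1).standard_normal(len(t)),
                  fs, (500.0, 1500.0)))                                 # ~1, noise-like
```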
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Novel decorrelation approach for an advanced multichannel acoustic echo cancellation system.\n \n \n \n\n\n \n Romoli, L.; Cecchi, S.; Comminiello, D.; Piazza, F.; and Uncini, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 651-655, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952189,\n  author = {L. Romoli and S. Cecchi and D. Comminiello and F. Piazza and A. Uncini},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Novel decorrelation approach for an advanced multichannel acoustic echo cancellation system},\n  year = {2014},\n  pages = {651-655},\n  abstract = {A multichannel sound reproduction system aims at offering an immersive experience exploiting multiple microphones and loudspeakers. In the case of multichannel acoustic echo cancellation, a suitable solutions for overcoming the well-known non-uniqueness problem and an appropriate choice of the adaptive algorithm become essential to improve the audio reproduction quality. In this paper, an advanced system is proposed based on the introduction of a multichannel decorrelation solution exploiting the missing-fundamental phenomenon and a combined multiple-input multiple-output architecture updated by using the multichannel affine projection algorithm. Experimental results proved the effectiveness of the presented framework in terms of objective and subjective measures, providing a suitable solution for echo cancellation.},\n  keywords = {echo;echo suppression;MIMO communication;decorrelation approach;advanced multichannel acoustic echo cancellation system;multichannel sound reproduction system;microphones;loudspeakers;multichannel acoustic echo cancellation;adaptive algorithm;advanced system;multichannel decorrelation solution;missing-fundamental phenomenon;multiple-input multiple-output architecture;multichannel affine projection algorithm;echo cancellation;Decorrelation;MIMO;Echo cancellers;Microphones;Loudspeakers;Speech;Multichannel Acoustic Echo Cancellation;Channel Decorrelation;Adaptive Combination of Filters},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n A multichannel sound reproduction system aims at offering an immersive experience exploiting multiple microphones and loudspeakers. In the case of multichannel acoustic echo cancellation, a suitable solution for overcoming the well-known non-uniqueness problem and an appropriate choice of the adaptive algorithm become essential to improve the audio reproduction quality. In this paper, an advanced system is proposed based on the introduction of a multichannel decorrelation solution exploiting the missing-fundamental phenomenon and a combined multiple-input multiple-output architecture updated by using the multichannel affine projection algorithm. Experimental results prove the effectiveness of the presented framework in terms of objective and subjective measures, providing a suitable solution for echo cancellation.\n
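For reference, the affine projection update mentioned above reduces, per channel, to the standard affine projection algorithm (APA); a minimal single-channel NumPy sketch, not the authors' multichannel implementation:

```python
import numpy as np

def apa(x, d, n_taps=8, K=4, mu=0.5, delta=1e-3):
    # Affine projection adaptive filter: reuse the K most recent
    # regression vectors (columns of U) in each regularized update.
    w = np.zeros(n_taps)
    for n in range(n_taps + K - 1, len(x)):
        U = np.column_stack([x[n - k - n_taps + 1:n - k + 1][::-1] for k in range(K)])
        e = d[n - K + 1:n + 1][::-1] - U.T @ w          # a-priori error vector
        w += mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(K), e)
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h_true = np.array([1.0, -0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(apa(x, d), 2))    # converges toward h_true
```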
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A novel method for selecting the number of clusters in a speaker diarization system.\n \n \n \n \n\n\n \n Lopez-Otero, P.; Docio-Fernandez, L.; and Garcia-Mateo, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 656-660, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952190,\n  author = {P. Lopez-Otero and L. Docio-Fernandez and C. Garcia-Mateo},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A novel method for selecting the number of clusters in a speaker diarization system},\n  year = {2014},\n  pages = {656-660},\n  abstract = {This paper introduces the cluster score (C-score) as a measure for determining a suitable number of clusters when performing speaker clustering in a speaker diarization system. C-score finds a trade-off between intra-cluster and extra-cluster similarities, selecting a number of clusters with cluster elements that are similar between them but different to the elements in other clusters. Speech utterances are represented by Gaussian mixture model mean supervectors, and also the projection of the supervectors into a low-dimensional discriminative subspace by linear discriminant analysis is assessed. This technique shows robustness to segmentation errors and, compared with the widely used Bayesian information criterion (BIC)-based stopping criterion, results in a lower speaker clustering error and dramatically reduces computation time. Experiments were run using the broadcast news database used for the Albayzin 2010 Speaker Diarization Evaluation.},\n  keywords = {Gaussian processes;speaker recognition;statistical analysis;speaker diarization system;cluster score;speaker clustering;C-score;Gaussian mixture model mean supervectors;low-dimensional discriminative subspace;linear discriminant analysis;speech utterances;inter-cluster similarity;intra-cluster similarity;Speech;Clustering algorithms;Robustness;Databases;Vectors;Speech processing;Feature extraction;Speaker Clustering;Cluster Similarity;Linear Discriminant Analysis},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569919351.pdf},\n}\n\n
\n
\n\n\n
\n This paper introduces the cluster score (C-score) as a measure for determining a suitable number of clusters when performing speaker clustering in a speaker diarization system. The C-score finds a trade-off between intra-cluster and extra-cluster similarities, selecting a number of clusters whose elements are similar to one another but different from the elements in other clusters. Speech utterances are represented by Gaussian mixture model mean supervectors, and the projection of the supervectors onto a low-dimensional discriminative subspace by linear discriminant analysis is also assessed. This technique shows robustness to segmentation errors and, compared with the widely used Bayesian information criterion (BIC)-based stopping criterion, results in a lower speaker clustering error and dramatically reduces computation time. Experiments were run using the broadcast news database used for the Albayzin 2010 Speaker Diarization Evaluation.\n
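The exact C-score formula is given in the paper; as a loose, hypothetical illustration of trading intra-cluster against extra-cluster similarity, one might compute something like the following (our assumption, not the authors' definition):

```python
import numpy as np

def c_score_like(X, labels):
    # Hypothetical score in the spirit of the C-score: mean intra-cluster
    # cosine similarity minus mean extra-cluster cosine similarity of
    # supervector-like embeddings.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = X @ X.T                                   # cosine similarity matrix
    same = labels[:, None] == labels[None, :]
    off = ~np.eye(len(X), dtype=bool)
    return S[same & off].mean() - S[~same].mean()

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (20, 16)) for m in (0.0, 1.0, -1.0)])
labels = np.repeat([0, 1, 2], 20)
print(c_score_like(X, labels))    # large for a well-separated clustering
```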
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Monitoring sleep with 40-Hz ASSR.\n \n \n \n \n\n\n \n Haghigih, S. J.; and Hatzinakos, D.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 661-665, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"MonitoringPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952191,\n  author = {S. J. Haghigih and D. Hatzinakos},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Monitoring sleep with 40-Hz ASSR},\n  year = {2014},\n  pages = {661-665},\n  abstract = {The 40-Hz auditory steady state response (ASSR) signals recorded from human subjects during sleep and wakefulness are investigated in this study for the purpose of monitoring sleep. The ASSR signals extracted from stimulated electro encephalogram (EEG), explored in search for differentiating and robust to noise features. Choosing appropriate features in time and frequency domain, the performance of linear and quadratic discriminant analysis in classifying signals in different scenarios are studied. While the developed method itself is novel in sleep monitoring, due to similarities between N3 stage of sleep and anesthesia, the method will pave the way for later analysis on monitoring consciousness with 40-Hz ASSR. The 40-Hz ASSR extraction and noise cancellation methods presented in this paper can also be used for extracting 40-Hz ASSR from its background EEG signal in general.},\n  keywords = {electroencephalography;frequency-domain analysis;medical signal processing;signal classification;time-domain analysis;auditory steady state response signals;ASSR signal extraction;stimulated electro encephalogram;EEG;frequency domain;time domain;quadratic discriminant analysis;linear discriminant analysis;sleep monitoring;signal classification;noise cancellation methods;Sleep;Electroencephalography;Error analysis;Feature extraction;Steady-state;Anesthesia;Electrodes},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921695.pdf},\n}\n\n
\n
\n\n\n
\n The 40-Hz auditory steady state response (ASSR) signals recorded from human subjects during sleep and wakefulness are investigated in this study for the purpose of monitoring sleep. The ASSR signals, extracted from the stimulated electroencephalogram (EEG), are explored in search of features that are discriminative and robust to noise. Choosing appropriate features in the time and frequency domains, the performance of linear and quadratic discriminant analysis in classifying signals in different scenarios is studied. While the developed method itself is novel in sleep monitoring, due to similarities between the N3 stage of sleep and anesthesia, the method will pave the way for later analysis on monitoring consciousness with the 40-Hz ASSR. The 40-Hz ASSR extraction and noise cancellation methods presented in this paper can also be used for extracting the 40-Hz ASSR from its background EEG signal in general.\n
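Linear and quadratic discriminant analysis classifiers of the kind evaluated here are readily sketched with scikit-learn; the features below are random stand-ins, not ASSR data.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# rows = trials, columns = assumed time/frequency features of the ASSR
X = np.vstack([rng.normal(0.0, 1.0, (40, 6)),    # "wake" trials
               rng.normal(0.7, 1.0, (40, 6))])   # "sleep" trials
y = np.repeat([0, 1], 40)

for clf in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```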
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A broadband beamformer using controllable constraints and minimum variance.\n \n \n \n \n\n\n \n Karimian-Azari, S.; Benesty, J.; Jensen, J. R.; and Christensen, M. G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 666-670, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952192,\n  author = {S. Karimian-Azari and J. Benesty and J. R. Jensen and M. G. Christensen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A broadband beamformer using controllable constraints and minimum variance},\n  year = {2014},\n  pages = {666-670},\n  abstract = {The minimum variance distortionless response (MVDR) and the linearly constrained minimum variance (LCMV) beamformers are two optimal approaches in the sense of noise reduction. The LCMV beamformer can also reject interferers using linear constraints at the expense of reducing the degree of freedom in a limited number of microphones. However, it may magnify noise that causes a lower output signal-to-noise ratio (SNR) than the MVDR beamformer. Contrarily, the MVDR beamformer suffers from interference in output. In this paper, we propose a controllable LCMV (C-LCMV) beamformer based on the principles of both the MVDR and LCMV beamformers. The C-LCMV approach can control a compromise between noise reduction and interference rejection. Simulation results show that the C-LCMV beamformer outperforms the MVDR beamformer in interference rejection, and the LCMV beamformer in background noise reduction.},\n  keywords = {array signal processing;microphone arrays;speech processing;broadband beamformer;controllable constraints;minimum variance distortionless response;linearly constrained minimum variance;LCMV beamformers;noise reduction;linear constraints;microphones;signal-to-noise ratio;SNR;MVDR beamformer;interference rejection;speech processing applications;microphone arrays;Interference;Microphones;Signal to noise ratio;Speech;Array signal processing;Noise measurement;Microphone arrays;frequency-domain beamforming;MVDR;LCMV;controllable beamformer},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924165.pdf},\n}\n\n
\n
\n\n\n
\n The minimum variance distortionless response (MVDR) and the linearly constrained minimum variance (LCMV) beamformers are two optimal approaches in the sense of noise reduction. The LCMV beamformer can also reject interferers using linear constraints, at the expense of reducing the degrees of freedom available with a limited number of microphones. However, it may magnify noise, causing a lower output signal-to-noise ratio (SNR) than the MVDR beamformer. Conversely, the MVDR beamformer suffers from residual interference at its output. In this paper, we propose a controllable LCMV (C-LCMV) beamformer based on the principles of both the MVDR and LCMV beamformers. The C-LCMV approach can control a compromise between noise reduction and interference rejection. Simulation results show that the C-LCMV beamformer outperforms the MVDR beamformer in interference rejection, and the LCMV beamformer in background noise reduction.\n
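The two classical building blocks combined by the C-LCMV are the textbook MVDR and LCMV weight formulas, sketched below in NumPy (standard forms, not the authors' implementation):

```python
import numpy as np

def mvdr_weights(R, d):
    # w = R^{-1} d / (d^H R^{-1} d): minimum output power, distortionless
    # toward the steering vector d.
    Ri_d = np.linalg.solve(R, d)
    return Ri_d / (d.conj() @ Ri_d)

def lcmv_weights(R, C, f):
    # w = R^{-1} C (C^H R^{-1} C)^{-1} f: one linear constraint per column
    # of C, e.g. unit gain on the target and a null on an interferer.
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

M = 6
d_tgt = np.exp(1j * np.pi * np.arange(M) * 0.3)      # target steering vector
d_int = np.exp(1j * np.pi * np.arange(M) * -0.5)     # interferer steering vector
R = 10 * np.outer(d_int, d_int.conj()) + np.eye(M)   # interference + noise covariance
w = lcmv_weights(R, np.column_stack([d_tgt, d_int]), np.array([1.0, 0.0]))
print(abs(w.conj() @ d_tgt), abs(w.conj() @ d_int))  # ~1 on target, ~0 on interferer
```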
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive waveforms for flow velocity estimation using acoustic signals.\n \n \n \n \n\n\n \n Candel, I.; Digulescu, A.; Ioana, C.; and Vasile, G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 671-675, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AdaptivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952213,\n  author = {I. Candel and A. Digulescu and C. Ioana and G. Vasile},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive waveforms for flow velocity estimation using acoustic signals},\n  year = {2014},\n  pages = {671-675},\n  abstract = {In this paper, we introduce a general framework for waveform design and signal processing, dedicated to the study of turbulent flow phenomena. In a bi-static configuration, by transmitting a specific waveform with a predefined instantaneous frequency law (IFL), within the bounds of the Kolmogorov spectrum, the turbulent media will modify the IFL at the receiving side. We propose a new methodology to estimate this change and to exploit it for velocity estimation using acoustic signals. In this way, the amplitude based velocity estimation techniques can be substituted by non-stationary time - frequency signal processing. This technique proves to be more robust in terms of interferences and can provide a more detailed representation of any turbulent environment.},\n  keywords = {acoustic signal processing;time-frequency analysis;turbulence;adaptive waveform design;acoustic signal processing;bi-static configuration;predefined instantaneous frequency law;IFL;Kolmogorov spectrum;amplitude based velocity estimation techniques;nonstationary time-frequency signal processing;Acoustics;Time-frequency analysis;Estimation;Acoustic transducers;Shape;Frequency modulation;Adaptive waveforms;turbulence;Kolmogorov spectrum;instantaneous frequency law;wide band signals},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924641.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we introduce a general framework for waveform design and signal processing dedicated to the study of turbulent flow phenomena. In a bi-static configuration, by transmitting a specific waveform with a predefined instantaneous frequency law (IFL) within the bounds of the Kolmogorov spectrum, the turbulent medium will modify the IFL at the receiving side. We propose a new methodology to estimate this change and to exploit it for velocity estimation using acoustic signals. In this way, amplitude-based velocity estimation techniques can be replaced by non-stationary time-frequency signal processing. This technique proves to be more robust to interference and can provide a more detailed representation of any turbulent environment.\n
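Estimating an instantaneous frequency law from a received signal can be sketched via the analytic signal; a minimal example with an assumed linear chirp:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    # IFL estimate as the derivative of the unwrapped analytic phase;
    # changes between transmitted and received IFL carry the flow
    # information in the scheme described above.
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 8000
t = np.arange(0, 0.1, 1 / fs)
chirp = np.cos(2 * np.pi * (500 * t + 1000 * t ** 2))   # IFL: 500 + 2000 t Hz
print(instantaneous_frequency(chirp, fs)[100:105])      # ~525 Hz around t = 12.5 ms
```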
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Algorithms and evaluation on blind estimation of reverberation time.\n \n \n \n \n\n\n \n Adrian, J.; and Bitzer, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 676-680, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AlgorithmsPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952214,\n  author = {J. Adrian and J. Bitzer},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Algorithms and evaluation on blind estimation of reverberation time},\n  year = {2014},\n  pages = {676-680},\n  abstract = {In this contribution, we propose an algorithm to analyze early and late reverberation in monaural recordings in an offline processing framework with emphasis on live recordings. This algorithm is evaluated against known state-of-the-art solutions. Our baseline method uses cepstral mean along signal blocks to acquire an estimation of the reverberation's impulse response which is analyzed with respect to its decay characteristics. Further improvements are a cepstral lifter to increase the method's performance by removing nonrelevant cepstral coefficients and a polynomial of second order to map the results onto final estimates. Results indicate larger deviations in the estimated decay times of late reverberations, while estimates for the early decay times are within the just noticable difference (JND) and deviate only slightly from the true values. State-of-the-art algorithms show small correlation with the true reverberation times.},\n  keywords = {cepstral analysis;reverberation;signal processing;blind estimation;reverberation time;offline processing framework;live recordings;reverberation impulse response;decay characteristics;cepstral lifter;nonrelevant cepstral coefficients;just noticable difference;Reverberation;Speech;Estimation;Cepstrum;Correlation;Multiple signal classification;Blind estimation;reverberation time;cepstral analysis},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924725.pdf},\n}\n\n
\n
\n\n\n
\n In this contribution, we propose an algorithm to analyze early and late reverberation in monaural recordings in an offline processing framework, with emphasis on live recordings. This algorithm is evaluated against known state-of-the-art solutions. Our baseline method uses the cepstral mean along signal blocks to obtain an estimate of the reverberation's impulse response, which is analyzed with respect to its decay characteristics. Further improvements are a cepstral lifter, which increases the method's performance by removing non-relevant cepstral coefficients, and a second-order polynomial to map the results onto final estimates. Results indicate larger deviations in the estimated decay times of late reverberation, while estimates of the early decay times are within the just noticeable difference (JND) and deviate only slightly from the true values. State-of-the-art algorithms show small correlation with the true reverberation times.\n
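The baseline feature, a cepstral mean taken along signal blocks, can be sketched as follows (an illustrative reading of the described method, not the authors' code):

```python
import numpy as np

def block_cepstral_mean(x, block=1024, hop=512):
    # Averaging log-magnitude spectra across blocks retains the common
    # (reverberant channel) component while source variation averages out;
    # the IDFT of the mean gives a cepstral estimate to analyze for decay.
    n = (len(x) - block) // hop + 1
    frames = np.stack([x[i * hop:i * hop + block] * np.hanning(block)
                       for i in range(n)])
    log_mag = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-12)
    return np.fft.irfft(log_mag.mean(axis=0))

rng = np.random.default_rng(0)
print(block_cepstral_mean(rng.standard_normal(16384))[:4])
```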
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A restricted impact noise suppressor in zero phase domain.\n \n \n \n \n\n\n \n Kawamura, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 681-685, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952215,\n  author = {A. Kawamura},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A restricted impact noise suppressor in zero phase domain},\n  year = {2014},\n  pages = {681-685},\n  abstract = {This paper proposes an impact noise suppression method in zero phase (ZP) domain. The signal in ZP domain (ZP signal) is obtained by taking IDFT of the pth power of a spectral amplitude. We previously proposed an impact noise suppressor in ZP domain for reducing impact noise signals, even if they are accompanied with damped oscillation. Unfortunately, the previous method causes speech degradation in non-impact noise segments, because this method performs noise reduction in all the segments. Since an impact noise exists only a short duration, we restrict the noise suppression procedure so that it cannot be applied to the non-impact noise segments. The restriction is achieved by using the ratio of the first to the second peak values of the ZP signal. In non-impact noise segments, this ratio becomes much larger than one. Thus, we can improve speech quality of the extracted signal when the restriction works well. Simulation results show that the proposed method improves about 15dB of SNR for a speech signal mixed with clap noise with SNR= 0dB.},\n  keywords = {discrete Fourier transforms;inverse transforms;signal denoising;speech processing;restricted impact noise suppressor;zero phase domain;ZP signal;IDFT;spectral amplitude;damped oscillation;non-impact noise segments;noise suppression procedure;speech quality improvement;speech signal;clap noise;Speech;Oscillators;Signal to noise ratio;Speech processing;Noise reduction;Indexes;Zero Phase Signal;Speech Enhancement;Noise Suppression;Impact Noise;Damped Oscillation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925127.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes an impact noise suppression method in the zero phase (ZP) domain. The signal in the ZP domain (ZP signal) is obtained by taking the IDFT of the pth power of a spectral amplitude. We previously proposed an impact noise suppressor in the ZP domain for reducing impact noise signals, even when they are accompanied by damped oscillation. Unfortunately, the previous method causes speech degradation in non-impact noise segments, because it performs noise reduction in all segments. Since impact noise exists for only a short duration, we restrict the noise suppression procedure so that it is not applied to the non-impact noise segments. The restriction is achieved by using the ratio of the first to the second peak value of the ZP signal. In non-impact noise segments, this ratio becomes much larger than one. Thus, we can improve the speech quality of the extracted signal when the restriction works well. Simulation results show that the proposed method improves the SNR by about 15 dB for a speech signal mixed with clap noise at an SNR of 0 dB.\n
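The ZP signal and the first-to-second peak ratio used for the restriction follow directly from the definition above; a toy sketch with an assumed peak-picking rule:

```python
import numpy as np

def zero_phase_signal(x, p=1.0):
    # ZP signal: IDFT of |X(k)|^p. Phase is discarded, so the result is
    # real, even, and peaks at lag 0 (p = 2 gives the autocorrelation).
    return np.fft.ifft(np.abs(np.fft.fft(x)) ** p).real

def first_to_second_peak_ratio(z):
    # Compare the lag-0 peak with the largest secondary peak of the
    # one-sided ZP signal (assumed detection rule for illustration).
    half = z[:len(z) // 2]
    return half[0] / (np.max(half[1:]) + 1e-12)

rng = np.random.default_rng(0)
speech_like = rng.standard_normal(512)
impact = np.exp(-np.arange(512) / 20.0) * np.sin(0.9 * np.arange(512))
for sig in (speech_like, impact):
    print(first_to_second_peak_ratio(zero_phase_signal(sig)))  # large vs. near 1
```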
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A multi-channel postfilter based on the diffuse noise sound field.\n \n \n \n \n\n\n \n Pfeifenberger, L.; and Pernkopf, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 686-690, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952216,\n  author = {L. Pfeifenberger and F. Pernkopf},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A multi-channel postfilter based on the diffuse noise sound field},\n  year = {2014},\n  pages = {686-690},\n  abstract = {In this paper, we present a multi-channel Directional-to-Diffuse Postfilter (DD-PF), relying on the assumption of a directional speech signal embedded in diffuse noise. Our postfilter uses the output of a superdirective beamformer like the Generalized Sidelobe Canceller (GSC), which is projected back to the microphone inputs to separate the sound field into its directional and diffuse components. From these components the SNR at the output of the beamformer can be derived without needing a Voice Activity Detector (VAD). The SNR is used to construct a noise cancelling Wiener filter. In our experiments, the developed algorithm outperforms two recent postfilters based on the Transient Beam to Reference Ratio (TBRR) and the Multi-Channel Speech Presence Probability (MCSSP).},\n  keywords = {array signal processing;microphones;speech processing;Wiener filters;multichannel postfilter;diffuse noise sound field;multichannel directional-to-diffuse postfilter;DD-PF;directional speech signal;superdirective beamformer;generalized sidelobe canceller;GSC;microphone inputs;noise cancelling Wiener filter;transient beam to reference ratio;TBRR;multichannel speech presence probability;MCSSP;Speech;Microphones;Arrays;Signal to noise ratio;Speech enhancement;beamforming;multi-channel postfilter;diffuse sound field},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925503.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present a multi-channel Directional-to-Diffuse Postfilter (DD-PF), relying on the assumption of a directional speech signal embedded in diffuse noise. Our postfilter uses the output of a superdirective beamformer like the Generalized Sidelobe Canceller (GSC), which is projected back to the microphone inputs to separate the sound field into its directional and diffuse components. From these components the SNR at the output of the beamformer can be derived without needing a Voice Activity Detector (VAD). The SNR is used to construct a noise cancelling Wiener filter. In our experiments, the developed algorithm outperforms two recent postfilters based on the Transient Beam to Reference Ratio (TBRR) and the Multi-Channel Speech Presence Probability (MCSSP).\n
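Once the directional and diffuse powers are separated, the postfilter reduces to a standard SNR-driven Wiener gain; a minimal sketch of that last step:

```python
import numpy as np

def dd_postfilter_gain(p_dir, p_diff):
    # Treat the directional component as target and the diffuse one as
    # noise: SNR = P_dir / P_diff, Wiener gain H = SNR / (1 + SNR),
    # computed per frequency with no voice activity detector.
    snr = p_dir / np.maximum(p_diff, 1e-12)
    return snr / (1.0 + snr)

print(dd_postfilter_gain(np.array([0.1, 1.0, 10.0]), np.ones(3)))
```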
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n LMS algorithmic variants in active noise and vibration control.\n \n \n \n \n\n\n \n Rupp, M.; and Hausberg, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 691-695, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"LMSPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952217,\n  author = {M. Rupp and F. Hausberg},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {LMS algorithmic variants in active noise and vibration control},\n  year = {2014},\n  pages = {691-695},\n  abstract = {In this article we provide analyses of two low complexity LMS algorithmic variants as they typically appear in the context of FXLMS for active noise or vibration control in which the reference signal is not obtained by sensors but internally generated by the known engine speed. In particular we show that the algorithm with real valued error is robust and exhibits the same steady state quality as the original complex-valued LMS algorithm but at the expense of only achieving half the learning speed while its counterpart with real-valued regression vector behaves only equivalently in the statistical sense.},\n  keywords = {active noise control;least mean squares methods;regression analysis;vibration control;vibration control;reference signal;sensors;engine speed;real valued error;steady state quality;original complex-valued LMS algorithm;learning speed;real-valued regression vector;mean-square-convergence;active noise suppression;Algorithm design and analysis;Vectors;Least squares approximations;Robustness;Engines;Signal processing algorithms;Noise;FXLMS algorithm;error bounds;l2-stability;robustness;mean-square-convergence},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569910133.pdf},\n}\n\n
\n
\n\n\n
\n In this article we provide analyses of two low-complexity LMS algorithmic variants as they typically appear in the context of FXLMS for active noise or vibration control, in which the reference signal is not obtained by sensors but internally generated from the known engine speed. In particular, we show that the algorithm with real-valued error is robust and exhibits the same steady-state quality as the original complex-valued LMS algorithm, at the expense of achieving only half the learning speed, while its counterpart with a real-valued regression vector behaves equivalently only in the statistical sense.\n
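The real-error variant can be sketched next to the standard complex LMS as below; the exact variant analyzed in the paper may differ in detail.

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05, real_error=False):
    # Complex LMS: y = w^H u, w <- w + mu * u * conj(e). The reduced
    # variant adapts with Re(e) only (assumed form): same steady state,
    # roughly half the learning speed, per the paper's claim.
    w = np.zeros(n_taps, dtype=complex)
    mse = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        e = d[n] - np.vdot(w, u)                 # a-priori error
        g = e.real if real_error else e
        w += mu * u * np.conj(g)
        mse.append(abs(e) ** 2)
    return w, np.mean(mse[-200:])

rng = np.random.default_rng(0)
x = (rng.standard_normal(3000) + 1j * rng.standard_normal(3000)) / np.sqrt(2)
h = np.array([0.5 - 0.2j, 0.3 + 0.1j, -0.1j, 0.05])
d = np.convolve(x, h)[:len(x)] + 0.01 * (rng.standard_normal(len(x))
                                         + 1j * rng.standard_normal(len(x)))
for flag in (False, True):
    w, mse = lms(x, d, real_error=flag)
    print(flag, np.round(np.linalg.norm(w - np.conj(h)), 3), mse)
```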
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Asr systems in noisy environment: Auditory features based on gammachirp filter using the AURORA database.\n \n \n \n\n\n \n Rahali, H.; Hajaiej, Z.; and Ellouze, N.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 696-700, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952218,\n  author = {H. Rahali and Z. Hajaiej and N. Ellouze},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Asr systems in noisy environment: Auditory features based on gammachirp filter using the AURORA database},\n  year = {2014},\n  pages = {696-700},\n  abstract = {This paper deals with the analysis of Automatic Speech Recognition (ASR) suitable for usage within noisy environment in various conditions. Recent research has shown that auditory features based on gammachirp filterbank (GF) are promising to improve robustness of ASR systems against noise. The behavior of parameterization techniques was analyzed from the viewpoint of robustness against noise. It was done for Mel Frequency Cepstral Coefficients (MFCC), Perceptual Linear Prediction (PLP), Gammachirp Filterbank Cepstral Coefficient (GFCC) and Gammachirp Filterbank Perceptual Linear Prediction (GF-PLP). GFCC features have shown best recognition efficiency for clean as well as for noisy database. GFCC and GF-PLP features are calculated using Matlab and saved in HTK format. Training and testing for speech recognition is done using HTK. The above-mentioned techniques were tested with impulsive signals within AURORA databases.},\n  keywords = {cepstral analysis;channel bank filters;speech recognition;ASR systems;gammachirp filters;MFCC;GF-PLP;GFCC;gammachirp filterbank perceptual linear prediction;gammachirp filterbank cepstral coefficient;mel frequency cepstral coefficients;parameterization techniques;automatic speech recognition;AURORA database;auditory features;noisy environment;Filter banks;Mel frequency cepstral coefficient;Speech;Noise;Speech recognition;Feature extraction;Gammachirp filter;Fourier transforms FFT;impulsive noise;MFCC;PLP},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n This paper deals with the analysis of Automatic Speech Recognition (ASR) suitable for use in noisy environments under various conditions. Recent research has shown that auditory features based on the gammachirp filterbank (GF) are promising for improving the robustness of ASR systems against noise. The behavior of parameterization techniques was analyzed from the viewpoint of robustness against noise. This was done for Mel Frequency Cepstral Coefficients (MFCC), Perceptual Linear Prediction (PLP), Gammachirp Filterbank Cepstral Coefficients (GFCC) and Gammachirp Filterbank Perceptual Linear Prediction (GF-PLP). GFCC features have shown the best recognition performance on clean as well as noisy databases. GFCC and GF-PLP features are calculated using Matlab and saved in HTK format. Training and testing for speech recognition are done using HTK. The above-mentioned techniques were tested with impulsive signals from the AURORA databases.\n
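For context, the gammachirp impulse response underlying a GF filterbank has the Irino-Patterson form t^(n-1) exp(-2πb·ERB(fc)t) cos(2πfc·t + c·ln t); a sketch with common default parameter values (assumed here, not taken from the paper):

```python
import numpy as np

def gammachirp(fc, fs, c=-2.0, n=4, b=1.019, dur=0.05):
    # One gammachirp filter of a GF filterbank; c = 0 reduces it to the
    # ordinary gammatone. ERB(fc) per Glasberg-Moore (approximate).
    t = np.arange(1, int(dur * fs)) / fs          # skip t = 0 for ln(t)
    erb = 24.7 + 0.108 * fc                       # equivalent rectangular bandwidth, Hz
    g = t ** (n - 1) * np.exp(-2 * np.pi * b * erb * t) \
        * np.cos(2 * np.pi * fc * t + c * np.log(t))
    return g / np.linalg.norm(g)

h = gammachirp(1000.0, 16000)   # 1 kHz channel impulse response
print(h[:4])
```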
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fir band-pass digital differentiators with flat passband and equiripple stopband characteristics.\n \n \n \n \n\n\n \n Yoshida, T.; Sugiura, Y.; and Aikawa, N.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 701-705, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"FirPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952219,\n  author = {T. Yoshida and Y. Sugiura and N. Aikawa},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Fir band-pass digital differentiators with flat passband and equiripple stopband characteristics},\n  year = {2014},\n  pages = {701-705},\n  abstract = {Maximally flat digital differentiators are widely used as narrow-band digital differentiators because of their high accuracy around their center frequency of flat property. To obtain highly accurate differentiation over narrow-band, it is important to avoid the undesirable amplification of noise. In this paper, we introduce a design method of linear phase FIR band-pass differentiators with flat passband and equiripple stopband characteristics. The center frequency at the passband of the designed differentiators can be adjusted arbitrarily. Moreover, the proposed transfer function consists of two functions, i.e. the passband function and the stopband one. The weighting coefficients of the passband function are derived using a closed-form formula based on Jacobi Polynomial. The weighting coefficients of the stopband function are achieved using Remez algorithm.},\n  keywords = {band-pass filters;band-stop filters;equiripple filters;FIR filters;Jacobian matrices;transfer functions;FIR band pass digital differentiators;flat passband;equiripple stopband characteristics;narrow band digital differentiators;linear phase FIR band pass differentiators;transfer function;closed form formula;Jacobi polynomial;Remez algorithm;Passband;Band-pass filters;Finite impulse response filters;Attenuation;Design methodology;Frequency response;Bandwidth;Digital differentiators;maximally flat;Remez algorithm;closed-form;Jacobi polynomial},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917745.pdf},\n}\n\n
\n
\n\n\n
\n Maximally flat digital differentiators are widely used as narrow-band digital differentiators because of the high accuracy provided by their flat response around the center frequency. To obtain highly accurate differentiation over a narrow band, it is important to avoid the undesirable amplification of noise. In this paper, we introduce a design method for linear-phase FIR band-pass differentiators with flat passband and equiripple stopband characteristics. The center frequency of the passband of the designed differentiators can be adjusted arbitrarily. Moreover, the proposed transfer function consists of two functions, i.e. a passband function and a stopband function. The weighting coefficients of the passband function are derived using a closed-form formula based on Jacobi polynomials. The weighting coefficients of the stopband function are obtained using the Remez algorithm.\n
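A quick way to assess any candidate differentiator against the ideal |H(e^{jω})| = ω response, illustrated here with the simple central-difference filter (not the paper's design):

```python
import numpy as np

def diff_error(h, w):
    # Deviation of an FIR filter's magnitude response from the ideal
    # differentiator response |H(e^{jw})| = w (w in rad/sample).
    n = np.arange(len(h))
    H = np.exp(-1j * np.outer(w, n)) @ h
    return np.abs(H) - w

h = np.array([0.5, 0.0, -0.5])          # central difference: |H(e^{jw})| = sin(w)
w = np.linspace(0.01, 1.5, 200)
err = diff_error(h, w)
print(np.abs(err[w < 0.2]).max(), np.abs(err).max())  # accurate only near dc
```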
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Conjugate symmetric sequency ordered Walsh Fourier transform.\n \n \n \n \n\n\n \n Pei, S.; and Wen, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 706-710, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ConjugatePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952220,\n  author = {S. Pei and C. Wen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Conjugate symmetric sequency ordered Walsh Fourier transform},\n  year = {2014},\n  pages = {706-710},\n  abstract = {A new family of transforms, which is called the conjugate symmetric sequency-ordered generalized Walsh-Fourier transform (CS-SGWFT), is proposed in this paper. The CS-SGWFT generalized the existing transforms including the conjugate symmetric sequency ordered complex Hadamard transform (CS-SCHT) and the discrete Fourier transform (DFT) as the special cases of the CS-SGWFT. Like the CS-SCHT and the DFT, the spectrums of the CS-SGWFT for real input signals are conjugate symmetric so that we need only half memory to store the transform results. The properties of the CS-SGWFT are similar to those of the CS-SCHT and DFT, including orthogonality, sequency ordering, and conjugate symmetric. Meanwhile, the proposed CS-SGWFT has radix-2 fast algorithm. Finally, applications of the CS-SGWFT for image noise removal and spectrum estimation are proposed.},\n  keywords = {discrete Fourier transforms;Hadamard transforms;image denoising;signal sampling;spectrum estimation;image noise removal;radix-2 fast algorithm;sequency ordering;DFT;discrete Fourier transform;CS-SCHT;conjugate symmetric sequency ordered complex Hadamard transform;CS-SGWFT;conjugate symmetric sequency-ordered generalized Walsh-Fourier transform;Discrete Fourier transforms;Symmetric matrices;Spectral analysis;Noise;Interference;Hadamard transform;Walsh transform;Discrete Fourier transform;Sequency ordered;Conjugate symmetric},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569918167.pdf},\n}\n\n
\n
\n\n\n
\n A new family of transforms, called the conjugate symmetric sequency-ordered generalized Walsh-Fourier transform (CS-SGWFT), is proposed in this paper. The CS-SGWFT generalizes existing transforms, including the conjugate symmetric sequency-ordered complex Hadamard transform (CS-SCHT) and the discrete Fourier transform (DFT) as special cases. Like the CS-SCHT and the DFT, the spectra of the CS-SGWFT for real input signals are conjugate symmetric, so only half the memory is needed to store the transform results. The properties of the CS-SGWFT are similar to those of the CS-SCHT and DFT, including orthogonality, sequency ordering, and conjugate symmetry. Meanwhile, the proposed CS-SGWFT admits a radix-2 fast algorithm. Finally, applications of the CS-SGWFT to image noise removal and spectrum estimation are presented.\n
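The storage-halving property is the same one familiar from the DFT of a real signal, easily checked numerically:

```python
import numpy as np

x = np.random.default_rng(0).standard_normal(8)
X = np.fft.fft(x)
# Conjugate symmetry for real input: X[k] == conj(X[N-k]), so only half
# of the spectrum needs storing; np.fft.rfft exploits exactly this, and
# the CS-SGWFT keeps the same property for its generalized spectra.
print(np.allclose(X[1:], np.conj(X[1:][::-1])))   # True
print(np.allclose(np.fft.rfft(x), X[:5]))         # True: half-spectrum storage
```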
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Switching extensible FIR filter bank for adaptive horizon size in FIR filtering.\n \n \n \n \n\n\n \n Pak, J. M.; Ki Ahn, C.; Lim, M. T.; and Shmali, Y. S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 711-715, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"SwitchingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952221,\n  author = {J. M. Pak and C. {Ki Ahn} and M. T. Lim and Y. S. Shmali},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Switching extensible FIR filter bank for adaptive horizon size in FIR filtering},\n  year = {2014},\n  pages = {711-715},\n  abstract = {Horizon size is an important parameter that influences estimation performance of finite impulse response (FIR) filter. In this paper, we propose a novel method called switching extensible FIR filter bank (SEFFB) to adapt horizon size based on maximum likelihood strategy. We verify that the SEFFB achieves a significant performance improvement compared with an ordinary FIR filter which uses a fixed horizon size.},\n  keywords = {FIR filters;maximum likelihood estimation;adaptive horizon size;finite impulse response filter;switching extensible FIR filter bank;SEFFB;maximum likelihood strategy;Finite impulse response filters;Switches;Lungs;Abstracts;Size measurement;Adaptation models;switching extensible FIR filter bank (SEFFB);FIR filter;state estimation;horizon size},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569918395.pdf},\n}\n\n
\n
\n\n\n
\n Horizon size is an important parameter that influences the estimation performance of a finite impulse response (FIR) filter. In this paper, we propose a novel method called the switching extensible FIR filter bank (SEFFB) to adapt the horizon size based on a maximum-likelihood strategy. We verify that the SEFFB achieves a significant performance improvement compared with an ordinary FIR filter that uses a fixed horizon size.\n
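A purely hypothetical toy of likelihood-based horizon switching over a bank of moving-average FIR estimators; the paper's SEFFB rule is more general than this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def seffb_like(z, horizons=(4, 8, 16, 32), sigma=1.0):
    # At each step, every horizon's FIR (moving-average) prediction is
    # scored by the Gaussian likelihood of the new measurement, and the
    # best-scoring horizon is switched in (toy stand-in for the ML rule).
    est = []
    for k in range(max(horizons), len(z)):
        preds = [np.mean(z[k - N:k]) for N in horizons]
        ll = [-(z[k] - p) ** 2 / (2 * sigma ** 2) for p in preds]
        est.append(preds[int(np.argmax(ll))])
    return np.array(est)

z = 2.0 + rng.standard_normal(200)   # noisy constant state
print(seffb_like(z)[-5:])
```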
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Compressed sensing under strong noise. Application to imaging through multiply scattering media.\n \n \n \n \n\n\n \n Liutkus, A.; Martina, D.; Gigan, S.; and Daudet, L.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 716-720, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"CompressedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952222,\n  author = {A. Liutkus and D. Martina and S. Gigan and L. Daudet},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Compressed sensing under strong noise. Application to imaging through multiply scattering media},\n  year = {2014},\n  pages = {716-720},\n  abstract = {Compressive sensing exploits the structure of signals to acquire them with fewer measurements than required by the Nyquist-Shannon theory. However, the design of practical compressive sensing hardware raises several issues. First, one has to elicit a measurement mechanism that exhibits adequate incoherence properties. Second, the system should be robust to noise, whether it be measurement noise, or calibration noise, i.e. discrepancies between theoretical and actual measurement matrices. Third, to improve performance in the case of strong noise, it is not clear whether one should increase the number of sensors, or rather take several measurements, thus settling in the multiple measurement vector scenario (MMV). Here, we first show how measurement matrices may be estimated by calibration instead of being assumed perfectly known, and second that if the noise level reaches a few percents of the signal level, MMV is the only way to sample sparse signals at sub-Nyquist sampling rates.},\n  keywords = {compressed sensing;electromagnetic wave scattering;image sampling;scattering media;signal structure;Nyquist-Shannon theory;compressive sensing hardware;incoherence properties;measurement noise;calibration noise;theoretical measurement matrix;actual measurement matrix;multiple-measurement vector scenario;MMV scenario;noise level;signal level;sparse signal sampling;subNyquist sampling rate;Noise;Sensors;Noise measurement;Image reconstruction;Compressed sensing;Scattering;Calibration;compressive sensing;calibration;MMV;experimental study;optical imaging;scattering media},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922033.pdf},\n}\n\n
\n
\n\n\n
\n Compressive sensing exploits the structure of signals to acquire them with fewer measurements than required by Nyquist-Shannon theory. However, the design of practical compressive sensing hardware raises several issues. First, one has to elicit a measurement mechanism that exhibits adequate incoherence properties. Second, the system should be robust to noise, whether it be measurement noise or calibration noise, i.e. discrepancies between theoretical and actual measurement matrices. Third, to improve performance in the case of strong noise, it is not clear whether one should increase the number of sensors, or rather take several measurements, thus settling in the multiple measurement vector (MMV) scenario. Here, we first show how measurement matrices may be estimated by calibration instead of being assumed perfectly known, and second that if the noise level reaches a few percent of the signal level, MMV is the only way to sample sparse signals at sub-Nyquist sampling rates.\n
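The calibration idea, estimating the measurement matrix from known training inputs instead of assuming it, can be sketched as a least-squares fit (a simplified stand-in for the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 32, 128, 200                     # sensors, signal length, calibration shots

A_true = rng.standard_normal((m, n))       # unknown physical measurement matrix
X_cal = rng.standard_normal((n, k))        # known calibration inputs
Y_cal = A_true @ X_cal + 0.01 * rng.standard_normal((m, k))

# Least-squares calibration: A_hat = Y X^+ (needs k >= n shots for a
# well-posed fit).
A_hat = Y_cal @ np.linalg.pinv(X_cal)
print(np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))   # small relative error
```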
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive randomized coordinate descent for solving sparse systems.\n \n \n \n \n\n\n \n Onose, A.; and Dumitrescu, B.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 721-725, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AdaptivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952223,\n  author = {A. Onose and B. Dumitrescu},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive randomized coordinate descent for solving sparse systems},\n  year = {2014},\n  pages = {721-725},\n  abstract = {Randomized coordinate descent (RCD), attractive for its robustness and ability to cope with large scale problems, is here investigated for the first time in an adaptive context. We present an RCD adaptive algorithm for finding sparse least-squares solutions to linear systems, in particular for FIR channel identification. The algorithm has low and tunable complexity and, as a special feature, adapts the probabilities with which the coordinates are chosen at each time moment. We show through simulation that the algorithm has tracking properties near those of the best current methods and investigate the trade-offs in the choices of the parameters.},\n  keywords = {adaptive signal processing;FIR filters;least squares approximations;linear systems;probability;adaptive randomized coordinate descent algoritm;sparse systems;RCD adaptive algorithm;sparse least-squares solutions;large scale problems;linear systems;FIR channel identification;time moment;tracking properties;Complexity theory;Convergence;Matching pursuit algorithms;Adaptive algorithms;Linear systems;Buildings;Context;adaptive algorithm;channel identification;sparse filter;least squares;coordinate descent;randomization},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922113.pdf},\n}\n\n
\n
\n\n\n
\n Randomized coordinate descent (RCD), attractive for its robustness and ability to cope with large scale problems, is here investigated for the first time in an adaptive context. We present an RCD adaptive algorithm for finding sparse least-squares solutions to linear systems, in particular for FIR channel identification. The algorithm has low and tunable complexity and, as a special feature, adapts the probabilities with which the coordinates are chosen at each time moment. We show through simulation that the algorithm has tracking properties near those of the best current methods and investigate the trade-offs in the choices of the parameters.\n
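The non-adaptive core of the method, randomized coordinate descent for a least-squares problem with optionally non-uniform coordinate probabilities, can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def rcd_least_squares(A, b, iters=2000, probs=None):
    # min ||Ax - b||^2: draw a coordinate at random (probs can be adapted
    # over time, as in the paper) and minimize exactly along it, keeping
    # the residual r = Ax - b updated cheaply.
    m, n = A.shape
    x, r = np.zeros(n), -b.copy()
    col_sq = np.sum(A ** 2, axis=0)
    for _ in range(iters):
        j = rng.choice(n, p=probs)
        step = A[:, j] @ r / col_sq[j]
        x[j] -= step
        r -= step * A[:, j]
    return x

A = rng.standard_normal((100, 20))
x_true = np.zeros(20); x_true[[3, 7]] = [1.0, -2.0]   # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(np.round(rcd_least_squares(A, b), 2))
```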
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n RLS sparse system identification using LAR-based situational awareness.\n \n \n \n \n\n\n \n Valdman, C.; De Campos, M. L. R.; and Apolinário, J. A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 726-730, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"RLSPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952224,\n  author = {C. Valdman and M. L. R. {De Campos} and J. A. Apolinário},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {RLS sparse system identification using LAR-based situational awareness},\n  year = {2014},\n  pages = {726-730},\n  abstract = {In this paper we propose the combination of the recursive least squares (RLS) and the least angle regression (LAR) algorithms for nonlinear system identification. In the application of interest, the model possesses a large number of coefficients, of which only few are different from zero. We use the LAR algorithm together with a geometrical stopping criterion to establish the number and position of the coefficients to be estimated by the RLS algorithm. The output error is used for indicating model inadequacy and therefore triggering the LAR algorithm. The proposed scheme is capable of modeling intrinsically sparse systems with better accuracy than the RLS algorithm alone, and lower energy consumption.},\n  keywords = {identification;least squares approximations;nonlinear systems;regression analysis;signal processing;RLS sparse system identification;LAR-based situational awareness;recursive least square algorithm;least angle regression algorithms;nonlinear system identification;geometrical stopping criterion;output error;intrinsically sparse system modelling;energy consumption;signal processing algorithms;Signal processing algorithms;Vectors;Indexes;Computational complexity;Heuristic algorithms;Nonlinear systems;Recursive Least Squares;Least Angle Regression;Volterra},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923627.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we propose the combination of the recursive least squares (RLS) and least angle regression (LAR) algorithms for nonlinear system identification. In the application of interest, the model possesses a large number of coefficients, of which only a few are different from zero. We use the LAR algorithm together with a geometrical stopping criterion to establish the number and positions of the coefficients to be estimated by the RLS algorithm. The output error is used to indicate model inadequacy and therefore to trigger the LAR algorithm. The proposed scheme is capable of modeling intrinsically sparse systems with better accuracy than the RLS algorithm alone, and with lower energy consumption.\n
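The RLS stage is the textbook exponentially weighted recursion; in the proposed scheme it would run only on the coefficient subset selected by the LAR stage.

```python
import numpy as np

def rls(x, d, n_taps=4, lam=0.99, delta=100.0):
    # Standard exponentially weighted RLS with inverse-correlation matrix P.
    w = np.zeros(n_taps)
    P = delta * np.eye(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        k = P @ u / (lam + u @ P @ u)        # gain vector
        e = d[n] - w @ u                     # a-priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = np.convolve(x, [0.8, 0.0, -0.3, 0.1])[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(rls(x, d), 2))    # ~ [0.8, 0.0, -0.3, 0.1]
```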
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-power simplex ultrasound communication for indoor localization.\n \n \n \n \n\n\n \n Ens, A.; Reindl, L. M.; Janson, T.; and Schindelhauer, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 731-735, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Low-powerPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952225,\n  author = {A. Ens and L. M. Reindl and T. Janson and C. Schindelhauer},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Low-power simplex ultrasound communication for indoor localization},\n  year = {2014},\n  pages = {731-735},\n  abstract = {We propose an ultrasound communication system designed for time difference of arrival (TDOA) based indoor localization. The concept involves an infrastructure of stationary and independent senders tracking mobile receivers. The main goal is pure line-of-sight (LOS) communication for correct localization. When ignoring the reception energy of multipaths the transmission range is reduced to 20 meters and we need more devices to cover the same area (0.03 devices/m2). Thus, for cost-effectiveness and easy installation, we focus in the sender design on low power consumption for long battery or even energy independent operation. Moreover, we use the energy efficient π=4-DQPSK modulation technique to send 8 data bits in 3.5 ms. An identifier in each message along with the reception time can be used for TDOA localization. The frame synchronization error for a distance of 20m at 3 dB SNR is 11.2 ns. Thus, for speed of sound the distance measurement error is 3.7 μm.},\n  keywords = {differential phase shift keying;direction-of-arrival estimation;indoor communication;power consumption;quadrature phase shift keying;time-of-arrival estimation;energy efficient π/4-DQPSK modulation technique;independent sender tracking mobile receivers;distance measurement error;frame synchronization error;TDOA localization;long battery;energy independent operation;low power consumption;sender design;multipath reception energy;stationary infrastructure;line-of-sight communication;LOS communication;time difference of arrival based indoor localization;low-power simplex ultrasound communication system;time 3.5 ms;distance 20 m;time 11.2 ns;Ultrasonic imaging;Receivers;Synchronization;Signal to noise ratio;Acoustics;Frequency shift keying;Mobile communication;Ultrasound;DQPSK;Transmission;ToA;TDOA;Localization;Communication;Line of Sight;Class-E Amplifier},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924685.pdf},\n}\n\n
\n
\n\n\n
\n We propose an ultrasound communication system designed for time difference of arrival (TDOA) based indoor localization. The concept involves an infrastructure of stationary and independent senders tracking mobile receivers. The main goal is pure line-of-sight (LOS) communication for correct localization. When ignoring the reception energy of multipaths, the transmission range is reduced to 20 meters and more devices are needed to cover the same area (0.03 devices/m²). Thus, for cost-effectiveness and easy installation, we focus in the sender design on low power consumption for long battery life or even energy-independent operation. Moreover, we use the energy-efficient π/4-DQPSK modulation technique to send 8 data bits in 3.5 ms. An identifier in each message, along with the reception time, can be used for TDOA localization. The frame synchronization error for a distance of 20 m at 3 dB SNR is 11.2 ns. Thus, at the speed of sound, the distance measurement error is 3.7 μm.\n
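π/4-DQPSK encodes each bit pair in a phase increment from {±π/4, ±3π/4}, so the receiver needs no absolute phase reference; a sketch with one common (assumed) bit mapping:

```python
import numpy as np

def pi4_dqpsk(bits):
    # Information rides on the phase *difference* between consecutive
    # symbols; the dibit -> increment mapping below is a standard choice
    # and may differ from the one used in the paper.
    dphi = {(0, 0): np.pi / 4, (0, 1): 3 * np.pi / 4,
            (1, 1): -3 * np.pi / 4, (1, 0): -np.pi / 4}
    phase, out = 0.0, []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        phase += dphi[(b0, b1)]
        out.append(np.exp(1j * phase))
    return np.array(out)

print(np.angle(pi4_dqpsk([0, 0, 1, 1, 0, 1]), deg=True))  # 45, -90, 45 degrees
```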
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sub-Nyquist 1 bit sampling system for sparse multiband signals.\n \n \n \n \n\n\n \n Fu, N.; Yang, L.; and Zhang, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 736-740, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Sub-NyquistPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952226,\n  author = {N. Fu and L. Yang and J. Zhang},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Sub-Nyquist 1 bit sampling system for sparse multiband signals},\n  year = {2014},\n  pages = {736-740},\n  abstract = {Efficient sampling of wideband analog signals is a hard problem because their Nyquist rates may exceed the specifications of the analog-to-digital converters by magnitude. Modulated Wideband Converter (MWC) is a known method to sample sparse multiband signals below the Nyquist rate and the precision recovery relies on high precision quantization of samples which may take a great bit-budget. This paper proposes an alternative system that optimizes space utilization by applying comparator in sub-Nyquist sampling system. The system first multiplies the signal by a bank of periodic waveforms, and then it performs lowpass filtering, sampling and quantization through the comparator which just keeps the sign information. And we introduce a corresponding algorithm for perfect recovery. The primary design goals are efficient hardware implementation and low bit-budget. We compare our system with MWC to prove its advantages in condition of fixed bit-budget, particularly in low levels of input signal to noise ratio.},\n  keywords = {analogue-digital conversion;comparators (circuits);low-pass filters;quantisation (signal);signal sampling;sub-Nyquist sampling system;wideband analog signal sampling;Nyquist rates;analog-to-digital converters;modulated wideband converter;MWC;sample sparse multiband signal method;high precision sample quantization;space utilization;periodic waveforms;comparator;low-pass filtering;sign information;low bit-budget;fixed bit-budget condition;input signal to noise ratio;word length 1 bit;Wideband;Quantization (signal);Compressed sensing;Vectors;Matching pursuit algorithms;Noise;Robustness;Compressive sensing;sub-Nyquist sampling;sparse multiband signal;modulated wideband converter;1 bit quantization},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925013.pdf},\n}\n\n
\n
\n\n\n
\n Efficient sampling of wideband analog signals is a hard problem because their Nyquist rates may exceed the specifications of analog-to-digital converters by orders of magnitude. The Modulated Wideband Converter (MWC) is a known method to sample sparse multiband signals below the Nyquist rate, but precise recovery relies on high-precision quantization of the samples, which may consume a large bit-budget. This paper proposes an alternative system that optimizes space utilization by applying a comparator in the sub-Nyquist sampling system. The system first multiplies the signal by a bank of periodic waveforms, and then performs lowpass filtering, sampling, and quantization through the comparator, which keeps only the sign information. We also introduce a corresponding algorithm for perfect recovery. The primary design goals are efficient hardware implementation and a low bit-budget. We compare our system with the MWC to demonstrate its advantages under a fixed bit-budget, particularly at low input signal-to-noise ratios.\n
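A minimal sketch of one branch of the sampling chain described above, under assumed parameters (waveform period, filter cutoff, and decimation factor are all illustrative, not the paper's): mix with a periodic ±1 waveform, low-pass filter, decimate, and keep only the sign.

```python
import numpy as np
from scipy.signal import firwin, lfilter

rng = np.random.default_rng(0)
fs = 1000.0                                  # simulation grid rate (assumed)
t = np.arange(2000) / fs
x = np.cos(2 * np.pi * 37 * t) + 0.5 * np.cos(2 * np.pi * 113 * t)  # toy multiband input

chips = rng.choice([-1.0, 1.0], size=50)     # one period of the pseudo-random +/-1 waveform
p = np.tile(chips, len(t) // len(chips))     # periodic mixing waveform
mixed = x * p                                # mixing stage

h = firwin(numtaps=65, cutoff=0.08)          # low-pass filter (normalized cutoff, assumed)
filtered = lfilter(h, 1.0, mixed)

decim = 20                                   # sub-Nyquist decimation factor (assumed)
bits = np.sign(filtered[::decim])            # comparator output: sign information only
```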
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast reconstruction of nonuniformly sampled bandlimited signal using Slepian functions.\n \n \n \n \n\n\n \n Rzepka, D.; and Miśkowicz, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 741-745, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"FastPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952227,\n  author = {D. Rzepka and M. Miśkowicz},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Fast reconstruction of nonuniformly sampled bandlimited signal using Slepian functions},\n  year = {2014},\n  pages = {741-745},\n  abstract = {In this paper, we present an algorithm for fast reconstruction of bandlimited signal from nonuniform samples using shift-invariant space with Slepian function as a generator. The motivation to use Slepian functions is that they are bandlimited and most of their energy is concentrated in the finite time interval [-τ, τ]. This allows their truncation in time with controllable error, and results in a reduction of computational complexity of reconstruction process to O(N L2). where N is number of samples, and L ≈ τ. As decreasing τ increases the truncation error, the algorithm offers a tradeoff between speed and accuracy. The simulation example of signal reconstruction is provided.},\n  keywords = {computational complexity;signal reconstruction;nonuniformly-sampled bandlimited signal;Slepian functions;bandlimited signal reconstruction;shift-invariant space;finite time interval;computational complexity reduction;reconstruction process;truncation error;Abstracts;Ions;Splines (mathematics);nonuniform sampling;signal reconstruction;fast algorithm;Slepian functions;prolate spheroidal wave functions},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925125.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present an algorithm for fast reconstruction of a bandlimited signal from nonuniform samples using a shift-invariant space with a Slepian function as the generator. The motivation for using Slepian functions is that they are bandlimited and most of their energy is concentrated in the finite time interval [-τ, τ]. This allows their truncation in time with controllable error, and reduces the computational complexity of the reconstruction process to O(NL²), where N is the number of samples and L ≈ τ. As decreasing τ increases the truncation error, the algorithm offers a tradeoff between speed and accuracy. A simulation example of signal reconstruction is provided.\n
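A hedged sketch of the reconstruction idea, using a discrete prolate spheroidal sequence (scipy's dpss) as a stand-in for the truncated Slepian generator; all sizes are illustrative. The least-squares solve is shown dense for brevity, whereas exploiting the banded structure induced by truncation is what yields the O(NL²) cost the abstract cites.

```python
import numpy as np
from scipy.signal import windows

# Discrete stand-in for the truncated Slepian generator: a DPSS taper on [-L, L].
L = 8                                    # truncation half-width in grid units (assumed)
g = windows.dpss(2 * L + 1, NW=2.5)      # most energy-concentrated sequence

def generator(u):
    """Truncated generator at (possibly fractional) offsets u; zero outside [-L, L]."""
    idx = u + L
    vals = np.zeros(u.shape)
    inside = (idx >= 0) & (idx <= 2 * L)
    vals[inside] = np.interp(idx[inside], np.arange(2 * L + 1), g)
    return vals

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 120))    # nonuniform sampling instants
y = np.sinc(0.2 * (t - 50.0))            # toy bandlimited signal (assumed)

K = 100                                  # integer shifts of the generator
A = generator(t[:, None] - np.arange(K)[None, :])  # banded: zero beyond +/-L per row
c, *_ = np.linalg.lstsq(A, y, rcond=None)          # dense here; a banded solver gives O(N L^2)
```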
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Compressive blind source recovery with Random Demodulation.\n \n \n \n \n\n\n \n Fu, N.; Yao, T.; and Xu, H.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 746-750, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"CompressivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952228,\n  author = {N. Fu and T. Yao and H. Xu},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Compressive blind source recovery with Random Demodulation},\n  year = {2014},\n  pages = {746-750},\n  abstract = {Distributed Compressive Sensing (DCS) theory effectively reduces the number of measurements of each signal, by exploiting both intra- and inter-signal correlation structures, which saves on the costs of sampling devices as well as of communication and data processing. In many fields, only the mixtures of source signals are available for compressive sampling, without prior information on both the source signals and the mixing process. However, people are still interested in the source signal rather than the mixing signals. There is a basic solution which reconstructs the mixing signals from the compressive measurements first and then separates the source signals by estimating mixing matrix. However, the reconstruction process takes considerable time and also introduces error into the estimation step. A novel method is proposed in this paper, which directly separates the mixing compressive measurements by estimating the mixing matrix first and then reconstruct the interesting source signals. At the same time, in most situations, the source signals are analog signals. In this paper, Random Demodulation (RD) system is introduced to compressively sample the analog signal. We also verify the independence and non-Gaussian property of the compressive measurement. The experimental results proves that the proposed method is feasible and compared to the basic method, the estimation accuracy is improved.},\n  keywords = {blind source separation;compressed sensing;compressive blind source recovery;random demodulation;distributed compressive sensing;DCS theory;intersignal correlation structures;intrasignal correlation structures;compressive sampling;compressive measurements;mixing matrix estimation;analog signals;source signals;random demodulation system;RD system;analog signal;Signal processing algorithms;Compressed sensing;Matching pursuit algorithms;Mutual information;Signal to noise ratio;Demodulation;Distributed compressive sensing (DCS);independent component analysis (ICA);random demodulation (RD);mixing matrix estimating},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925193.pdf},\n}\n\n
\n
\n\n\n
\n Distributed Compressive Sensing (DCS) theory effectively reduces the number of measurements of each signal by exploiting both intra- and inter-signal correlation structures, which saves on the costs of sampling devices as well as of communication and data processing. In many fields, only mixtures of the source signals are available for compressive sampling, without prior information on either the source signals or the mixing process. Often, however, it is the source signals rather than the mixtures that are of interest. A basic solution reconstructs the mixture signals from the compressive measurements first and then separates the source signals by estimating the mixing matrix. However, the reconstruction process takes considerable time and also introduces error into the estimation step. A novel method is proposed in this paper, which directly separates the compressive measurements of the mixtures by estimating the mixing matrix first, and then reconstructs the source signals of interest. Moreover, in most situations the source signals are analog, so a Random Demodulation (RD) system is introduced to compressively sample them. We also verify the independence and non-Gaussianity of the compressive measurements. The experimental results show that the proposed method is feasible and that, compared to the basic method, the estimation accuracy is improved.\n
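For readers unfamiliar with the acquisition front end, a minimal sketch of Random Demodulation under the standard chip-then-integrate structure (all parameters illustrative): the analog mixture is modeled on a dense grid, multiplied by a ±1 chipping sequence, and integrated over windows to produce low-rate measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024                                     # dense grid modeling the analog mixture (assumed)
t = np.arange(N) / N
mixture = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sign(np.sin(2 * np.pi * 8 * t))  # toy mixture

chips = rng.choice([-1.0, 1.0], size=N)      # pseudo-random demodulating sequence
R = 16                                       # integrate-and-dump window length (assumed)
y = (mixture * chips).reshape(-1, R).sum(axis=1)  # N/R low-rate compressive measurements
```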
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the steady-state and tracking analysis of the complex SRLMS algorithm.\n \n \n \n \n\n\n \n Faiz, M. M. U.; and Zerguine, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 751-754, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952229,\n  author = {M. M. U. Faiz and A. Zerguine},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On the steady-state and tracking analysis of the complex SRLMS algorithm},\n  year = {2014},\n  pages = {751-754},\n  abstract = {In this paper, the steady-state and tracking behavior of the complex signed regressor least mean square (SRLMS) algorithm are analyzed in stationary and nonstationary environments, respectively. Here, the SRLMS algorithm is analyzed in the presence of complex-valued white and correlated Gaussian input data. Moreover, a comparison between the convergence performance of the complex SRLMS algorithm and the complex least mean square (LMS) algorithm is also presented. Finally, simulation results are presented to support our analytical findings.},\n  keywords = {Gaussian noise;least mean squares methods;regression analysis;complex SRLMS algorithm;signed regressor least mean square algorithm;complex valued white Gaussian input data;correlated Gaussian input data;Least squares approximations;Steady-state;Algorithm design and analysis;Signal processing algorithms;Convergence;Noise;Vectors;LMS;SRLMS;Steady-state;Tracking},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925217.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, the steady-state and tracking behavior of the complex signed regressor least mean square (SRLMS) algorithm are analyzed in stationary and nonstationary environments, respectively. Here, the SRLMS algorithm is analyzed in the presence of complex-valued white and correlated Gaussian input data. Moreover, a comparison between the convergence performance of the complex SRLMS algorithm and the complex least mean square (LMS) algorithm is also presented. Finally, simulation results are presented to support our analytical findings.\n
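For concreteness, the signed-regressor update differs from complex LMS only in replacing the regressor by its complex sign. A minimal sketch under one common convention (filter output y = wᴴu); the order, step size, and sign convention are illustrative, not necessarily the paper's.

```python
import numpy as np

def csgn(z):
    """Complex sign: the sign of the real and imaginary parts taken separately."""
    return np.sign(z.real) + 1j * np.sign(z.imag)

def srlms(x, d, order=8, mu=0.01):
    """Minimal complex signed-regressor LMS identification loop (illustrative convention)."""
    w = np.zeros(order, dtype=complex)
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]          # regressor vector
        e = d[n] - np.vdot(w, u)          # a-priori error with y = w^H u
        w += mu * csgn(u) * np.conj(e)    # only the regressor is replaced by its sign
    return w
```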
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Opencl parallelization of the HEVC de-quantization and inverse transform for heterogeneous platforms.\n \n \n \n \n\n\n \n De Souza, D. F.; Roma, N.; and Sousa, L.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 755-759, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OpenclPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952230,\n  author = {D. F. {De Souza} and N. Roma and L. Sousa},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Opencl parallelization of the HEVC de-quantization and inverse transform for heterogeneous platforms},\n  year = {2014},\n  pages = {755-759},\n  abstract = {To tackle the growing demand for high efficient implementations of video decoders in a vast set of heterogeneous platforms, a high performance implementation of the HEVC de-quantization and inverse Discrete Cosine Transform (IDCT) modules is proposed. To efficiently take advantage of the several different GPU architectures that are currently available on these platforms, the proposed modules consist on unified OpenCL implementations, allowing their migration and acceleration in any of the available devices of current heterogeneous platforms. To achieve such objective, the memory accesses were highly optimized and no synchronization points were required, in order to attain the maximum performance. The presented experimental results evaluated the proposed implementation in three different GPUs, achieving processing times as low as 6.39 ms and 6.51 ms for Ultra HD 4K I-type and B-type frames, respectively, corresponding to speedup factors as high as 18.9× and 16.5× over the HEVC Test Model (HM) version 11.0.},\n  keywords = {discrete cosine transforms;graphics processing units;video coding;OpenCL parallelization;HEVC dequantization;inverse transform;heterogeneous platforms;inverse discrete cosine transform;IDCT;GPU architectures;unified OpenCL implementations;memory accesses;GPU;ultra HD 4K I-type frames;B-type frames;HEVC test model;HM;Graphics processing units;Laplace equations;Decoding;Kernel;Video coding;Discrete cosine transforms;Video coding;HEVC;de-quantization;transform coefficient decoding;Graphics Processing Unit (GPU);parallel processing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925255.pdf},\n}\n\n
\n
\n\n\n
\n To tackle the growing demand for highly efficient implementations of video decoders on a vast set of heterogeneous platforms, a high-performance implementation of the HEVC de-quantization and inverse Discrete Cosine Transform (IDCT) modules is proposed. To efficiently exploit the several different GPU architectures currently available on these platforms, the proposed modules consist of unified OpenCL implementations, allowing their migration to and acceleration on any of the available devices of current heterogeneous platforms. To achieve this objective, the memory accesses were highly optimized and no synchronization points were required, in order to attain maximum performance. The presented experimental results evaluate the proposed implementation on three different GPUs, achieving processing times as low as 6.39 ms and 6.51 ms for Ultra HD 4K I-type and B-type frames, respectively, corresponding to speedup factors as high as 18.9× and 16.5× over the HEVC Test Model (HM) version 11.0.\n
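What makes this module GPU-friendly is that de-quantization is an independent multiply-and-shift per coefficient. The sketch below shows the HEVC-style arithmetic in vectorized form (Python rather than OpenCL, flat scaling list, simplified shift derivation); it illustrates the operation being parallelized, not the paper's kernel.

```python
import numpy as np

# HEVC-style levelScale table indexed by qp % 6.
LEVEL_SCALE = np.array([40, 45, 51, 57, 64, 72])

def dequantize(levels, qp, bit_depth=8, log2_tb_size=4):
    """Vectorized HEVC-style de-quantization of a block of quantized levels.

    Simplified: flat scaling list, no per-position scaling matrix.
    """
    shift = bit_depth + log2_tb_size - 5          # simplified shift derivation
    scale = LEVEL_SCALE[qp % 6] << (qp // 6)
    offset = 1 << (shift - 1)
    return (levels.astype(np.int64) * scale + offset) >> shift
```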
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive identification of sparse systems using the slim approach.\n \n \n \n \n\n\n \n Glentis, G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 760-764, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AdaptivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952231,\n  author = {G. Glentis},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive identification of sparse systems using the slim approach},\n  year = {2014},\n  pages = {760-764},\n  abstract = {In this paper, a novel time recursive implementation of the Sparse Learning via Iterative Minimization (SLIM) algorithm is proposed, in the context of adaptive system identification. The proposed scheme exhibits fast convergence and tracking ability at an affordable computational cost. Numerical simulations illustrate the achieved performance gain in comparison to other existing adaptive sparse system identification techniques.},\n  keywords = {adaptive signal processing;compressed sensing;identification;iterative methods;learning (artificial intelligence);minimisation;SLIM approach;sparse learning via iterative minimization algorithm;time recursive algorithm;numerical simulations;computational cost;fast convergence;tracking ability;adaptive sparse system identification techniques;compressing sensing;Signal processing algorithms;Radio frequency;Adaptive systems;Convergence;Context;Algorithm design and analysis;Signal processing;Adaptive system identification;Sparse systems;SLIM algorithm},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925467.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a novel time-recursive implementation of the Sparse Learning via Iterative Minimization (SLIM) algorithm is proposed in the context of adaptive system identification. The proposed scheme exhibits fast convergence and tracking ability at an affordable computational cost. Numerical simulations illustrate the achieved performance gain in comparison with other existing adaptive sparse system identification techniques.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Performance limits of dictionary learning for sparse coding.\n \n \n \n \n\n\n \n Jung, A.; Eldar, Y. C.; and Görtz, N.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 765-769, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"PerformancePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952232,\n  author = {A. Jung and Y. C. Eldar and N. Görtz},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Performance limits of dictionary learning for sparse coding},\n  year = {2014},\n  pages = {765-769},\n  abstract = {We consider the problem of dictionary learning under the assumption that the observed signals can be represented as sparse linear combinations of the columns of a single large dictionary matrix. In particular, we analyze the minimax risk of the dictionary learning problem which governs the mean squared error (MSE) performance of any learning scheme, regardless of its computational complexity. By following an established information-theoretic method based on Fano's inequality, we derive a lower bound on the minimax risk for a given dictionary learning problem. This lower bound yields a characterization of the sample-complexity, i.e., a lower bound on the required number of observations such that consistent dictionary learning schemes exist. Our bounds may be compared with the performance of a given learning scheme, allowing to characterize how far the method is from optimal performance.},\n  keywords = {computational complexity;encoding;mean square error methods;minimax techniques;sparse coding;sparse linear combinations;single-large-dictionary matrix;minimax risk;dictionary learning problem;mean squared error performance;MSE performance;computational complexity;information-theoretic method;Fano inequality;sample-complexity characterization;Dictionaries;Vectors;Indexes;Estimation;Mutual information;Signal to noise ratio;Compressed sensing;Dictionary Identification;Dictionary Learning;Big Data;Minimax Risk;Fano Inequality},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917473.pdf},\n}\n\n
\n
\n\n\n
\n We consider the problem of dictionary learning under the assumption that the observed signals can be represented as sparse linear combinations of the columns of a single large dictionary matrix. In particular, we analyze the minimax risk of the dictionary learning problem which governs the mean squared error (MSE) performance of any learning scheme, regardless of its computational complexity. By following an established information-theoretic method based on Fano's inequality, we derive a lower bound on the minimax risk for a given dictionary learning problem. This lower bound yields a characterization of the sample-complexity, i.e., a lower bound on the required number of observations such that consistent dictionary learning schemes exist. Our bounds may be compared with the performance of a given learning scheme, allowing to characterize how far the method is from optimal performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Separable cosparse Analysis Operator learning.\n \n \n \n \n\n\n \n Seibert, M.; Wörmann, J.; Gribonval, R.; and Kleinsteuber, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 770-774, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"SeparablePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952253,\n  author = {M. Seibert and J. Wörmann and R. Gribonval and M. Kleinsteuber},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Separable cosparse Analysis Operator learning},\n  year = {2014},\n  pages = {770-774},\n  abstract = {The ability of having a sparse representation for a certain class of signals has many applications in data analysis, image processing, and other research fields. Among sparse representations, the cosparse analysis model has recently gained increasing interest. Many signals exhibit a multidimensional structure, e.g. images or three-dimensional MRI scans. Most data analysis and learning algorithms use vectorized signals and thereby do not account for this underlying structure. The drawback of not taking the inherent structure into account is a dramatic increase in computational cost. We propose an algorithm for learning a cosparse Analysis Operator that adheres to the preexisting structure of the data, and thus allows for a very efficient implementation. This is achieved by enforcing a separable structure on the learned operator. Our learning algorithm is able to deal with multidimensional data of arbitrary order. We evaluate our method on volumetric data at the example of three-dimensional MRI scans.},\n  keywords = {biomedical MRI;data analysis;fast Fourier transforms;inverse transforms;learning (artificial intelligence);medical image processing;separable cosparse analysis operator learning;sparse representation;data analysis;vectorized signals;three-dimensional MRI scans;MAOL;multilinear algebra;geometric optimization;Tensile stress;Magnetic resonance imaging;Algorithm design and analysis;Image reconstruction;Analytical models;Noise;Signal processing algorithms;Cosparse Analysis Model;Analysis Operator Learning;Sparse Coding;Separable Filters},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924571.pdf},\n}\n\n
\n
\n\n\n
\n The ability to represent a certain class of signals sparsely has many applications in data analysis, image processing, and other research fields. Among sparse representations, the cosparse analysis model has recently gained increasing interest. Many signals exhibit a multidimensional structure, e.g., images or three-dimensional MRI scans. Most data analysis and learning algorithms use vectorized signals and thereby do not account for this underlying structure. The drawback of not taking the inherent structure into account is a dramatic increase in computational cost. We propose an algorithm for learning a cosparse Analysis Operator that adheres to the preexisting structure of the data, and thus allows for a very efficient implementation. This is achieved by enforcing a separable structure on the learned operator. Our learning algorithm is able to deal with multidimensional data of arbitrary order. We evaluate our method on volumetric data, using three-dimensional MRI scans as an example.\n
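The computational benefit of separability can be made concrete: instead of applying one large operator to the vectorized volume, a small operator is applied along each mode of the data tensor. A sketch under assumed shapes (the learned per-mode operators would come from the paper's algorithm; random ones stand in here):

```python
import numpy as np

def apply_separable(X, ops):
    """Apply one small analysis operator per tensor mode (Kronecker-structured operator)."""
    for mode, A in enumerate(ops):
        X = np.tensordot(A, X, axes=(1, mode))   # contract A with the current mode...
        X = np.moveaxis(X, 0, mode)              # ...and move the result axis back in place
    return X

rng = np.random.default_rng(3)
volume = rng.standard_normal((32, 32, 16))       # e.g., a small MRI volume (placeholder)
ops = [rng.standard_normal((48, 32)),            # illustrative per-mode operators
       rng.standard_normal((48, 32)),
       rng.standard_normal((24, 16))]
coeffs = apply_separable(volume, ops)            # analysis coefficients, shape (48, 48, 24)
# Equivalent, up to vectorization order, to multiplying vec(volume) by the Kronecker
# product of the three operators -- at a fraction of the memory and compute cost.
```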
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n K-LDA: An algorithm for learning jointly overcomplete and discriminative dictionaries.\n \n \n \n \n\n\n \n Golmohammady, J.; Joneidi, M.; Sadeghi, M.; Babaie-Zadeh, M.; and Jutten, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 775-779, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"K-LDA:Paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952254,\n  author = {J. Golmohammady and M. Joneidi and M. Sadeghi and M. Babaie-Zadeh and C. Jutten},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {K-LDA: An algorithm for learning jointly overcomplete and discriminative dictionaries},\n  year = {2014},\n  pages = {775-779},\n  abstract = {A new algorithm for learning jointly reconstructive and discriminative dictionaries for sparse representation (SR) is presented. While in a usual dictionary learning algorithm like K-SVD only the reconstructive aspect of the sparse representations is considered to learn a dictionary, in our proposed algorithm, which we call K-LDA, the discriminative aspect of the sparse representations is also addressed. In fact, K-LDA is an extension of K-SVD in the case that the class informations (labels) of the training data are also available. K-LDA takes into account these information in order to make the sparse representations more discriminate. It makes a trade-off between the amount of reconstruction error, sparsity, and discrimination of sparse representations. Simulation results on synthetic and hand-written data demonstrate the promising performance of our proposed algorithm.},\n  keywords = {signal processing;singular value decomposition;K-LDA;discriminative dictionaries;reconstructive dictionaries;sparse representation;dictionary learning algorithm;K-SVD;Dictionaries;Signal processing algorithms;Training data;Image reconstruction;Training;Vectors;Linear programming;Dictionary Learning;Singular Value Decomposition;Linear Discriminant Analysis;Discriminative Learning},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925501.pdf},\n}\n\n
\n
\n\n\n
\n A new algorithm for learning jointly reconstructive and discriminative dictionaries for sparse representation (SR) is presented. While a usual dictionary learning algorithm like K-SVD considers only the reconstructive aspect of the sparse representations when learning a dictionary, our proposed algorithm, which we call K-LDA, also addresses their discriminative aspect. In fact, K-LDA is an extension of K-SVD to the case where the class information (labels) of the training data is also available. K-LDA takes this information into account in order to make the sparse representations more discriminative. It makes a trade-off between the amount of reconstruction error, sparsity, and discrimination of the sparse representations. Simulation results on synthetic and handwritten data demonstrate the promising performance of our proposed algorithm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The atomic norm formulation of OSCAR regularization with application to the Frank-Wolfe algorithm.\n \n \n \n \n\n\n \n Zeng, X.; and Figueiredo, M. A. T.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 780-784, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952255,\n  author = {X. Zeng and M. A. T. Figueiredo},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {The atomic norm formulation of OSCAR regularization with application to the Frank-Wolfe algorithm},\n  year = {2014},\n  pages = {780-784},\n  abstract = {This paper proposes atomic norm formulation of octagonal shrinkage and clustering algorithm for regression (OSCAR) regularization. The OSCAR regularizer can be reformulated using a decreasing weighted sorted ℓ1 (DWSL1) norm (which is shown to be convex). We also show how, by exploiting an atomic norm formulation, the Ivanov regularization scheme involving the OSCAR regularizer can be handled using the Frank-Wolfe (also known as conditional gradient) method.},\n  keywords = {compressed sensing;gradient methods;pattern clustering;regression analysis;conditional gradient method;Frank-Wolfe method;Ivanov regularization scheme;DWSL1 norm;decreasing weighted sorted ℓ1 norm;OSCAR regularization;octagonal shrinkage and clustering algorithm for regression;atomic norm formulation;Bismuth;Signal processing algorithms;Vectors;Clustering algorithms;Gradient methods;Algorithm design and analysis;Convex functions;Group sparsity;atomic norm;Ivanov regularization;conditional gradient method;Frank-Wolfe algorithm},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926687.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes an atomic norm formulation of octagonal shrinkage and clustering algorithm for regression (OSCAR) regularization. The OSCAR regularizer can be reformulated using a decreasing weighted sorted ℓ1 (DWSL1) norm (which is shown to be convex). We also show how, by exploiting the atomic norm formulation, the Ivanov regularization scheme involving the OSCAR regularizer can be handled using the Frank-Wolfe (also known as conditional gradient) method.\n
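For reference, the DWSL1 reformulation is easy to state in code: sort the magnitudes in decreasing order and weight the i-th largest by λ1 + λ2(n − i). The λ values below are illustrative; the pairwise definition is included as a cross-check of the equivalence.

```python
import numpy as np

def oscar_norm(x, lam1=1.0, lam2=0.5):
    """OSCAR regularizer via its decreasing weighted sorted l1 (DWSL1) form."""
    mags = np.sort(np.abs(x))[::-1]                  # magnitudes, largest first
    n = len(mags)
    weights = lam1 + lam2 * (n - 1 - np.arange(n))   # i-th largest weighted by lam1 + lam2*(n - i)
    return float(weights @ mags)

def oscar_pairwise(x, lam1=1.0, lam2=0.5):
    """Original pairwise-max definition, for verifying the DWSL1 equivalence."""
    a = np.abs(x)
    pair = sum(max(a[i], a[j]) for i in range(len(a)) for j in range(i + 1, len(a)))
    return lam1 * a.sum() + lam2 * pair
```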
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cardinal sparse partial least square feature selection and its application in face recognition.\n \n \n \n \n\n\n \n Zhang, H.; Kiranyaz, S.; and Gabbouj, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 785-789, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"CardinalPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952256,\n  author = {H. Zhang and S. Kiranyaz and M. Gabbouj},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Cardinal sparse partial least square feature selection and its application in face recognition},\n  year = {2014},\n  pages = {785-789},\n  abstract = {Many modern computer vision systems combine high dimensional features and linear classifiers to achieve better classification accuracy. However, the excessively long features are often highly redundant; thus dramatically increases the system storage and computational load. This paper presents a novel feature selection algorithm, namely cardinal sparse partial least square algorithm, to address this deficiency in an effective way. The proposed algorithm is based on the sparse solution of partial least square regression. It aims to select a sufficiently large number of features, which can achieve good accuracy when used with linear classifiers. We applied the algorithm to a face recognition system and achieved the stateof- the-art results with significantly shorter feature vectors.},\n  keywords = {computer vision;face recognition;least squares approximations;cardinal sparse partial least square feature selection;computer vision systems;novel feature selection algorithm;partial least square algorithm;partial least square regression;face recognition system;Face;Face recognition;Computer vision;Databases;Vectors;Conferences;Signal processing algorithms;Feature selection;sparse partial least square;face recognition},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924329.pdf},\n}\n\n
\n
\n\n\n
\n Many modern computer vision systems combine high-dimensional features and linear classifiers to achieve better classification accuracy. However, the excessively long features are often highly redundant, which dramatically increases the system storage and computational load. This paper presents a novel feature selection algorithm, namely the cardinal sparse partial least square algorithm, to address this deficiency in an effective way. The proposed algorithm is based on the sparse solution of partial least square regression. It aims to select a sufficiently large number of features that achieve good accuracy when used with linear classifiers. We applied the algorithm to a face recognition system and achieved state-of-the-art results with significantly shorter feature vectors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fighting against forged documents by using textured image.\n \n \n \n \n\n\n \n Tkachenko, I.; Puech, W.; Strauss, O.; Gaudin, J. -.; Destruel, C.; and Guichard, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 790-794, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"FightingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952257,\n  author = {I. Tkachenko and W. Puech and O. Strauss and J. -. Gaudin and C. Destruel and C. Guichard},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Fighting against forged documents by using textured image},\n  year = {2014},\n  pages = {790-794},\n  abstract = {Verification of a document legitimacy is a current important problem. In this paper we propose to use a textured image containing a visual message, which can be used for identification of differences between printed legitimate document and printed fake document. The suggested textured image consists of specific patterns which should satisfy particular conditions in order to give good recognition results after Print-and-Scan (P&S) process. The identification of a legitimate document is possible by correlating the patterns of the textured image with either the original patterns or representative P&S process patterns. Several experimental results validate the proposed verification method.},\n  keywords = {image recognition;image texture;forged documents;textured image;document legitimacy verification;visual message;printed legitimate document;printed fake document;print-and-scan process;Visualization;Correlation;Pattern recognition;Printers;Authentication;Printing;pattern recognition;print-and-scan process;document legitimacy;correlation measure},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917463.pdf},\n}\n\n
\n
\n\n\n
\n Verification of document legitimacy is an important current problem. In this paper we propose to use a textured image containing a visual message, which can be used to identify differences between a printed legitimate document and a printed fake document. The suggested textured image consists of specific patterns which should satisfy particular conditions in order to give good recognition results after the Print-and-Scan (P&S) process. The identification of a legitimate document is possible by correlating the patterns of the textured image with either the original patterns or representative P&S process patterns. Several experimental results validate the proposed verification method.\n
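A hedged sketch of the correlation test the abstract describes: each scanned pattern is compared, via normalized correlation, against both the original pattern and a representative print-and-scan version of it. The threshold, decision rule, and pattern sources are illustrative, not the paper's.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized pattern images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def looks_legitimate(scanned, original, ps_reference, threshold=0.5):
    """Correlate against the original pattern and a representative P&S pattern (assumed rule)."""
    score = max(ncc(scanned, original), ncc(scanned, ps_reference))
    return score >= threshold
```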
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Color laser printer identification using photographed halftone images.\n \n \n \n \n\n\n \n Kim, D.; and Lee, H.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 795-799, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ColorPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952258,\n  author = {D. Kim and H. Lee},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Color laser printer identification using photographed halftone images},\n  year = {2014},\n  pages = {795-799},\n  abstract = {Due to the spread of color laser printers to the general public, numerous forgeries are made by color laser printers. Printer identification is essential to preventing damage caused by color laser printed forgeries. This paper presents a new method to identify a color laser printer using photographed halftone images. First, we preprocess the photographed images to extract the halftone pattern regardless of the variation of the illumination conditions. Then, 15 halftone texture features are extracted from the preprocessed images. A support vector machine is used to be trained and classify the extracted features. Experiments are performed on seven color laser printers. The experimental results show that the proposed method is suitable for identifying the source color laser printer using photographed images.},\n  keywords = {feature extraction;image classification;support vector machines;color laser printer identification;photographed halftone images;color laser printed forgeries;texture feature extraction;support vector machine;image classification;Feature extraction;Printers;Image color analysis;Printing;Discrete Fourier transforms;Lasers;Colored noise;Digital forensics;Printer identification;Discrete Fourier Transform},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924487.pdf},\n}\n\n
\n
\n\n\n
\n With the spread of color laser printers to the general public, numerous forgeries are made with them. Printer identification is essential to preventing damage caused by color-laser-printed forgeries. This paper presents a new method to identify a color laser printer using photographed halftone images. First, we preprocess the photographed images to extract the halftone pattern regardless of variations in the illumination conditions. Then, 15 halftone texture features are extracted from the preprocessed images. A support vector machine is trained to classify the extracted features. Experiments are performed on seven color laser printers. The experimental results show that the proposed method is suitable for identifying the source color laser printer from photographed images.\n
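The classification stage maps naturally onto a standard SVM pipeline. A sketch assuming the 15 halftone texture features per image have already been extracted (the feature extraction itself follows the paper's DFT-based preprocessing; the placeholder data and kernel settings below are illustrative):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row of 15 halftone texture features per photographed image (placeholder data);
# y: source printer label, one of seven printers.
rng = np.random.default_rng(4)
X = rng.standard_normal((700, 15))
y = rng.integers(0, 7, size=700)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:500], y[:500])
accuracy = clf.score(X[500:], y[500:])   # held-out identification accuracy
```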
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Authentication using graphical codes: Optimisation of the print and scan channels.\n \n \n \n \n\n\n \n Phan Ho, A.; Mai Hoang, B.; Sawaya, W.; and Bas, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 800-804, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AuthenticationPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952259,\n  author = {A. {Phan Ho} and B. {Mai Hoang} and W. Sawaya and P. Bas},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Authentication using graphical codes: Optimisation of the print and scan channels},\n  year = {2014},\n  pages = {800-804},\n  abstract = {In this paper we propose to cast the problem of authentication of printed documents using binary codes into an optimization game between the legitimate source and the opponent, each player tries to select the best print and scan channel to minimize/maximize his authentication performance. It is possible to solve this game by considering accurate computations of the type I and type II probability errors and by using additive stochastic processes to model the print and scan channel. Considering the print and scan models as Lognormal or Generalized gaussian additive processes, we maximize the authentication performances for two different security scenarios. The first one considers the opponent as passive and assumes that his print-and-scan channel is the same as the legitimate channel. The second scenario devises a minimax game where an active opponent tries to maximize the probability of non-detection by choosing appropriate parameters on his channel. Our first conclusions are the facts that (i) the authentication performance is better for dense noises than for sparse noises for both scenarios, and (ii) for both families of distribution, the opponent optimal parameters are close to the legitimate source parameters, and (iii) the legitimate source can find a configuration which maximizes the authentication performance.},\n  keywords = {binary codes;Gaussian processes;optimisation;probability;minimax game;generalized Gaussian additive processes;lognormal Gaussian additive processes;additive stochastic processes;probability errors;binary codes;printed documents;graphical codes;authentication;Authentication;Gaussian distribution;Games;Noise;Printing;Binary codes;Standards;Authentication;Hypothesis testing;minimax game;print and scan models},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925347.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we propose to cast the problem of authenticating printed documents using binary codes as an optimization game between the legitimate source and the opponent, in which each player tries to select the print and scan channel that minimizes/maximizes his authentication performance. It is possible to solve this game by accurately computing the type I and type II error probabilities and by using additive stochastic processes to model the print and scan channel. Modeling the print and scan channel as a Lognormal or Generalized Gaussian additive process, we maximize the authentication performance for two different security scenarios. The first one considers the opponent as passive and assumes that his print-and-scan channel is the same as the legitimate channel. The second scenario devises a minimax game where an active opponent tries to maximize the probability of non-detection by choosing appropriate parameters for his channel. Our first conclusions are that (i) the authentication performance is better for dense noises than for sparse noises in both scenarios, (ii) for both families of distributions, the opponent's optimal parameters are close to the legitimate source's parameters, and (iii) the legitimate source can find a configuration that maximizes the authentication performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimized HOG for on-road video based vehicle verification.\n \n \n \n \n\n\n \n Ballesteros, G.; and Salgado, L.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 805-809, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OptimizedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952260,\n  author = {G. Ballesteros and L. Salgado},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Optimized HOG for on-road video based vehicle verification},\n  year = {2014},\n  pages = {805-809},\n  abstract = {Vision-based object detection from a moving platform becomes particularly challenging in the field of advanced driver assistance systems (ADAS). In this context, onboard vision-based vehicle verification strategies become critical, facing challenges derived from the variability of vehicles appearance, illumination, and vehicle speed. In this paper, an optimized HOG configuration for onboard vehicle verification is proposed which not only considers its spatial and orientation resolution, but descriptor processing strategies and classification. An in-depth analysis of the optimal settings for HOG for onboard vehicle verification is presented, in the context of SVM classification with different kernels. In contrast to many existing approaches, the evaluation is realized in a public and heterogeneous database of vehicle and non-vehicle images in different areas of the road, rendering excellent verification rates that outperform other similar approaches in the literature.},\n  keywords = {computer vision;driver information systems;image classification;image resolution;support vector machines;video signal processing;on-road video based vehicle verification;vision-based object detection;advanced driver assistance systems;ADAS;onboard vision-based vehicle verification strategy;vehicle appearance variability;illumination;vehicle speed;optimized HOG configuration;orientation resolution;descriptor processing strategy;image classification;spatial resolution;SVM classification;heterogeneous database;public database;nonvehicle image database;vehicle image database;rendering;histograms of oriented gradients;Vehicles;Vehicle detection;Kernel;Feature extraction;Databases;Histograms;Standards;HOG;feature extraction;feature classification;video-based vehicle verification;O-HOG},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924423.pdf},\n}\n\n
\n
\n\n\n
\n Vision-based object detection from a moving platform becomes particularly challenging in the field of advanced driver assistance systems (ADAS). In this context, onboard vision-based vehicle verification strategies become critical, facing challenges derived from the variability of vehicle appearance, illumination, and vehicle speed. In this paper, an optimized HOG configuration for onboard vehicle verification is proposed which considers not only its spatial and orientation resolution, but also descriptor processing strategies and classification. An in-depth analysis of the optimal settings of HOG for onboard vehicle verification is presented, in the context of SVM classification with different kernels. In contrast to many existing approaches, the evaluation is carried out on a public and heterogeneous database of vehicle and non-vehicle images from different areas of the road, yielding excellent verification rates that outperform other similar approaches in the literature.\n
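A sketch of the descriptor computation being tuned, using scikit-image's HOG. The cell, block, and orientation values below are common defaults shown for illustration, not the optimized configuration the paper arrives at.

```python
import numpy as np
from skimage.feature import hog

patch = np.random.rand(64, 64)                 # candidate vehicle patch (placeholder)
features = hog(patch,
               orientations=9,                 # orientation resolution
               pixels_per_cell=(8, 8),         # spatial resolution
               cells_per_block=(2, 2),         # block-normalization neighborhood
               block_norm="L2-Hys")
# `features` then feeds an SVM (linear or kernel) for vehicle / non-vehicle verification.
```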
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Particle swarm optimization for blurred contour retrieval.\n \n \n \n \n\n\n \n Marot, J.; and Bourennane, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 810-814, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ParticlePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952261,\n  author = {J. Marot and S. Bourennane},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Particle swarm optimization for blurred contour retrieval},\n  year = {2014},\n  pages = {810-814},\n  abstract = {This paper concentrates on the estimation of linear and circular blurred contours in an image. To solve this problem, we start from recently investigated signal models, derived through the association of an array of virtual sensors and the image. The array is linear when linear blurred contours are expected, and circular when circular blurred contours are expected. For the first time in this paper, we propose a common array processing model for both types of contours, which makes their retrieval closer to each other. We propose a common criterion to minimize for the estimation of the contour parameters, and justify the usage of particle swarm optimization for its minimization. An application to fire characterization exemplifies our method.},\n  keywords = {array signal processing;image restoration;image retrieval;minimisation;parameter estimation;particle swarm optimisation;sensor arrays;particle swarm optimization;blurred contour retrieval;image linear blurred contour estimation;image circular blurred contour estimation;signal models;virtual sensor array;array processing model;contour parameter estimation minimization;Particle swarm optimization;Arrays;Vectors;Estimation;Sensors;Transforms;Active contours;Blurred contour;Sensor Array;Optimization;Fire surveillance},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924693.pdf},\n}\n\n
\n
\n\n\n
\n This paper concentrates on the estimation of linear and circular blurred contours in an image. To solve this problem, we start from recently investigated signal models, derived through the association of an array of virtual sensors and the image. The array is linear when linear blurred contours are expected, and circular when circular blurred contours are expected. For the first time in this paper, we propose a common array processing model for both types of contours, which makes their retrieval closer to each other. We propose a common criterion to minimize for the estimation of the contour parameters, and justify the usage of particle swarm optimization for its minimization. An application to fire characterization exemplifies our method.\n
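For readers unfamiliar with the optimizer, a minimal particle swarm loop over a generic contour-fit criterion looks like the sketch below. The criterion and `contour_criterion` name are placeholders for the paper's common contour criterion, and all hyperparameters are illustrative.

```python
import numpy as np

def pso(criterion, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization minimizing `criterion` over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_f = x.copy(), np.array([criterion(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([criterion(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# Hypothetical usage for a circular blurred contour (center_x, center_y, radius):
# lo, hi = np.array([0.0, 0.0, 5.0]), np.array([64.0, 64.0, 30.0])
# best = pso(lambda p: contour_criterion(image, p), (lo, hi))
```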
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Power minimization in the multiuser MIMO-OFDM broadcast channel with imperfect CSI.\n \n \n \n \n\n\n \n González-Coma, J. P.; Joham, M.; Castro, P. M.; and Castedo, L.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 815-819, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"PowerPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952262,\n  author = {J. P. González-Coma and M. Joham and P. M. Castro and L. Castedo},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Power minimization in the multiuser MIMO-OFDM broadcast channel with imperfect CSI},\n  year = {2014},\n  pages = {815-819},\n  abstract = {This work addresses the design of linear precoders and receivers in multiuser Multiple-Input Multiple-Output (MIMO) downlink channels using Orthogonal Frequency Division Multiplexing (OFDM) modulation when only partial Channel State Information (CSI) is available at the transmitter. Our aim is to minimize the total transmit power subject to per-user Quality-of-Service (QoS) constraints expressed as per-user rates. We propose a gradient-projection algorithm to optimally distribute the per-user rates among the OFDM subcarriers. Then, another algorithm is used to obtain the per-subcarrier precoders and receivers that minimize the overall transmit power. Based on the Minimum Mean Square Error (MMSE) duality between the MIMO Broadcast Channel (BC) and the MIMO Multiple Access Channel (MAC), both algorithms perform an Alternating Optimization (AO).},\n  keywords = {mean square error methods;MIMO communication;minimisation;OFDM modulation;precoding;quality of service;wireless channels;power minimization;multiuser MIMO-OFDM broadcast channel;imperfect CSI;linear precoders;linear receivers;multiuser multiple-input multiple-output downlink channels;MIMO downlink channels;orthogonal frequency division multiplexing;partial channel state information;Quality-of-Service;QoS constraints;gradient projection algorithm;OFDM subcarriers;minimum mean square error;MMSE;BC;MIMO multiple access channel;MAC;alternating optimization;AO;Receivers;MIMO;OFDM;Minimization;Quality of service;Signal processing algorithms;Optimization},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924971.pdf},\n}\n\n
\n
\n\n\n
\n This work addresses the design of linear precoders and receivers in multiuser Multiple-Input Multiple-Output (MIMO) downlink channels using Orthogonal Frequency Division Multiplexing (OFDM) modulation when only partial Channel State Information (CSI) is available at the transmitter. Our aim is to minimize the total transmit power subject to per-user Quality-of-Service (QoS) constraints expressed as per-user rates. We propose a gradient-projection algorithm to optimally distribute the per-user rates among the OFDM subcarriers. Then, another algorithm is used to obtain the per-subcarrier precoders and receivers that minimize the overall transmit power. Based on the Minimum Mean Square Error (MMSE) duality between the MIMO Broadcast Channel (BC) and the MIMO Multiple Access Channel (MAC), both algorithms perform an Alternating Optimization (AO).\n
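The gradient-projection step mentioned above requires a Euclidean projection onto the feasible set of nonnegative per-subcarrier rates summing to the user's target. A standard sorting-based projection (a generic building block, not necessarily the paper's exact variant) looks like this:

```python
import numpy as np

def project_to_scaled_simplex(r, total):
    """Euclidean projection onto {r >= 0, sum(r) = total} via the sorting method."""
    u = np.sort(r)[::-1]                     # candidate rates, largest first
    css = np.cumsum(u) - total
    rho = np.nonzero(u - css / (np.arange(len(r)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(r - theta, 0.0)

# One gradient-projection iteration for distributing a user's target rate:
# rates = project_to_scaled_simplex(rates - step * grad_power(rates), total=user_rate_target)
# (`grad_power` is a hypothetical gradient of transmit power w.r.t. the per-subcarrier rates.)
```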
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low complexity multiuser MIMO scheduling for weighted sum rate maximization.\n \n \n \n \n\n\n \n Venkatraman, G.; Tölli, A.; Janhunen, J.; and Juntti, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 820-824, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"LowPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952263,\n  author = {G. Venkatraman and A. Tölli and J. Janhunen and M. Juntti},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Low complexity multiuser MIMO scheduling for weighted sum rate maximization},\n  year = {2014},\n  pages = {820-824},\n  abstract = {The paper addresses user scheduling schemes for the multiuser multiple-input multiple-output (MU-MIMO) transmission with the objective of sum rate maximization (SRM) and the weighted counterpart in a single cell scenario. We propose a low complex product of independent projection displacements (PIPD) scheduling scheme, which performs the user selection for the MU-MIMO system with significantly lower complexity in comparison with the existing successive projections (SP) based designs. The PIPD scheme uses series of independent vector projections to evaluate the decision metrics. In addition, we also propose a heuristic algorithm of weighted scheduling, addressing the weighted sum rate maximization (WSRM) objective, which can be used with any scheduling algorithm. The performance of the weighted scheduling schemes are studied with the objective of minimizing the queues.},\n  keywords = {MIMO communication;minimisation;multiuser channels;queueing theory;scheduling;vectors;low complexity multiuser MIMO scheduling;weighted sum rate maximization;user scheduling schemes;multiuser multiple-input multiple-output transmission;MU-MIMO system;product-of-independent projection displacements scheduling scheme;PIPD scheduling scheme;user selection;independent vector projections;decision metrics;heuristic algorithm;queue minimization;radio access technologies;Measurement;Vectors;Complexity theory;Null space;Scheduling;MIMO;Scheduling algorithms},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922773.pdf},\n}\n\n
\n
\n\n\n
\n The paper addresses user scheduling schemes for multiuser multiple-input multiple-output (MU-MIMO) transmission with the objective of sum rate maximization (SRM), and its weighted counterpart, in a single-cell scenario. We propose a low-complexity product of independent projection displacements (PIPD) scheduling scheme, which performs user selection for the MU-MIMO system with significantly lower complexity than the existing successive projections (SP) based designs. The PIPD scheme uses a series of independent vector projections to evaluate the decision metrics. In addition, we also propose a heuristic weighted scheduling algorithm, addressing the weighted sum rate maximization (WSRM) objective, which can be used with any scheduling algorithm. The performance of the weighted scheduling schemes is studied with the objective of minimizing the queues.\n
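To place the complexity claim in context, the sketch below shows the SP-style greedy baseline that PIPD improves upon: each round selects the user whose channel has the largest component orthogonal to the already-selected users. It is an illustration of projection-based selection, not the PIPD metric itself.

```python
import numpy as np

def greedy_projection_scheduling(H, n_select):
    """Greedy user selection by orthogonal-projection residuals (SP-style baseline).

    H: (n_users, n_antennas) array of channel row vectors; n_select <= n_users.
    """
    selected, basis = [], []
    for _ in range(n_select):
        best, best_gain, best_r = None, -np.inf, None
        for u in range(H.shape[0]):
            if u in selected:
                continue
            r = H[u].astype(complex)
            for q in basis:                       # remove components along chosen users
                r = r - np.vdot(q, r) * q
            gain = np.linalg.norm(r)
            if gain > best_gain:
                best, best_gain, best_r = u, gain, r
        selected.append(best)
        basis.append(best_r / np.linalg.norm(best_r))
    return selected
```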
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the use of Zero Padding with discrete cosine transform Type-II in multicarrier communications.\n \n \n \n \n\n\n \n Domínguez-Jiménez, M. E.; Sansigre-Vidal, G.; and Cruz-Roldán, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 825-829, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952264,\n  author = {M. E. Domínguez-Jiménez and G. Sansigre-Vidal and F. Cruz-Roldán},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On the use of Zero Padding with discrete cosine transform Type-II in multicarrier communications},\n  year = {2014},\n  pages = {825-829},\n  abstract = {In this work, the problem of applying Zero Padding (ZP) as redundancy in multicarrier communications is addressed. To this goal, a general matrix formulation to recover the trans- mitted symbol when ZP is used, is provided for any kind of discrete transform employed at both the transmitter and the receiver. The obtained result not only generalizes some previously reported techniques, such as discrete Fourier transform-based transceivers, but it also allows to extend it to other kind of transforms (e.g., discrete trigonometric transforms). As a particular case study, the use of discrete cosine transform Type-II even (DCT2e) is analyzed. In this case, a simple structure that recover the transmitted symbol at the receiver is also shown. Additionally, the expressions of the one-tap per subcarrier coefficients, also using the DCT2e, are derived.},\n  keywords = {discrete cosine transforms;discrete Fourier transforms;matrix algebra;receivers;transmitters;zero padding;discrete cosine transform;multicarrier communications;ZP;general matrix formulation;transmitted symbol;transmitter;receiver;discrete Fourier transform;discrete trigonometric transforms;subcarrier coefficients;Discrete cosine transforms;Receivers;OFDM;Mirrors;Discrete Fourier transforms;Vectors;Multicarrier Modulation (MCM);Zero padding (ZP);Discrete Fourier Transform (DFT);Discrete Cosine Transform (DCT);Orthogonal Frequency-Division Multiplexing (OFDM)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569910487.pdf},\n}\n\n
\n
\n\n\n
\n In this work, the problem of applying Zero Padding (ZP) as redundancy in multicarrier communications is addressed. To this end, a general matrix formulation for recovering the transmitted symbol when ZP is used is provided for any kind of discrete transform employed at both the transmitter and the receiver. The obtained result not only generalizes some previously reported techniques, such as discrete Fourier transform-based transceivers, but also extends to other kinds of transforms (e.g., discrete trigonometric transforms). As a particular case study, the use of the discrete cosine transform Type-II even (DCT2e) is analyzed. In this case, a simple structure that recovers the transmitted symbol at the receiver is also shown. Additionally, the expressions for the one-tap-per-subcarrier coefficients, also using the DCT2e, are derived.\n
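A toy end-to-end sketch of the general matrix formulation being specialized, using the DCT-II: inverse transform at the transmitter, zero padding, channel convolution, and a generic least-squares inversion of the tall Toeplitz channel matrix at the receiver. The paper's contribution is precisely a simpler DCT2e-specific recovery structure and one-tap coefficients; the pseudo-inverse below is only the generic baseline, and all sizes and the channel are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.linalg import toeplitz

N, L = 16, 4                                   # subcarriers and ZP length (assumed)
rng = np.random.default_rng(5)
s = rng.choice([-1.0, 1.0], size=N)            # BPSK symbols on the DCT "subcarriers"

x = idct(s, type=2, norm="ortho")              # transmitter: inverse DCT-II
x_zp = np.concatenate([x, np.zeros(L)])        # zero padding as the only redundancy

h = np.array([1.0, 0.5, 0.25])                 # toy channel impulse response (order <= L)
y = np.convolve(x_zp, h)[: N + L]              # received block

# Generic receiver: invert the tall Toeplitz channel matrix, then apply the forward DCT-II.
H = toeplitz(np.r_[h, np.zeros(N + L - len(h))], np.r_[h[0], np.zeros(N - 1)])
x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
s_hat = dct(x_hat, type=2, norm="ortho")       # recovered symbols (exact in the noiseless case)
```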
Rate-adaptive secure HARQ protocol for block-fading channels. Mheich, Z.; Le Treust, M.; Alberge, F.; Duhamel, P.; and Szczecinski, L. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 830-834, Sep. 2014.
@InProceedings{6952265,\n  author = {Z. Mheich and M. {Le Treust} and F. Alberge and P. Duhamel and L. Szczecinski},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Rate-adaptive secure HARQ protocol for block-fading channels},\n  year = {2014},\n  pages = {830-834},\n  abstract = {This paper analyzes the achievable secrecy throughput in incremental redundancy secure HARQ protocols for communication over block-fading wiretap channels (WTC). The transmitter has no instantaneous channel state information (CSI) but can receive an outdated version of CSI from both legitimate receiver and eavesdropper through reliable multi-bit feedback channels. Using outdated CSI, the transmitter can adapt the coding rates. Since the transmitter cannot adapt the coding rates to the instantaneous channel conditions, we consider the outage performance of secure HARQ protocols. We show how to find the optimal rate-adaptation policies to maximize the secrecy throughput under constraints on outage probabilities. Numerical results for a Rayleigh-fading WTC show that the rate-adaptation using multilevel feedbacks provides important gains in secrecy throughput comparing to the non-adaptive model. The fact that the eavesdropper also feedbacks information may seem unrealistic, but obtained results can be understood as an upper limit of the possible secrecy throughput improvements.},\n  keywords = {automatic repeat request;channel coding;Rayleigh channels;telecommunication security;incremental redundancy;rate-adaptive secure HARQ protocol;secrecy throughput;hybrid automatic repeat request;eavesdropper;multilevel feedback;Rayleigh-fading WTC;optimal rate-adaptation policy;coding rates;transmitter;multibit feedback channels;channel state information;wiretap channels;block-fading channels;Throughput;Transmitters;Receivers;Protocols;Decoding;Redundancy;Encoding;HARQ;incremental redundancy;block fading;information-theoretic secrecy;rate adaptation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926603.pdf},\n}\n\n
This paper analyzes the achievable secrecy throughput of incremental-redundancy secure HARQ protocols for communication over block-fading wiretap channels (WTC). The transmitter has no instantaneous channel state information (CSI) but can receive an outdated version of the CSI from both the legitimate receiver and the eavesdropper through reliable multi-bit feedback channels. Using the outdated CSI, the transmitter can adapt the coding rates. Since the transmitter cannot adapt the coding rates to the instantaneous channel conditions, we consider the outage performance of secure HARQ protocols. We show how to find the optimal rate-adaptation policies that maximize the secrecy throughput under constraints on the outage probabilities. Numerical results for a Rayleigh-fading WTC show that rate adaptation using multilevel feedback provides important gains in secrecy throughput compared to the non-adaptive scheme. The fact that the eavesdropper also feeds back information may seem unrealistic, but the obtained results can be understood as an upper limit on the possible secrecy throughput improvements.
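A rough Monte Carlo illustration of the underlying secrecy-outage notion, in a deliberately simplified single-transmission setting (no incremental redundancy or rate adaptation, which are the paper's actual contributions):

import numpy as np

def secrecy_outage(snr_b, snr_e, Rs, n=200_000, seed=0):
    """Fraction of fading states where the secrecy capacity falls below Rs."""
    rng = np.random.default_rng(seed)
    g_b = rng.exponential(size=n)          # Rayleigh fading -> exponential power gains
    g_e = rng.exponential(size=n)
    Cs = np.maximum(np.log2(1 + snr_b * g_b) - np.log2(1 + snr_e * g_e), 0.0)
    return np.mean(Cs < Rs)

print(secrecy_outage(snr_b=10.0, snr_e=1.0, Rs=1.0))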
Embedded cross-decoding scheme for multiple description based distributed source coding. Ceulemans, B.; Satti, S. M.; Deligiannis, N.; Verbist, F.; and Munteanu, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 835-839, Sep. 2014.
@InProceedings{6952266,\n  author = {B. Ceulemans and S. M. Satti and N. Deligiannis and F. Verbist and A. Munteanu},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Embedded cross-decoding scheme for multiple description based distributed source coding},\n  year = {2014},\n  pages = {835-839},\n  abstract = {Using multiple description (MD) coding mechanisms, this paper proposes a novel coding framework for error-resilience in distributed source coding (DSC) in sensor networks. In particular, scalable source descriptions are first generated using a symmetric scalable MD scalar quantizer. These descriptions are then layered Wyner-Ziv (WZ) coded using low-density parity-check accumulate (LDPCA) -based syndrome binning. The decoder consists of two side decoders which attempt to iteratively decode their respective description at various LDPCA puncturing rates in the presence of a correlated side information. A central decoder exploits the inter-description correlation to further enhance the WZ rate-distortion performance when both descriptions are partially or fully received. In contrast to earlier work, our proposed decoding scheme also exploits the correlation that exists between bit-planes. Experimental simulations reveal that, for a Gaussian source, the proposed system yields a performance improvement of roughly 0.66 dB when compared to not exploiting inter-description correlations.},\n  keywords = {codecs;decoding;parity check codes;source coding;embedded cross-decoding scheme;multiple description based distributed source coding;multiple description coding mechanisms;coding framework;error-resilience;distributed source coding;sensor networks;symmetric scalable MD scalar quantizer;Wyner-Ziv coded;low-density parity-check accumulate;LDPCA-based syndrome binning;decoder;LDPCA puncturing rates;correlated side information;central decoder;inter-description correlation;WZ rate-distortion performance;Signal to noise ratio;Correlation;Source coding;Maximum likelihood decoding;Iterative decoding;multiple description coding;distributed source coding;cross-decoding;layered Wyner-Ziv coding},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917133.pdf},\n}\n\n
Using multiple description (MD) coding mechanisms, this paper proposes a novel coding framework for error resilience in distributed source coding (DSC) in sensor networks. In particular, scalable source descriptions are first generated using a symmetric scalable MD scalar quantizer. These descriptions are then layered Wyner-Ziv (WZ) coded using low-density parity-check accumulate (LDPCA)-based syndrome binning. The decoder consists of two side decoders which attempt to iteratively decode their respective descriptions at various LDPCA puncturing rates in the presence of correlated side information. A central decoder exploits the inter-description correlation to further enhance the WZ rate-distortion performance when both descriptions are partially or fully received. In contrast to earlier work, our proposed decoding scheme also exploits the correlation that exists between bit-planes. Experimental simulations reveal that, for a Gaussian source, the proposed system yields a performance improvement of roughly 0.66 dB compared to not exploiting inter-description correlations.
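The MD principle itself can be illustrated with a toy two-description scalar quantizer based on index splitting. This is only an assumed, minimal stand-in for the paper's scalable MD quantizer with LDPCA-based Wyner-Ziv coding on top:

import numpy as np

rng = np.random.default_rng(14)
x = rng.standard_normal(100_000)
step = 0.5
q = np.round(x / step).astype(int)       # central quantizer index
d1, d2 = (q + 1) // 2, q // 2            # split: q = d1 + d2, one index per description

x_central = (d1 + d2) * step             # central decoder sees both descriptions
x_side1 = (2 * d1 - 0.5) * step          # side decoder 1: q is 2*d1 or 2*d1 - 1
x_side2 = (2 * d2 + 0.5) * step          # side decoder 2: q is 2*d2 or 2*d2 + 1

for name, xr in [("central", x_central), ("side 1", x_side1), ("side 2", x_side2)]:
    print(f"{name} decoder MSE: {np.mean((x - xr) ** 2):.4f}")

The central decoder attains the fine-quantizer distortion, while either side decoder alone still reconstructs at a coarser but usable quality, which is exactly the error-resilience trade-off MD coding buys.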
Signal processing applications for cognitive networks: State of the art. de Carvalho, F. B. S.; Sousa, M. P.; Filho, J. V. S.; Rocha, J. S.; Lopes, W. T. A.; and Alencar, M. S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 840-844, Sep. 2014.
@InProceedings{6952267,\n  author = {F. B. S. {de Carvalho} and M. P. Sousa and J. V. S. Filho and J. S. Rocha and W. T. A. Lopes and M. S. Alencar},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Signal processing applications for cognitive networks: State of the art},\n  year = {2014},\n  pages = {840-844},\n  abstract = {Cognitive radio is one of the most promising techniques of wireless communications, due to its many applications. Cognitive networks have the capability to congregate different cognitive users via cooperative spectrum sensing. Examples of cognitive networks can be found in important and different applications, such as digital television and wireless sensor networks. The objective of this paper is to analyze how signal processing techniques are used to provide reliable performance in such networks. Applications of signal processing in cognitive networks are presented and detailed.},\n  keywords = {cognitive radio;cooperative communication;digital television;radio spectrum management;signal detection;signal processing;telecommunication network reliability;wireless sensor networks;signal processing applications;cognitive networks;cognitive radio;wireless communications;cooperative spectrum sensing;digital television;wireless sensor networks;performance reliability;Sensors;Cognitive radio;Wireless sensor networks;Standards;Smart grids;Signal processing;Cognitive Radio;Signal Processing;Spectrum Sensing;Cognitive Networks},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926731.pdf},\n}\n\n
Cognitive radio is one of the most promising techniques in wireless communications, owing to its many applications. Cognitive networks can bring together different cognitive users via cooperative spectrum sensing. Examples of cognitive networks can be found in important and diverse applications, such as digital television and wireless sensor networks. The objective of this paper is to analyze how signal processing techniques are used to provide reliable performance in such networks. Applications of signal processing in cognitive networks are presented and detailed.
Cognitive radio system with a two-user non-binary network-coded cooperative secondary network. Mafra, S. B.; Rayel, O. K.; Rebelatto, J. L.; and Souza, R. D. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 845-849, Sep. 2014.
@InProceedings{6952268,\n  author = {S. B. Mafra and O. K. Rayel and J. L. Rebelatto and R. D. Souza},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Cognitive radio system with a two-user non-binary network-coded cooperative secondary network},\n  year = {2014},\n  pages = {845-849},\n  abstract = {We investigate the performance of a network coding based secondary network in a cognitive radio system under spectrum sharing constraints. The secondary network is composed of two users that cooperate to transmit their information to a common secondary destination. The outage probability is analyzed under a given maximum interference constraint set by the primary network as well as to the maximum transmit power limit of the secondary users. Theoretical and numerical results show that the adequate use of network coding by the secondary network can provide significant gains in terms of outage probability and diversity order when compared to non cooperative or traditional cooperative techniques.},\n  keywords = {cognitive radio;cooperative communication;diversity reception;network coding;probability;radio spectrum management;radiofrequency interference;cognitive radio system;two-user nonbinary network-coded cooperative secondary network;spectrum sharing constraint;secondary destination;outage probability;maximum interference constraint set;primary network;maximum transmit power limit;secondary users;diversity order;noncooperative technique;traditional cooperative technique;Interference;Network coding;Receivers;Signal to noise ratio;Cognitive radio;Fading;Transmitters;Cognitive radio;cooperative communications;network coding;spectrum sharing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569911227.pdf},\n}\n\n
We investigate the performance of a network-coding-based secondary network in a cognitive radio system under spectrum sharing constraints. The secondary network is composed of two users that cooperate to transmit their information to a common secondary destination. The outage probability is analyzed under a given maximum interference constraint set by the primary network, as well as under the maximum transmit power limit of the secondary users. Theoretical and numerical results show that the adequate use of network coding by the secondary network can provide significant gains in terms of outage probability and diversity order when compared to non-cooperative or traditional cooperative techniques.
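The underlay power constraint at the heart of such analyses can be checked numerically. The sketch below assumes a single secondary link with no network coding, so it illustrates only the spectrum-sharing constraint, not the proposed cooperative scheme:

import numpy as np

def outage_underlay(P_max, I_max, R, n=200_000, seed=2):
    rng = np.random.default_rng(seed)
    g_ss = rng.exponential(size=n)          # secondary Tx -> secondary Rx gain
    g_sp = rng.exponential(size=n)          # secondary Tx -> primary Rx gain
    P = np.minimum(P_max, I_max / g_sp)     # respect the interference constraint
    rate = np.log2(1 + P * g_ss)            # unit-noise Shannon rate
    return np.mean(rate < R)                # outage: rate below the target R

print(outage_underlay(P_max=10.0, I_max=1.0, R=1.0))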
A spectrum sensing algorithm based on statistic tests for cognitive networks subject to fading. de Carvalho, F. B. S.; Rocha, J. S.; Lopes, W. T. A.; and Alencar, M. S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 850-854, Sep. 2014.
@InProceedings{6952269,\n  author = {F. B. S. {de Carvalho} and J. S. Rocha and W. T. A. Lopes and M. S. Alencar},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A spectrum sensing algorithm based on statistic tests for cognitive networks subject to fading},\n  year = {2014},\n  pages = {850-854},\n  abstract = {Cognitive radio is a viable technology for the next generation of wireless communications. The ability to sense the electromagnetic spectrum and to enable vacant bands to other users has been investigated in the past years. One important issue is the use of an efficient spectrum sensing algorithm to monitor the frequency band occupancy. Usually, the effects of fading are overseen in the analysis of those algorithms. This paper aims to evaluate the performance of a spectrum sensing algorithm based on Jarque-Bera test. Rayleigh fading is considered in this paper. Preliminary simulation results are provided, to demonstrate the potential of the proposed strategy.},\n  keywords = {cognitive radio;fading channels;radio spectrum management;Rayleigh channels;signal detection;spectrum sensing algorithm;statistic tests;cognitive networks;cognitive radio;wireless communications;electromagnetic spectrum;frequency band occupancy;Rayleigh fading;Sensors;Rayleigh channels;Cognitive radio;AWGN channels;Gaussian distribution;Cognitive Radio;Signal Processing;Spectrum Sensing;Statistic Tests;Jarque-Bera Test;Rayleigh Fading Channel},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925333.pdf},\n}\n\n
Cognitive radio is a viable technology for the next generation of wireless communications. The ability to sense the electromagnetic spectrum and make vacant bands available to other users has been investigated over the past years. One important issue is the use of an efficient spectrum sensing algorithm to monitor frequency band occupancy. Usually, the effects of fading are overlooked in the analysis of those algorithms. This paper evaluates the performance of a spectrum sensing algorithm based on the Jarque-Bera test, considering Rayleigh fading. Preliminary simulation results are provided to demonstrate the potential of the proposed strategy.
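A toy version of such a detector is easy to write down: under H0 the received samples are Gaussian noise, while a primary signal passed through Rayleigh fading changes the higher-order statistics, which the Jarque-Bera test picks up. The signal model here is an assumption for illustration; the paper's exact setup may differ.

import numpy as np
from scipy.stats import jarque_bera

rng = np.random.default_rng(3)
n = 4000
noise = rng.standard_normal(n)
signal = rng.rayleigh(size=n) * np.sign(rng.standard_normal(n))  # faded BPSK-like signal

for label, x in [("H0 (noise only)", noise), ("H1 (signal + noise)", signal + noise)]:
    res = jarque_bera(x)                      # tests skewness/kurtosis vs. Gaussian
    verdict = "occupied" if res.pvalue < 0.05 else "vacant"
    print(f"{label}: JB = {res.statistic:.1f}, p = {res.pvalue:.3g} -> {verdict}")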
Distributed cognitive radio systems with temperature-interference constraints and overlay scheme. Zazo, J.; Zazo, S.; and Valcarcel Macua, S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 855-859, Sep. 2014.
@InProceedings{6952270,\n  author = {J. Zazo and S. Zazo and S. {Valcarcel Macua}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Distributed cognitive radio systems with temperature-interference constraints and overlay scheme},\n  year = {2014},\n  pages = {855-859},\n  abstract = {Cognitive radio represents a promising paradigm to further increase transmission rates in wireless networks, as well as to facilitate the deployment of self-organized networks such as femtocells. Within this framework, secondary users (SU) may exploit the channel under the premise to maintain the quality of service (QoS) on primary users (PU) above a certain level. To achieve this goal, we present a noncooperative game where SU maximize their transmission rates, and may act as well as relays of the PU in order to hold their perceived QoS above the given threshold. In the paper, we analyze the properties of the game within the theory of variational inequalities, and provide an algorithm that converges to one Nash Equilibrium of the game. Finally, we present some simulations and compare the algorithm with another method that does not consider SU acting as relays.},\n  keywords = {cognitive radio;game theory;quality of service;radiofrequency interference;distributed cognitive radio systems;temperature-interference constraints;overlay scheme;transmission rate;wireless networks;self-organized network deployment;femtocells;secondary users;SU;quality of service;QoS;primary users;PU relays;noncooperative game;transmission rate maximization;variational inequalities;Nash equilibrium;Games;Interference;Jacobian matrices;Quality of service;Cognitive radio;Gain;Relays;Cognitive radio;variational inequalities;game theory;self-organized networks;small cells},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925383.pdf},\n}\n\n
Cognitive radio represents a promising paradigm for further increasing transmission rates in wireless networks, as well as for facilitating the deployment of self-organized networks such as femtocells. Within this framework, secondary users (SU) may exploit the channel under the premise of maintaining the quality of service (QoS) of primary users (PU) above a certain level. To achieve this goal, we present a noncooperative game in which SUs maximize their transmission rates and may also act as relays for the PUs in order to hold the perceived QoS above the given threshold. We analyze the properties of the game within the theory of variational inequalities and provide an algorithm that converges to a Nash equilibrium of the game. Finally, we present some simulations and compare the algorithm with another method that does not consider SUs acting as relays.
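Best-response dynamics of this kind can be sketched with classic iterative water-filling for two interfering users. This is an assumed, relay-free simplification; the paper's game additionally lets SUs relay PU traffic and is analyzed through variational inequalities.

import numpy as np

def waterfill(g_inv, P, iters=60):
    """Bisection on the water level mu; allocation p_k = max(mu - g_inv_k, 0)."""
    lo, hi = 0.0, np.max(g_inv) + P
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.sum(np.maximum(mu - g_inv, 0.0)) > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - g_inv, 0.0)

rng = np.random.default_rng(11)
K, P = 8, 1.0
h = rng.exponential(size=(2, K))            # direct channel gains of users 0 and 1
c = 0.2 * rng.exponential(size=(2, K))      # cross-gains into the other receiver

p = np.zeros((2, K))
for _ in range(50):                         # Gauss-Seidel best-response iterations
    for u in (0, 1):
        interference = 1.0 + c[1 - u] * p[1 - u]   # unit noise + other user's power
        p[u] = waterfill(interference / h[u], P)
print(np.round(p, 3))                       # a fixed point approximates a Nash equilibrium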
Voice segmentation system based on energy estimation. Rocha, R. B.; Freire, V. V.; and Alencar, M. S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 860-864, Sep. 2014.
@InProceedings{6952271,\n  author = {R. B. Rocha and V. V. Freire and M. S. Alencar},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Voice segmentation system based on energy estimation},\n  year = {2014},\n  pages = {860-864},\n  abstract = {Voice segmentation is used in speech recognition and system synthesis, as well as in phonetic voice encoders. This paper describes an implicit speech segmentation system, which aims to estimate the boundaries between phonemes in a locution. To find the segmentation marks, the proposed method initially locates reference borders between silent periods and phonemes, and vice versa measuring energy in short duration periods. The phonetic boundaries are found by means of energy encoding in the region delimited by the reference marks, which were initially detected. To evaluate the performance of the proposed system, an objective evaluation using 50 locutions was performed. The system detected 72.41% of the segmentation marks, in which, 77.6% were detected with an error less or equal to 10 ms and 22.4% of the boundaries were found with an error between 10 and 20 ms.},\n  keywords = {speech recognition;speech synthesis;voice segmentation system;energy estimation;speech recognition;speech system synthesis;phonetic voice encoders;implicit speech segmentation system;phonemes;silent periods;Speech;Speech recognition;Hidden Markov models;Acoustics;Databases;Speech processing;Manuals;Voice segmentation;energy detection;objective evaluation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925339.pdf},\n}\n\n
Voice segmentation is used in speech recognition and synthesis, as well as in phonetic voice encoders. This paper describes an implicit speech segmentation system which aims to estimate the boundaries between phonemes in an utterance. To find the segmentation marks, the proposed method first locates reference borders between silent periods and phonemes (and vice versa) by measuring energy over short time windows. The phonetic boundaries are then found by means of energy encoding in the region delimited by the initially detected reference marks. To evaluate the performance of the proposed system, an objective evaluation using 50 utterances was performed. The system detected 72.41% of the segmentation marks, of which 77.6% were detected with an error less than or equal to 10 ms and 22.4% with an error between 10 and 20 ms.
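A minimal sketch of the first step, assuming a simple short-time energy threshold; the paper's second, energy-encoding refinement stage is not reproduced:

import numpy as np

def energy_marks(x, fs, win_ms=10.0, rel_threshold=0.1):
    """Return candidate boundary times (s) where short-time energy crosses
    a threshold relative to the maximum frame energy."""
    win = int(fs * win_ms / 1000.0)
    n_frames = len(x) // win
    frames = x[:n_frames * win].reshape(n_frames, win)
    energy = np.sum(frames ** 2, axis=1)
    active = energy > rel_threshold * energy.max()
    flips = np.flatnonzero(np.diff(active.astype(int)))  # silence/speech transitions
    return (flips + 1) * win / fs

fs = 8000
t = np.arange(int(0.6 * fs)) / fs
x = np.where((t > 0.2) & (t < 0.4), np.sin(2 * np.pi * 220 * t), 0.0)
x += 0.01 * np.random.default_rng(4).standard_normal(len(t))
print(energy_marks(x, fs))   # expect marks near 0.2 s and 0.4 s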
Distributed parameter estimation with exponential family statistics: Asymptotic efficiency. Kar, S.; and Moura, J. M. F. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 865-869, Sep. 2014.
@InProceedings{6952272,\n  author = {S. Kar and J. M. F. Moura},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Distributed parameter estimation with exponential family statistics: Asymptotic efficiency},\n  year = {2014},\n  pages = {865-869},\n  abstract = {This paper studies the problem of distributed parameter estimation in multi-agent networks with exponential family observation statistics. Conforming to a given inter-agent communication topology, a distributed recursive estimator of the consensus-plus-innovations type is presented in which at every observation sampling epoch the network agents exchange a single round of messages with their communication neighbors and recursively update their local parameter estimates by simultaneously processing the received neighborhood data and the new information (innovation) embedded in the observation sample. Under global observability of the networked sensing model and mean connectivity of the inter-agent communication network, the proposed estimator is shown to yield consistent parameter estimates at each network agent. Furthermore, it is shown that the distributed estimator is asymptotically efficient, in that, the asymptotic covariances of the agent estimates coincide with that of the optimal centralized estimator, i.e., the inverse of the centralized Fisher information rate.},\n  keywords = {directed graphs;multi-agent systems;network theory (graphs);recursive estimation;statistical analysis;asymptotic efficiency;distributed parameter estimation problem;multiagent networks;exponential family observation statistics;inter-agent communication topology;distributed recursive estimator;consensus-plus-innovations type;observation sampling epoch;communication neighbors;received neighborhood data processing;global observability;networked sensing model;inter-agent communication network;asymptotic covariances;optimal centralized estimator;centralized Fisher information rate;Estimation;Sensors;Parameter estimation;Technological innovation;Stochastic processes;Observability;Optimization;Multi-agent networks;distributed estimation;exponential family;collaborative network processing;consensus;stochastic aproximation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926713.pdf},\n}\n\n
This paper studies the problem of distributed parameter estimation in multi-agent networks with exponential family observation statistics. Conforming to a given inter-agent communication topology, a distributed recursive estimator of the consensus-plus-innovations type is presented, in which at every observation sampling epoch the network agents exchange a single round of messages with their communication neighbors and recursively update their local parameter estimates by simultaneously processing the received neighborhood data and the new information (innovation) embedded in the observation sample. Under global observability of the networked sensing model and mean connectivity of the inter-agent communication network, the proposed estimator is shown to yield consistent parameter estimates at each network agent. Furthermore, the distributed estimator is shown to be asymptotically efficient, in that the asymptotic covariances of the agent estimates coincide with that of the optimal centralized estimator, i.e., the inverse of the centralized Fisher information rate.
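A toy instance of the consensus-plus-innovations recursion, assuming scalar Gaussian observations (a special case of the exponential family) on a ring network; the step-size exponents are illustrative choices, not the paper's exact schedule:

import numpy as np

rng = np.random.default_rng(5)
N, T, theta = 10, 3000, 1.5
x = np.zeros(N)                                              # local estimates
neighbors = [((i - 1) % N, (i + 1) % N) for i in range(N)]   # ring topology

for t in range(1, T + 1):
    y = theta + rng.standard_normal(N)             # one noisy observation per agent
    a, b = 0.25 / t ** 0.6, 1.0 / t                # consensus and innovation gains
    x_new = x.copy()
    for i in range(N):
        consensus = sum(x[j] - x[i] for j in neighbors[i])
        x_new[i] = x[i] + a * consensus + b * (y[i] - x[i])
    x = x_new
print(np.round(x, 3))                              # all agents close to theta = 1.5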
Nearest-neighbor estimation in sensor networks. Marano, S.; Matta, V.; and Willett, P. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 870-874, Sep. 2014.
@InProceedings{6952293,\n  author = {S. Marano and V. Matta and P. Willett},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Nearest-neighbor estimation in sensor networks},\n  year = {2014},\n  pages = {870-874},\n  abstract = {This contribution reviews some recent advances in the field of nearest-neighbor (NN) nonparametric estimation in sensor networks. Upon observing X0, the problem is to estimate the corresponding response variable Y0 by using the knowledge contained in a training set {(Xi, Yi)}in=1, made of n independent copies of (X0, Y0). In the distributed version of the problem, a network made of spatially distributed sensors and a common fusion center (FC) is considered. As X0 is made available at the FC, it is broadcast to all the sensors. Relying upon the locally available pair (Xi, Yi) and upon X0, sensor i sends a message containing Yi to the FC, or stays silent: only the few most informative response variables {Yi} should be sent, but no inter-sensor coordination is allowed. The analysis is asymptotic in the limit of large network size n and we show that, by means of a suitable ordered transmission policy, only a vanishing fraction of NN messages can be selected, yet preserving the consistency of the estimation even under communication constraints.},\n  keywords = {wireless sensor networks;sensor networks;nearest-neighbor nonparametric estimation;training set;distributed version;spatially-distributed sensors;common fusion center;intersensor coordination;network size;NN messages;communication constraint;Estimation;Random variables;Artificial neural networks;Training;Channel estimation;Noise measurement;Quantization (signal);Nearest Neighbor;Nonparametric Regression;Ordered Transmissions;Sensor Networks},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569910459.pdf},\n}\n\n
This contribution reviews some recent advances in the field of nearest-neighbor (NN) nonparametric estimation in sensor networks. Upon observing X_0, the problem is to estimate the corresponding response variable Y_0 by using the knowledge contained in a training set {(X_i, Y_i)}_{i=1}^{n}, made of n independent copies of (X_0, Y_0). In the distributed version of the problem, a network made of spatially distributed sensors and a common fusion center (FC) is considered. As X_0 is made available at the FC, it is broadcast to all the sensors. Relying upon the locally available pair (X_i, Y_i) and upon X_0, sensor i sends a message containing Y_i to the FC, or stays silent: only the few most informative response variables {Y_i} should be sent, but no inter-sensor coordination is allowed. The analysis is asymptotic in the limit of large network size n, and we show that, by means of a suitable ordered transmission policy, only a vanishing fraction of NN messages need be selected, while preserving the consistency of the estimation even under communication constraints.
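The censoring idea can be simulated centrally: the FC broadcasts X_0 and only the k nearest sensors report, leaving the k-NN estimate unchanged. This is an assumed, simplified version of the paper's ordered-transmission policy.

import numpy as np

rng = np.random.default_rng(6)
n, k = 1000, 5
X = rng.uniform(-1.0, 1.0, size=n)                        # sensor covariates
Y = np.sin(np.pi * X) + 0.1 * rng.standard_normal(n)      # sensor responses

x0 = 0.3                                                  # query broadcast by the FC
order = np.argsort(np.abs(X - x0))                        # sensors sorted by proximity
reporting = order[:k]                                     # only the k nearest transmit
y0_hat = Y[reporting].mean()                              # k-NN regression at the FC
print(f"estimate {y0_hat:.3f} vs true {np.sin(np.pi * x0):.3f}; messages: {k}/{n}")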
On-line detection and estimation of gaseous point sources using sensor networks. Agostinho, S.; and Gomes, J. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 875-879, Sep. 2014.
@InProceedings{6952294,\n  author = {S. Agostinho and J. Gomes},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On-line detection and estimation of gaseous point sources using sensor networks},\n  year = {2014},\n  pages = {875-879},\n  abstract = {The current work tackles the detection and localization of a diffusive point source, based on spatially distributed concentration measurements acquired through a sensor network. A model-based strategy is used, where the concentration field is modeled as a diffusive and advective-diffusive semi-infinite environment. We rely on hypothesis testing for source detection and maximum likelihood estimation for inference of the unknown parameters, providing Cramér-Rao Lower Bounds as benchmark. The (non-convex and multimodal) likelihood function is maximized through a Newton-Conjugate Gradient method, with an applied convex relaxation under steady-state assumptions to provide a suitable source position initialization. Detection is carried out resorting to a Generalized Likelihood Ratio Test. The framework's robustness is validated against a numerically simulated environment generated by the Toolbox of Level Set Methods, which provides data (loosely) consistent with the model.},\n  keywords = {distributed sensors;gradient methods;maximum likelihood estimation;Newton method;signal detection;source separation;statistical testing;online detection;gaseous point source estimation;sensor networks;diffusive point source localization;diffusive point source detection;spatially distributed concentration measurements;model-based strategy;concentration field;advective-diffusive semiinfinite environment;hypothesis testing;source detection;maximum likelihood estimation;unknown parameter inference;Cramér-Rao lower bounds;multimodal likelihood function;nonconvex likelihood function;convex relaxation;Newton-conjugate gradient method;steady-state assumptions;source position initialization;generalized likelihood ratio test;level set methods;Maximum likelihood estimation;Mathematical model;Vectors;Equations;Optimization;Position measurement;Diffusive Source Localization;Maximum Likelihood Estimator;Newton-Conjugate Gradient;Generalized Likelihood Ratio Test;Sensor Network},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925551.pdf},\n}\n\n
The current work tackles the detection and localization of a diffusive point source, based on spatially distributed concentration measurements acquired through a sensor network. A model-based strategy is used, where the concentration field is modeled as a diffusive and advective-diffusive semi-infinite environment. We rely on hypothesis testing for source detection and maximum likelihood estimation for inference of the unknown parameters, providing Cramér-Rao Lower Bounds as a benchmark. The (non-convex and multimodal) likelihood function is maximized through a Newton-Conjugate Gradient method, with a convex relaxation applied under steady-state assumptions to provide a suitable initialization of the source position. Detection is carried out by resorting to a Generalized Likelihood Ratio Test. The framework's robustness is validated against a numerically simulated environment generated by the Toolbox of Level Set Methods, which provides data (loosely) consistent with the model.
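A least-squares sketch of the estimation step, assuming a purely diffusive steady-state model c(r) = q / (4 pi D ||r - r0||) with D absorbed into q, and a derivative-free optimizer in place of the paper's Newton-Conjugate Gradient with convex-relaxation initialization:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(15)
sensors = rng.uniform(0.0, 10.0, size=(25, 2))
r0_true, q_true = np.array([4.0, 6.0]), 5.0

def field(r0, q):
    return q / (4.0 * np.pi * np.linalg.norm(sensors - r0, axis=1))

y = field(r0_true, q_true) + 0.002 * rng.standard_normal(len(sensors))

# Under white Gaussian noise, ML reduces to least squares over position and strength.
cost = lambda p: np.sum((y - field(p[:2], p[2])) ** 2)
res = minimize(cost, x0=np.array([5.0, 5.0, 1.0]), method='Nelder-Mead')
print("estimated [x, y, q]:", np.round(res.x, 2))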
Near-optimal sensor placement for signals lying in a union of subspaces. Badawy, D. E.; Ranieri, J.; and Vetterli, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 880-884, Sep. 2014.
@InProceedings{6952295,\n  author = {D. E. Badawy and J. Ranieri and M. Vetterli},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Near-optimal sensor placement for signals lying in a union of subspaces},\n  year = {2014},\n  pages = {880-884},\n  abstract = {Sensor networks are commonly deployed to measure data from the environment and accurately estimate certain parameters. However, the number of deployed sensors is often limited by several constraints, such as their cost. Therefore, their locations must be opportunely optimized to enhance the estimation of the parameters. In a previous work, we considered a low-dimensional linear model for the measured data and proposed a near-optimal algorithm to optimize the sensor placement. In this paper, we propose to model the data as a union of subspaces to further reduce the amount of sensors without degrading the quality of the estimation. Moreover, we introduce a greedy algorithm for the sensor placement for such a model and show the near-optimality of its solution. Finally, we verify with numerical experiments the advantage of the proposed model in reducing the number of sensors while maintaining intact the estimation performance.},\n  keywords = {estimation theory;greedy algorithms;parameter estimation;sensor placement;wireless sensor networks;near-optimal sensor placement;deployed sensors;parameter estimation;low-dimensional linear model;estimation quality;greedy algorithm;Inverse problems;Numerical models;Estimation;Cost function;Greedy algorithms;Signal processing algorithms;Approximation algorithms;Sensor placement;union of subspaces;frame potential},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925549.pdf},\n}\n\n
Sensor networks are commonly deployed to measure data from the environment and accurately estimate certain parameters. However, the number of deployed sensors is often limited by several constraints, such as their cost. Therefore, their locations must be carefully optimized to enhance the estimation of the parameters. In a previous work, we considered a low-dimensional linear model for the measured data and proposed a near-optimal algorithm to optimize the sensor placement. In this paper, we propose to model the data as a union of subspaces to further reduce the number of sensors without degrading the quality of the estimation. Moreover, we introduce a greedy algorithm for sensor placement under such a model and show the near-optimality of its solution. Finally, we verify with numerical experiments the advantage of the proposed model in reducing the number of sensors while keeping the estimation performance intact.
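As a concrete reference point, here is a greedy placement sketch in the spirit of frame-potential minimization for the plain linear model (the authors' earlier setting); the union-of-subspaces extension proposed in the paper is not reproduced:

import numpy as np

def greedy_placement(Psi, m):
    """Psi: (N, K) measurement matrix, one row per candidate location.
    Greedily keep m rows, at each step removing the row whose removal most
    decreases the frame potential FP = sum_ij |<psi_i, psi_j>|^2."""
    rows = list(range(Psi.shape[0]))
    while len(rows) > m:
        G = Psi[rows] @ Psi[rows].T
        # FP decrease when dropping row i equals 2*sum_j G_ij^2 - G_ii^2
        fp_drop = [2 * np.sum(G[i] ** 2) - G[i, i] ** 2 for i in range(len(rows))]
        rows.pop(int(np.argmax(fp_drop)))
    return rows

rng = np.random.default_rng(7)
Psi = rng.standard_normal((30, 5))
print(sorted(greedy_placement(Psi, 8)))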
Reconstructing diffusion fields sampled with a network of arbitrarily distributed sensors. Murray-Bruce, J.; and Dragotti, P. L. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 885-889, Sep. 2014.
@InProceedings{6952296,\n  author = {J. Murray-Bruce and P. L. Dragotti},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Reconstructing diffusion fields sampled with a network of arbitrarily distributed sensors},\n  year = {2014},\n  pages = {885-889},\n  abstract = {Sensor networks are becoming increasingly prevalent for monitoring physical phenomena of interest. For such wireless sensor network applications, knowledge of node location is important. Although a uniform sensor distribution is common in the literature, it is normally difficult to achieve in reality. Thus we propose a robust algorithm for reconstructing two-dimensional diffusion fields, sampled with a network of arbitrarily placed sensors. The two-step method proposed here is based on source parameter estimation: in the first step, by properly combining the field sensed through well-chosen test functions, we show how Prony's method can reveal locations and intensities of the sources inducing the field. The second step then uses a modification of the Cauchy-Schwarz inequality to estimate the activation time in the single source field. We combine these steps to give a multi-source field estimation algorithm and carry out extensive numerical simulations to evaluate its performance.},\n  keywords = {wireless sensor networks;arbitrarily-distributed sensor network;physical phenomena monitoring;wireless sensor network application;node location;uniform sensor distribution;two-dimensional diffusion field reconstruction;arbitrarily-placed sensor network;source parameter estimation;well-chosen test functions;Prony method;Cauchy-Schwarz inequality;activation time estimation;multisource field estimation algorithm;Estimation;Equations;Mathematical model;Noise;Noise measurement;Sensor phenomena and characterization;Spatio-temporal sampling;sensor networks;diffusion process;reciprocity gap;Prony's method},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922357.pdf},\n}\n\n
Sensor networks are becoming increasingly prevalent for monitoring physical phenomena of interest. For such wireless sensor network applications, knowledge of node location is important. Although a uniform sensor distribution is common in the literature, it is normally difficult to achieve in reality. Thus we propose a robust algorithm for reconstructing two-dimensional diffusion fields, sampled with a network of arbitrarily placed sensors. The two-step method proposed here is based on source parameter estimation: in the first step, by properly combining the field sensed through well-chosen test functions, we show how Prony's method can reveal locations and intensities of the sources inducing the field. The second step then uses a modification of the Cauchy-Schwarz inequality to estimate the activation time in the single source field. We combine these steps to give a multi-source field estimation algorithm and carry out extensive numerical simulations to evaluate its performance.
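The core Prony step can be illustrated in isolation, in an assumed noiseless setting with a plain sum of complex exponentials rather than the paper's generalized field measurements: the samples obey a K-term linear recursion whose characteristic polynomial has the modes as roots.

import numpy as np

rng = np.random.default_rng(8)
K, M = 2, 12
true_modes = np.array([0.9 * np.exp(1j * 0.7), 0.95 * np.exp(-1j * 1.9)])
amps = np.array([1.0, 0.6])
s = np.array([np.sum(amps * true_modes ** m) for m in range(M)])

# Annihilating filter: s[m] + c1*s[m-1] + ... + cK*s[m-K] = 0 for m >= K.
A = np.column_stack([s[K - 1 - k : M - 1 - k] for k in range(K)])
b = -s[K:M]
coeffs = np.linalg.lstsq(A, b, rcond=None)[0]
modes = np.roots(np.concatenate(([1.0], coeffs)))   # roots of z^K + c1 z^(K-1) + ...
print(np.sort_complex(modes))
print(np.sort_complex(true_modes))                  # recovered exactly (noiseless)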
Dynamic range reduction of audio signals using multiple allpass filters on a GPU accelerator. Belloch, J. A.; Parker, J.; Savioja, L.; Gonzalez, A.; and Välimäki, V. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 890-894, Sep. 2014.
@InProceedings{6952297,\n  author = {J. A. Belloch and J. Parker and L. Savioja and A. Gonzalez and V. Välimäki},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Dynamic range reduction of audio signals using multiple allpass filters on a GPU accelerator},\n  year = {2014},\n  pages = {890-894},\n  abstract = {Maximising loudness of audio signals by restricting their dynamic range has become an important issue in audio signal processing. Previous works indicate that an allpass filter chain can reduce the peak amplitude of an audio signal, without introducing the distortion associated with traditional non-linear techniques. Because of large search space and the consequential demand of the computational needs, the previous work selected randomly the delay-line lengths and fixed the filter coefficient values. In this work, we run on a GPU accelerator multiple allpass filter chains in parallel that cover all relevant delay-line lengths and perform a wide search on possible coefficient values in order to get closer to the optimal choice. Our most exhaustive method, which tests about 29 million parameter combinations, reduced the amplitude of test signals by 23% to 31%, whereas the previous work could only achieve a reduction of 23% at best.},\n  keywords = {all-pass filters;audio signal processing;graphics processing units;dynamic range reduction;audio signals;allpass filters;GPU accelerator;audio signal processing;Graphics processing units;Dynamic range;System-on-chip;Instruction sets;Delay lines;Multicore processing;Signal processing;Audio systems;digital filters;parallel architectures;parallel processing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569918017.pdf},\n}\n\n
Maximising the loudness of audio signals by restricting their dynamic range has become an important issue in audio signal processing. Previous work indicates that an allpass filter chain can reduce the peak amplitude of an audio signal without introducing the distortion associated with traditional non-linear techniques. Because of the large search space and the resulting computational demands, previous work selected the delay-line lengths randomly and fixed the filter coefficient values. In this work, we run multiple allpass filter chains in parallel on a GPU accelerator, covering all relevant delay-line lengths and performing a wide search over possible coefficient values in order to get closer to the optimal choice. Our most exhaustive method, which tests about 29 million parameter combinations, reduced the amplitude of test signals by 23% to 31%, whereas the previous work achieved a reduction of at most 23%.
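A small-scale, CPU-only sketch of the search (assumed toy parameters; the paper's GPU search covers on the order of 29 million combinations): each Schroeder allpass section is phase-only, so the cascade can lower the peak of a phase-aligned signal without non-linear distortion.

import numpy as np
from scipy.signal import lfilter

def allpass_chain(x, delays, g):
    """Cascade of Schroeder allpass sections H(z) = (g + z^-D) / (1 + g z^-D)."""
    for D in delays:
        b = np.zeros(D + 1)
        b[0], b[-1] = g, 1.0
        a = np.zeros(D + 1)
        a[0], a[-1] = 1.0, g
        x = lfilter(b, a, x)
    return x

# Phase-aligned multitone: a worst-case, high-crest-factor test signal.
t = np.arange(4000)
x = np.sum([np.cos(2 * np.pi * k * t / 4000.0) for k in range(20, 220, 7)], axis=0)
x /= np.max(np.abs(x))

best = (np.inf, None)
for g in (0.3, 0.5, 0.7):                                  # coarse coefficient grid
    for delays in ((7, 11, 13), (5, 17, 23), (9, 19, 29)): # a few delay-length choices
        peak = np.max(np.abs(allpass_chain(x, delays, g)))
        best = min(best, (peak, (g, delays)))
print(f"peak 1.000 -> {best[0]:.3f} with (g, delays) = {best[1]}")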
Near-field localization of audio: A maximum likelihood approach. Jensen, J. R.; and Christensen, M. G. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 895-899, Sep. 2014.
@InProceedings{6952298,\n  author = {J. R. Jensen and M. G. Christensen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Near-field localization of audio: A maximum likelihood approach},\n  year = {2014},\n  pages = {895-899},\n  abstract = {Localization of audio sources using microphone arrays has been an important research problem for more than two decades. Many traditional methods for solving the problem are based on a two-stage procedure: first, information about the audio source, such as time differences-of-arrival (TDOAs) and gain ratios-of-arrival (GROAs) between microphones is estimated, and, second, this knowledge is used to localize the audio source. These methods often have a low computational complexity, but this comes at the cost of a limited estimation accuracy. Therefore, we propose a new localization approach, where the desired signal is modeled using TDOAs and GROAs, which are determined by the source location. This facilitates the derivation of one-stage, maximum likelihood methods under a white Gaussian noise assumption that is applicable in both near- and far-field scenarios. Simulations show that the proposed method is statistically efficient and outperforms state-of-the-art estimators in most scenarios, involving both synthetic and real data.},\n  keywords = {audio signal processing;computational complexity;Gaussian noise;maximum likelihood estimation;microphone arrays;time-of-arrival estimation;state-of-the-art estimators;white Gaussian noise;computational complexity;gain ratios-of-arrival;time differences-of-arrival;microphone arrays;audio sources localization;maximum likelihood;audio near-field localization;Microphones;Direction-of-arrival estimation;Noise;Speech;Harmonic analysis;Maximum likelihood estimation;Audio localization;microphone array;maximum likelihood;near-field;time difference-of-arrival;gain ratio-of-arrival},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921259.pdf},\n}\n\n
Localization of audio sources using microphone arrays has been an important research problem for more than two decades. Many traditional methods for solving the problem are based on a two-stage procedure: first, information about the audio source, such as time differences-of-arrival (TDOAs) and gain ratios-of-arrival (GROAs) between microphones, is estimated, and, second, this knowledge is used to localize the audio source. These methods often have a low computational complexity, but this comes at the cost of limited estimation accuracy. We therefore propose a new localization approach, in which the desired signal is modeled using TDOAs and GROAs, which are determined by the source location. This facilitates the derivation of one-stage, maximum likelihood methods under a white Gaussian noise assumption that are applicable in both near- and far-field scenarios. Simulations show that the proposed method is statistically efficient and outperforms state-of-the-art estimators in most scenarios, involving both synthetic and real data.
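A grid-search sketch of the one-stage idea, assuming a narrowband free-field model in which the source position determines both the phases (TDOAs) and the 1/d gains (GROAs); with an unknown waveform, the ML criterion reduces to a matched-beamformer output power:

import numpy as np

rng = np.random.default_rng(12)
c, f = 343.0, 1000.0
mics = np.stack([np.arange(6) * 0.05, np.zeros(6)], axis=1)   # small linear array
src = np.array([0.40, 0.30])                                  # near-field source

def steer(r):
    d = np.linalg.norm(mics - r, axis=1)
    return np.exp(-2j * np.pi * f * d / c) / d    # TDOA (phase) and GROA (1/d gain)

T = 64
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
noise = 0.01 * (rng.standard_normal((6, T)) + 1j * rng.standard_normal((6, T)))
X = np.outer(steer(src), s) + noise

# ML with unknown waveform: maximise |a(r)^H X|^2 / ||a(r)||^2 over positions r.
grid = np.linspace(0.05, 1.0, 96)
best = (-1.0, None)
for gx in grid:
    for gy in grid:
        a = steer(np.array([gx, gy]))
        val = np.linalg.norm(a.conj() @ X) ** 2 / np.linalg.norm(a) ** 2
        if val > best[0]:
            best = (val, (gx, gy))
print("estimated position:", np.round(best[1], 2), "true:", src)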
DOA and pitch estimation of audio sources using IAA-based filtering. Jensen, J. R.; and Christensen, M. G. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 900-904, Sep. 2014.
@InProceedings{6952299,\n  author = {J. R. Jensen and M. G. Christensen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {DOA and pitch estimation of audio sources using IAA-based filtering},\n  year = {2014},\n  pages = {900-904},\n  abstract = {For decades, it has been investigated how to separately solve the problems of both direction-of-arrival (DOA) and pitch estimation. Recently, it was found that estimating these parameters jointly from multichannel recordings of audio can be extremely beneficial. Many joint estimators are based on knowledge of the inverse sample covariance matrix. Typically, this covariance is estimated using the sample covariance matrix, but for this estimate to be full rank, many temporal samples are needed. In cases with non-stationary signals, this is a serious limitation. We therefore investigate how a recent joint DOA and pitch filtering-based estimator can be combined with the iterative adaptive approach to circumvent this limitation in joint DOA and pitch estimation of audio sources. Simulations show a clear improvement compared to when using the sample covariance matrix and the considered approach also outperforms other state-of-the-art methods. Finally, the applicability of the considered approach is verified on real speech.},\n  keywords = {audio signal processing;covariance matrices;direction-of-arrival estimation;filtering theory;iterative methods;microphone arrays;DOA;pitch estimation;IAA-based filtering;direction-of-arrival estimation;multichannel recordings;inverse sample covariance matrix;filtering-based estimator;iterative adaptive approach;audio sources;covariance matrix;iterative adaptive approach;Direction-of-arrival estimation;Estimation;Joints;Covariance matrices;Microphones;Speech;Harmonic analysis;Direction-of-arrival;fundamental frequency;linearly constrained minimum variance;iterative adaptive approach;high resolution},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921261.pdf},\n}\n\n
For decades, the problems of direction-of-arrival (DOA) and pitch estimation have been investigated separately. Recently, it was found that estimating these parameters jointly from multichannel recordings of audio can be extremely beneficial. Many joint estimators are based on knowledge of the inverse sample covariance matrix. Typically, this covariance is estimated using the sample covariance matrix, but for this estimate to be full rank, many temporal samples are needed; in cases with non-stationary signals, this is a serious limitation. We therefore investigate how a recent joint DOA and pitch filtering-based estimator can be combined with the iterative adaptive approach to circumvent this limitation in joint DOA and pitch estimation of audio sources. Simulations show a clear improvement over using the sample covariance matrix, and the considered approach also outperforms other state-of-the-art methods. Finally, the applicability of the considered approach is verified on real speech.
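A bare-bones IAA spectrum for DOA alone (an assumed stand-in for the paper's joint DOA/pitch estimator) shows why the approach helps with few snapshots: it iterates on a model covariance built from the current spectrum instead of a rank-deficient sample covariance.

import numpy as np

rng = np.random.default_rng(13)
M, T = 8, 10                                   # few snapshots: T close to M
angles = np.deg2rad(np.arange(-90, 91, 2))
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))   # half-wavelength ULA

doa_true = np.deg2rad([-20.0, 35.0])
At = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(doa_true)))
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
X = At @ S + 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

P = np.mean(np.abs(A.conj().T @ X) ** 2, axis=1) / M**2    # beamformer initialisation
for _ in range(15):
    R = (A * P) @ A.conj().T + 1e-9 * np.eye(M)            # model covariance, not sample
    RiA = np.linalg.solve(R, A)
    s = (RiA.conj().T @ X) / np.real(np.sum(A.conj() * RiA, axis=0))[:, None]
    P = np.mean(np.abs(s) ** 2, axis=1)

peaks = [i for i in range(1, len(P) - 1) if P[i] >= P[i - 1] and P[i] > P[i + 1]]
top2 = sorted(peaks, key=lambda i: P[i])[-2:]
print("estimated DOAs (deg):", sorted(np.rad2deg(angles[top2])))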
Detecting sound objects in audio recordings. Kumar, A.; Singh, R.; and Raj, B. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 905-909, Sep. 2014.
@InProceedings{6952300,\n  author = {A. Kumar and R. Singh and B. Raj},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Detecting sound objects in audio recordings},\n  year = {2014},\n  pages = {905-909},\n  abstract = {In this paper we explore the idea of defining sound objects and how they may be detected. We try to define sound objects and demonstrate by our experiments the existence of these objects. Most of current works on acoustic event detection focus on detecting a finite set of audio events and the detection of a generic object in sound is not done. The major reason for proposing the idea of sound objects is to work with a generic sound concept instead of working with a small set of acoustic events for detection as is the norm. Our definition tries to conform to notions present in human auditory perception. Our experimental results are promising, and show that the idea of sound objects is worth pursuing and that it could give a new direction to semi-supervised or unsupervised learning of acoustic event detection mechanisms.},\n  keywords = {audio recording;detecting sound objects;audio recordings;acoustic event detection;audio events;finite set detection;generic object;generic sound concept;human auditory perception;unsupervised learning;semi-supervised learning;acoustic event detection mechanisms;Event detection;Vectors;Visualization;Detectors;Mel frequency cepstral coefficient;Computational modeling;Sound Objects;Acoustic Event Detection},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922657.pdf},\n}\n\n
In this paper we explore the idea of defining sound objects and how they may be detected. We attempt to define sound objects and demonstrate their existence through our experiments. Most current work on acoustic event detection focuses on detecting a finite set of audio events; the detection of generic objects in sound has not been addressed. The major reason for proposing the idea of sound objects is to work with a generic notion of sound instead of a small set of acoustic events, as is the norm. Our definition tries to conform to notions present in human auditory perception. Our experimental results are promising and show that the idea of sound objects is worth pursuing, and that it could give a new direction to semi-supervised or unsupervised learning of acoustic event detection mechanisms.
Towards fully uncalibrated room reconstruction with sound. Crocco, M.; Trucco, A.; Murino, V.; and Bue, A. D. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 910-914, Sep. 2014.
@InProceedings{6952301,\n  author = {M. Crocco and A. Trucco and V. Murino and A. D. Bue},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Towards fully uncalibrated room reconstruction with sound},\n  year = {2014},\n  pages = {910-914},\n  abstract = {This paper presents a novel approach for room reconstruction using unknown sound signals generated in different locations of the environment. The approach is very general, that is fully uncalibrated, i.e. the locations of microphones, sound events and room reflectors are not known a priori. We show that, even if this problem implies a highly non-linear cost function, it is still possible to provide a solution close to the global minimum. Synthetic experiments show the proposed optimization framework can achieve reasonable results even in the presence of signal noise.},\n  keywords = {architectural acoustics;optimisation;optimization framework;signal noise;unknown sound signals;uncalibrated room reconstruction;Delays;Microphones;Cost function;Acoustics;Three-dimensional displays;Speech;Room reconstruction;microphone calibration;source localization;simulated annealing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922795.pdf},\n}\n\n
This paper presents a novel approach to room reconstruction using unknown sound signals generated at different locations in the environment. The approach is very general, that is, fully uncalibrated: the locations of the microphones, the sound events, and the room reflectors are not known a priori. We show that, even though this problem implies a highly non-linear cost function, it is still possible to provide a solution close to the global minimum. Synthetic experiments show that the proposed optimization framework can achieve reasonable results even in the presence of signal noise.
Efficient representation of head-related transfer functions in subbands. Marelli, D.; Baumgartner, R.; and Majdak, P. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 915-919, Sep. 2014.
@InProceedings{6952302,\n  author = {D. Marelli and R. Baumgartner and P. Majdak},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient representation of head-related transfer functions in subbands},\n  year = {2014},\n  pages = {915-919},\n  abstract = {Head-related transfer functions (HRTFs) describe the acoustic filtering of incoming sounds by the human morphology. We propose three algorithms for representing HRTFs in subbands, i.e., as an analysis filterbank (FB) followed by a transfer matrix and a synthesis FB. These algorithms can be combined to achieve different design objectives. In the first algorithm, the choice of FBs is fixed, and a sparse approximation procedure minimizes the complexity of the transfer matrix associated to each HRTF. The other two algorithms jointly optimize the FBs and transfer matrices. The first variant aims at minimizing the complexity of the transfer matrices, while the second one does it for the FBs. Numerical experiments show that the proposed methods offer significant computational savings when compared with other available approaches.},\n  keywords = {acoustic filters;acoustic signal processing;approximation theory;channel bank filters;filtering theory;signal representation;signal synthesis;sparse matrices;transfer function matrices;head-related transfer function representation;HRTFs;acoustic filtering;human morphology;filter bank analysis;FB synthesis;transfer matrix;sparse approximation procedure;subbands;Algorithm design and analysis;Indexes;Complexity theory;Approximation methods;Signal processing algorithms;Approximation algorithms;Transfer functions;Head-related transfer functions;subband signal processing;sparse approximation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922973.pdf},\n}\n\n
Head-related transfer functions (HRTFs) describe the acoustic filtering of incoming sounds by the human morphology. We propose three algorithms for representing HRTFs in subbands, i.e., as an analysis filterbank (FB) followed by a transfer matrix and a synthesis FB. These algorithms can be combined to achieve different design objectives. In the first algorithm, the choice of FBs is fixed, and a sparse approximation procedure minimizes the complexity of the transfer matrix associated to each HRTF. The other two algorithms jointly optimize the FBs and transfer matrices. The first variant aims at minimizing the complexity of the transfer matrices, while the second one does it for the FBs. Numerical experiments show that the proposed methods offer significant computational savings when compared with other available approaches.
An improved patchwork-based digital audio watermarking in CQT domain. Hu, P.; Yan, Q.; Dong, L.; and Liu, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 920-923, Sep. 2014.
@InProceedings{6952303,\n  author = {P. Hu and Q. Yan and L. Dong and M. Liu},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {An improved patchwork-based digital audio watermarking in CQT domain},\n  year = {2014},\n  pages = {920-923},\n  abstract = {Digital audio watermarking remains one of the most active research topics in multimedia copyright protection. In this paper an improved patchwork-based audio watermarking algorithm in the Constant-Q Transform (CQT) domain is proposed. The advantage of the CQT in music analysis lies in its nonlinear frequency spacing. However, the absence of an exact inverse transform has so far prevented wide application of the CQT. In this paper this is overcome by frame pair selection, which is carefully performed according to frame pair energy ratios in the middle frequency range to avoid disturbing the watermark embedding and degrading the signal quality afterwards. Watermarks are then embedded by modifying the energy of selected frame pairs. The experimental results indicate that the proposed method outperforms the latest patchwork-based audio watermarking algorithm in the Discrete Cosine Transform (DCT) domain, yielding better signal quality of the embedded signals while being more robust to conventional attacks.},\n  keywords = {audio watermarking;copy protection;copyright;discrete cosine transforms;CQT domain;patchwork-based digital audio watermarking;multimedia copyright protection;constant-Q transform domain;music analysis;nonlinear frequency spacing;middle frequency range;discrete cosine transform;Bit error rate;Watermarking;Abstracts;Out of order;Gold;Audio watermarking;CQT;ICQT;frame pairs;patchwork},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924521.pdf},\n}\n\n
Digital audio watermarking remains one of the most active research topics in multimedia copyright protection. In this paper an improved patchwork-based audio watermarking algorithm in the Constant-Q Transform (CQT) domain is proposed. The advantage of the CQT in music analysis lies in its nonlinear frequency spacing. However, the absence of an exact inverse transform has so far prevented wide application of the CQT. In this paper this is overcome by frame pair selection, which is carefully performed according to frame pair energy ratios in the middle frequency range to avoid disturbing the watermark embedding and degrading the signal quality afterwards. Watermarks are then embedded by modifying the energy of selected frame pairs. The experimental results indicate that the proposed method outperforms the latest patchwork-based audio watermarking algorithm in the Discrete Cosine Transform (DCT) domain, yielding better signal quality of the embedded signals while being more robust to conventional attacks.
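For readers unfamiliar with the patchwork principle the paper builds on, a minimal sketch of embedding one bit in the energy ratio of a frame pair; plain time-domain frames stand in for the paper's CQT-domain frames, and delta is an illustrative embedding strength:

```python
import numpy as np

def embed_bit(frame_a, frame_b, bit, delta=1.2):
    # Patchwork principle: hide the bit in the energy ratio of a frame
    # pair by rescaling one frame (delta is an illustrative strength).
    ea, eb = np.sum(frame_a**2), np.sum(frame_b**2)
    g = np.sqrt(delta * eb / ea) if bit else np.sqrt(eb / (delta * ea))
    return frame_a * g, frame_b

def detect_bit(frame_a, frame_b):
    # Blind detection: compare the energies of the received frame pair.
    return int(np.sum(frame_a**2) > np.sum(frame_b**2))

rng = np.random.default_rng(0)
a, b = rng.normal(size=1024), rng.normal(size=1024)
print(detect_bit(*embed_bit(a, b, bit=1)))   # -> 1
```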
An analysis of the effect of larynx-synchronous averaging on dereverberation of voiced speech. Moore, A. H.; Naylor, P. A.; and Skoglund, J. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 924-928, Sep. 2014.
@InProceedings{6952304,\n  author = {A. H. Moore and P. A. Naylor and J. Skoglund},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {An analysis of the effect of larynx-synchronous averaging on dereverberation of voiced speech},\n  year = {2014},\n  pages = {924-928},\n  abstract = {The SMERSH algorithm is a physiologically-motivated approach to low-complexity speech dereverberation. It employs multichannel linear prediction to obtain a reverberant residual signal and subsequent larynx-synchronous temporal averaging to attenuate the reverberation during voiced speech. Experimental results suggest the method is successful but, to date, no detailed analysis of the theoretical basis of the larynx-synchronous averaging has been undertaken. In this paper the SMERSH algorithm is reviewed before focussing on the theoretical basis of its approach. We show that the amount of dereverberation that can be achieved depends on the coherence of reverberation between frames. Simulations show that the extent of dereverberation increases with reverberation time and give an insight into the tradeoff between dereverberation and speech distortion.},\n  keywords = {linear predictive coding;reverberation;speech processing;speech distortion;reverberant residual signal;multichannel linear prediction;SMERSH algorithm;voiced speech dereverberation;larynx synchronous averaging;Speech;Larynx;Reverberation;Microphones;Signal to noise ratio;Estimation;Prediction algorithms;dereverberation;linear prediction},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925061.pdf},\n}\n\n
The SMERSH algorithm is a physiologically-motivated approach to low-complexity speech dereverberation. It employs multichannel linear prediction to obtain a reverberant residual signal and subsequent larynx-synchronous temporal averaging to attenuate the reverberation during voiced speech. Experimental results suggest the method is successful but, to date, no detailed analysis of the theoretical basis of the larynx-synchronous averaging has been undertaken. In this paper the SMERSH algorithm is reviewed before focussing on the theoretical basis of its approach. We show that the amount of dereverberation that can be achieved depends on the coherence of reverberation between frames. Simulations show that the extent of dereverberation increases with reverberation time and give an insight into the tradeoff between dereverberation and speech distortion.
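A minimal sketch of the synchronous-averaging idea the analysis rests on: frames one pitch period long add coherently for the periodic voiced component, while a residual that is incoherent from frame to frame is attenuated by roughly 1/sqrt(n_frames). The pitch period is assumed known and constant here, which the paper of course does not assume:

```python
import numpy as np

def larynx_synchronous_average(x, period, n_frames=8):
    # Average consecutive frames of one pitch period each: the periodic
    # voiced component adds coherently, while a residual that is
    # incoherent from frame to frame drops by about 1/sqrt(n_frames).
    frames = [x[k * period:(k + 1) * period] for k in range(n_frames)]
    return np.mean(frames, axis=0)

fs, f0 = 16000, 100                      # assumed sampling rate and pitch
period = fs // f0
t = np.arange(10 * period) / fs
voiced = np.sin(2 * np.pi * f0 * t)      # perfectly periodic stand-in
resid = 0.5 * np.random.default_rng(1).normal(size=t.size)
avg = larynx_synchronous_average(voiced + resid, period)
```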
Speech enhancement with a low-complexity online source number estimator using distributed arrays. Taseska, M.; Khan, A. H.; and Habets, E. A. P. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 929-933, Sep. 2014.
@InProceedings{6952305,\n  author = {M. Taseska and A. H. Khan and E. A. P. Habets},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Speech enhancement with a low-complexity online source number estimator using distributed arrays},\n  year = {2014},\n  pages = {929-933},\n  abstract = {Enhancement of a desired speech signal in the presence of background noise and interferers is required in various modern communication systems. Existing multichannel techniques often require that the number of sources and their locations are known in advance, which makes them inapplicable in many practical situations. We propose a framework which uses the microphones of distributed arrays to enhance a desired speech signal by reducing background noise and an initially unknown number of interferers. The desired signal is extracted by a minimum variance distortionless response filter in dynamic scenarios where the number of active interferers is time-varying. An efficient, geometry-based approach that estimates the number of active interferers and their locations online is proposed. The overall performance is compared to the one of a geometry-based probabilistic framework for source extraction, recently proposed by the authors.},\n  keywords = {filtering theory;probability;speech enhancement;speech enhancement;low complexity online source number estimator;distributed arrays;speech signal;multichannel techniques;microphones;background noise;distortionless response filter;geometry based approach;probabilistic framework;Speech;Direction-of-arrival estimation;Noise;Estimation;Microphone arrays;Vectors;Source extraction;PSD matrix estimation;distributed arrays;number of sources},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925297.pdf},\n}\n\n
Enhancement of a desired speech signal in the presence of background noise and interferers is required in various modern communication systems. Existing multichannel techniques often require that the number of sources and their locations are known in advance, which makes them inapplicable in many practical situations. We propose a framework which uses the microphones of distributed arrays to enhance a desired speech signal by reducing background noise and an initially unknown number of interferers. The desired signal is extracted by a minimum variance distortionless response filter in dynamic scenarios where the number of active interferers is time-varying. An efficient, geometry-based approach that estimates the number of active interferers and their locations online is proposed. The overall performance is compared to the one of a geometry-based probabilistic framework for source extraction, recently proposed by the authors.
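The extraction filter named in the abstract is the standard MVDR beamformer; the paper's contribution is estimating, online, the number and positions of the interferers that shape its covariance input. A minimal sketch of the filter itself (steering vector, covariance and loading factor below are illustrative):

```python
import numpy as np

def mvdr_weights(R, d, loading=1e-3):
    # Minimum variance distortionless response filter:
    #   w = R^{-1} d / (d^H R^{-1} d),  with diagonal loading for robustness.
    Rl = R + loading * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    Rinv_d = np.linalg.solve(Rl, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy usage: 4 mics, desired steering vector d, one strong interferer in R.
M = 4
d = np.exp(-1j * np.pi * np.arange(M) * np.sin(0.3))   # assumed ULA steering
v = np.exp(-1j * np.pi * np.arange(M) * np.sin(-0.8))  # interferer direction
R = 10 * np.outer(v, v.conj()) + np.eye(M)             # interference + noise
w = mvdr_weights(R, d)
print(abs(w.conj() @ d))   # ~1: distortionless toward the desired source
```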
Spatio-temporal audio enhancement based on IAA noise covariance matrix estimates. Nørholm, S. M.; Jensen, J. R.; and Christensen, M. G. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 934-938, Sep. 2014.
@InProceedings{6952306,\n  author = {S. M. Nørholm and J. R. Jensen and M. G. Christensen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Spatio-temporal audio enhancement based on IAA noise covariance matrix estimates},\n  year = {2014},\n  pages = {934-938},\n  abstract = {A method for estimating the noise covariance matrix in a multichannel setup is proposed. The method is based on the iterative adaptive approach (IAA), which only needs short segments of data to estimate the covariance matrix. Therefore, the method can be used for fast varying signals. The method is based on an assumption of the desired signal being harmonic, which is used for estimating the noise covariance matrix from the covariance matrix of the observed signal. The noise covariance estimate is used in the linearly constrained minimum variance (LCMV) filter and compared to an amplitude and phase estimation (APES) based filter. For a fixed number of samples, the performance in terms of signal-to-noise ratio can be increased by using the IAA method, whereas if the filter size is fixed and the number of samples in the APES based filter is increased, the APES based filter performs better.},\n  keywords = {adaptive estimation;adaptive filters;audio signal processing;covariance matrices;iterative methods;phase estimation;signal-to-noise ratio;APES based filter;amplitude and phase estimation;LCMV filter;linearly constrained minimum variance filter;fast varying signals;iterative adaptive approach;IAA noise covariance matrix estimation;spatio-temporal audio enhancement;Covariance matrices;Harmonic analysis;Speech;Signal to noise ratio;Frequency estimation;Estimation;Speech enhancement;iterative adaptive approach;multichannel;covariance estimates;harmonic signal model},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925405.pdf},\n}\n\n
A method for estimating the noise covariance matrix in a multichannel setup is proposed. The method is based on the iterative adaptive approach (IAA), which only needs short segments of data to estimate the covariance matrix. Therefore, the method can be used for fast varying signals. The method is based on an assumption of the desired signal being harmonic, which is used for estimating the noise covariance matrix from the covariance matrix of the observed signal. The noise covariance estimate is used in the linearly constrained minimum variance (LCMV) filter and compared to an amplitude and phase estimation (APES) based filter. For a fixed number of samples, the performance in terms of signal-to-noise ratio can be increased by using the IAA method, whereas if the filter size is fixed and the number of samples in the APES based filter is increased, the APES based filter performs better.
A montage approach to sound texture synthesis. O'Leary, S.; and Röbel, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 939-943, Sep. 2014.
@InProceedings{6952307,\n  author = {S. O'Leary and A. Röbel},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A montage approach to sound texture synthesis},\n  year = {2014},\n  pages = {939-943},\n  abstract = {In this paper a novel algorithm for sound texture synthesis is presented. The goal of this algorithm is to produce new examples of a given sampled texture, the synthesized textures being of any desired duration. The algorithm is based on a montage approach to synthesis in that the synthesized texture is made up of pieces of the original sample concatenated together in a new sequence. This montage approach preserves both the high level evolution and low level detail of the original texture.},\n  keywords = {Fourier transforms;signal synthesis;sound texture synthesis;montage approach;STFT algorithm;short time Fourier transform algorithm;Atomic clocks;Atomic measurements;History;Sequential analysis;Algorithm design and analysis;Correlation;Probabilistic logic;Sound;texture;synthesis;concatenative},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925599.pdf},\n}\n\n
In this paper a novel algorithm for sound texture synthesis is presented. The goal of this algorithm is to produce new examples of a given sampled texture, the synthesized textures being of any desired duration. The algorithm is based on a montage approach to synthesis in that the synthesized texture is made up of pieces of the original sample concatenated together in a new sequence. This montage approach preserves both the high level evolution and low level detail of the original texture.
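A crude sketch of the concatenative assembly step: chunks of the original recording are re-sequenced and cross-faded. Chunk selection here is random, purely to illustrate the montage mechanics; the paper's selection is what preserves the texture's evolution:

```python
import numpy as np

def montage_synthesis(sample, out_len, chunk=4096, overlap=512, seed=0):
    # Re-sequence pieces of the original recording, cross-fading at the
    # joins. Random chunk choice is a stand-in for the paper's
    # similarity-driven selection.
    rng = np.random.default_rng(seed)
    fade_in = np.linspace(0.0, 1.0, overlap)
    out = np.zeros(out_len)
    pos = 0
    while pos + chunk <= out_len:
        start = rng.integers(0, len(sample) - chunk)
        piece = sample[start:start + chunk].copy()
        if pos > 0:                      # cross-fade with the existing tail
            piece[:overlap] *= fade_in
            out[pos:pos + overlap] *= 1.0 - fade_in
        out[pos:pos + chunk] += piece
        pos += chunk - overlap
    return out

sample = np.random.default_rng(1).normal(size=48000)   # stand-in texture
texture = montage_synthesis(sample, 3 * len(sample))
```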
Robust feature extractors for continuous speech recognition. Alam, M. J.; Kenny, P.; Dumouchel, P.; and O'Shaughnessy, D. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 944-948, Sep. 2014.
@InProceedings{6952308,\n  author = {M. J. Alam and P. Kenny and P. Dumouchel and D. O'Shaughnessy},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Robust feature extractors for continuous speech recognition},\n  year = {2014},\n  pages = {944-948},\n  abstract = {This paper presents robust feature extractors for a continuous speech recognition task in matched and mismatched environments. The mismatched conditions may occur due to additive noise, different channel, and acoustic reverberation. In the conventional Mel-frequency cepstral coefficient (MFCC) feature extraction framework, a subband spectrum enhancement technique is incorporated to improve its robustness. We denote this front-end as robust MFCCs (RMFCC). Based on the gammatone and compressive gammachirp filter-banks, robust gammatone filterbank cepstral coefficients (RGFCC) and robust compressive gammachirp filterbank cepstral coefficients (RCGCC) are also presented for comparison. We also employ low-variance spectrum estimators such as multitaper, regularized minimum-variance distortionless response (RMVDR), instead of a discrete Fourier transform-based direct spectrum estimator for improving robustness against mismatched environments. Speech recognition performances of the robust feature extractors are evaluated in clean as well as multi-style training conditions of the AURORA-4 continuous speech recognition task. Experimental results depict that the RMFCC and low-variance spectrum-estimators-based robust feature extractors outperformed the MFCC, PNCC (power normalized cepstral coefficients), and ETSI-AFE features both in clean and multi-condition training conditions.},\n  keywords = {channel bank filters;discrete Fourier transforms;feature extraction;speech recognition;AURORA-4 continuous speech recognition task;low-variance spectrum estimators;RCGCC;robust compressive gammachirp filterbank cepstral coefficients;RGFCC;robust gammatone filterbank cepstral coefficients;robust MFCC;conventional Mel-frequency cepstral coefficient feature extraction framework;robust feature extractors;Feature extraction;Speech recognition;Robustness;Training;Speech;Mel frequency cepstral coefficient;Robust feature extractor;speech recognition;multi-style training;aurora 4;multitaper},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925601.pdf},\n}\n\n
This paper presents robust feature extractors for a continuous speech recognition task in matched and mismatched environments. The mismatched conditions may occur due to additive noise, different channel, and acoustic reverberation. In the conventional Mel-frequency cepstral coefficient (MFCC) feature extraction framework, a subband spectrum enhancement technique is incorporated to improve its robustness. We denote this front-end as robust MFCCs (RMFCC). Based on the gammatone and compressive gammachirp filter-banks, robust gammatone filterbank cepstral coefficients (RGFCC) and robust compressive gammachirp filterbank cepstral coefficients (RCGCC) are also presented for comparison. We also employ low-variance spectrum estimators such as multitaper, regularized minimum-variance distortionless response (RMVDR), instead of a discrete Fourier transform-based direct spectrum estimator for improving robustness against mismatched environments. Speech recognition performances of the robust feature extractors are evaluated in clean as well as multi-style training conditions of the AURORA-4 continuous speech recognition task. Experimental results depict that the RMFCC and low-variance spectrum-estimators-based robust feature extractors outperformed the MFCC, PNCC (power normalized cepstral coefficients), and ETSI-AFE features both in clean and multi-condition training conditions.
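For reference, the conventional MFCC chain that the RMFCC front-end modifies; all constants below are typical textbook values, not those of the paper:

```python
import numpy as np

def mfcc(x, fs=16000, n_fft=512, n_mel=26, n_ceps=13):
    # Standard MFCC chain: pre-emphasis, Hamming frames, power spectrum,
    # mel filterbank, log, DCT. (The paper replaces pieces of this chain.)
    x = np.append(x[0], x[1:] - 0.97 * x[:-1])            # pre-emphasis
    hop, win = n_fft // 2, np.hamming(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft, hop)]
    P = np.abs(np.fft.rfft(frames, n_fft, axis=1))**2     # power spectrum
    # Triangular mel filterbank
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10**(m / 2595) - 1)
    edges = imel(np.linspace(mel(0), mel(fs / 2), n_mel + 2))
    bins = np.floor((n_fft + 1) * edges / fs).astype(int)
    fb = np.zeros((n_mel, n_fft // 2 + 1))
    for j in range(n_mel):
        l, c, r = bins[j], bins[j + 1], bins[j + 2]
        fb[j, l:c] = np.linspace(0, 1, c - l, endpoint=False)
        fb[j, c:r] = np.linspace(1, 0, r - c, endpoint=False)
    logE = np.log(P @ fb.T + 1e-10)
    # Type-II DCT for the cepstral coefficients
    n = np.arange(n_mel)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mel)))
    return logE @ dct.T

C = mfcc(np.random.default_rng(0).normal(size=16000))    # (n_frames, 13)
```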
Topic dependent language modelling for spoken term detection. Kalantari, S.; Dean, D.; Sridharan, S.; and Wallace, R. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 949-953, Sep. 2014.
@InProceedings{6952309,\n  author = {S. Kalantari and D. Dean and S. Sridharan and R. Wallace},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Topic dependent language modelling for spoken term detection},\n  year = {2014},\n  pages = {949-953},\n  abstract = {This paper investigates the effect of topic dependent language models (TDLM) on phonetic spoken term detection (STD) using dynamic match lattice spotting (DMLS). Phonetic STD consists of two steps: indexing and search. The accuracy of indexing audio segments into phone sequences using phone recognition methods directly affects the accuracy of the final STD system. If the topic of a document is known, recognizing the spoken words and indexing them to an intermediate representation is an easier task and consequently, detecting a search word in it will be more accurate and robust. In this paper, we propose the use of TDLMs in the indexing stage to improve the accuracy of STD in situations where the topic of the audio document is known in advance. It is shown that using TDLMs instead of the traditional general language model (GLM) improves STD performance according to figure of merit (FOM) criteria.},\n  keywords = {indexing;speech recognition;topic dependent language modelling;spoken term detection;TDLM;STD;dynamic match lattice spotting;phone sequences;phone recognition methods;general language model;GLM;figure of merit criteria;indexing stage;search stage;Indexing;Accuracy;Speech;Lattices;Speech recognition;Hidden Markov models;spoken term detection;language modelling;indexing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926121.pdf},\n}\n\n
This paper investigates the effect of topic dependent language models (TDLM) on phonetic spoken term detection (STD) using dynamic match lattice spotting (DMLS). Phonetic STD consists of two steps: indexing and search. The accuracy of indexing audio segments into phone sequences using phone recognition methods directly affects the accuracy of the final STD system. If the topic of a document is known, recognizing the spoken words and indexing them to an intermediate representation is an easier task and consequently, detecting a search word in it will be more accurate and robust. In this paper, we propose the use of TDLMs in the indexing stage to improve the accuracy of STD in situations where the topic of the audio document is known in advance. It is shown that using TDLMs instead of the traditional general language model (GLM) improves STD performance according to figure of merit (FOM) criteria.
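The core idea, stated in one line: replace the general language model probability used during indexing with a topic-conditioned one. A common realization (assumed here; the abstract does not state how the TDLMs are built) is linear interpolation with the general model:

```python
def interp_prob(word, history, p_topic, p_general, lam=0.7):
    # Linearly interpolate a topic-dependent LM with the general LM;
    # lam would be tuned on held-out data (the value here is illustrative).
    return lam * p_topic(word, history) + (1.0 - lam) * p_general(word, history)
```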
An automatic system for microphone self-localization using ambient sound. Zhayida, S.; Andersson, F.; Kuang, Y.; and Åström, K. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 954-958, Sep. 2014.
@InProceedings{6952310,\n  author = {S. Zhayida and F. Andersson and Y. Kuang and K. Åström},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {An automatic system for microphone self-localization using ambient sound},\n  year = {2014},\n  pages = {954-958},\n  abstract = {In this paper, we develop a system for microphone self-localization based on ambient sound, without any assumptions on the 3D locations of the microphones and sound sources. We aim at developing a system capable of dealing with multiple moving sound sources. We will show that this is possible given that there are instances where there is only one dominating sound source. In the first step of the system we employ a feature detection and matching strategy. This produces TDOA data, possibly with missing data and with outliers. Then we use a robust and stratified approach for the parameter estimation. We use robust techniques to calculate initial estimates of the offset parameters, followed by nonlinear optimization based on a rank criterion. Sequentially we use robust methods for calculating initial estimates of the sound source positions and microphone positions, followed by non-linear Maximum Likelihood estimation of all parameters. The methods are tested and verified using anechoic chamber sound recordings.},\n  keywords = {acoustic signal processing;array signal processing;feature extraction;maximum likelihood estimation;microphone arrays;nonlinear programming;time-of-arrival estimation;time-difference-of-arrival;anechoic chamber sound recordings;nonlinear maximum likelihood estimation;microphone positions;sound source positions;robust methods;rank criterion;nonlinear optimization;parameter estimation;TDOA data;feature matching strategy;feature detection strategy;multiple moving sound sources;3D locations;ambient sound;microphone self-localization;Microphones;Vectors;Three-dimensional displays;Robustness;Calibration;Arrays;Estimation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923937.pdf},\n}\n\n
In this paper, we develop a system for microphone self-localization based on ambient sound, without any assumptions on the 3D locations of the microphones and sound sources. We aim at developing a system capable of dealing with multiple moving sound sources. We will show that this is possible given that there are instances where there is only one dominating sound source. In the first step of the system we employ a feature detection and matching strategy. This produces TDOA data, possibly with missing data and with outliers. Then we use a robust and stratified approach for the parameter estimation. We use robust techniques to calculate initial estimates of the offset parameters, followed by nonlinear optimization based on a rank criterion. Sequentially we use robust methods for calculating initial estimates of the sound source positions and microphone positions, followed by non-linear Maximum Likelihood estimation of all parameters. The methods are tested and verified using anechoic chamber sound recordings.
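The first stage produces TDOA measurements from matched features. As an assumed baseline (the paper's actual feature detection and matching strategy is more involved), the standard way to obtain a TDOA from one microphone pair is GCC-PHAT:

```python
import numpy as np

def gcc_phat_tdoa(x, y, fs):
    # GCC-PHAT: whiten the cross-spectrum so the correlation peak depends
    # only on the relative delay; returns the delay of y relative to x.
    n = 2 * max(len(x), len(y))
    S = np.fft.rfft(x, n) * np.conj(np.fft.rfft(y, n))
    r = np.fft.irfft(S / (np.abs(S) + 1e-12), n)
    r = np.concatenate((r[-n // 2:], r[:n // 2]))     # centre zero lag
    return -(np.argmax(np.abs(r)) - n // 2) / fs

s = np.random.default_rng(2).normal(size=4096)
d = 23                                                # true delay in samples
x, y = s, np.concatenate((np.zeros(d), s[:-d]))
print(gcc_phat_tdoa(x, y, 16000) * 16000)             # ~ +23
```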
Perceptual coding-based Informed Source Separation. Kırbız, S.; Ozerov, A.; Liutkus, A.; and Girin, L. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 959-963, Sep. 2014.
@InProceedings{6952311,\n  author = {S. Kırbız and A. Ozerov and A. Liutkus and L. Girin},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Perceptual coding-based Informed Source Separation},\n  year = {2014},\n  pages = {959-963},\n  abstract = {Informed Source Separation (ISS) techniques enable manipulation of the source signals that compose an audio mixture, based on a coder-decoder configuration. Provided the source signals are known at the encoder, a low-bitrate side-information is sent to the decoder and permits to achieve efficient source separation. Recent research has focused on a Coding-based ISS framework, which has an advantage to encode the desired audio objects, while exploiting their mixture in an information-theoretic framework. Here, we show how the perceptual quality of the separated sources can be improved by inserting perceptual source coding techniques in this framework, achieving a continuum of optimal bitrate-perceptual distortion trade-offs.},\n  keywords = {audio coding;codecs;decoding;source coding;source separation;perceptual coding-based informed source separation;ISS techniques;source signal manipulation;audio mixture;coder-decoder configuration;low-bitrate side-information;audio object encoding;information-theoretic framework;separated source perceptual quality;perceptual source coding techniques;optimal bitrate-perceptual distortion trade-offs;Psychoacoustic models;Tensile stress;Source separation;Source coding;Bit rate;Decoding;Informed source separation;source coding;perceptual models},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925139.pdf},\n}\n\n
Informed Source Separation (ISS) techniques enable manipulation of the source signals that compose an audio mixture, based on a coder-decoder configuration. Provided the source signals are known at the encoder, a low-bitrate side-information is sent to the decoder and permits to achieve efficient source separation. Recent research has focused on a Coding-based ISS framework, which has an advantage to encode the desired audio objects, while exploiting their mixture in an information-theoretic framework. Here, we show how the perceptual quality of the separated sources can be improved by inserting perceptual source coding techniques in this framework, achieving a continuum of optimal bitrate-perceptual distortion trade-offs.
On estimation error outage for scalar Gauss-Markov processes sent over fading channels. Parseh, R.; and Kansanen, K. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 964-968, Sep. 2014.
@InProceedings{6952312,\n  author = {R. Parseh and K. Kansanen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On estimation error outage for scalar Gauss-Markov processes sent over fading channels},\n  year = {2014},\n  pages = {964-968},\n  abstract = {Measurements of a complex scalar linear Gauss-Markov process are sent over a fading channel. The fading channel is modeled as independent and identically distributed complex normal random variables with known realization at the decoder. The optimal estimator at the decoder is the Kalman filter with random instantaneous gain and error variance. To evaluate the quality of estimation at the receiver, the probability distribution function of the instantaneous estimation error variance and its outage probability are of interest. For the special case of the Rayleigh fading channels, upper and lower bounds for the outage probability are derived which provide insight and simple means for design purposes.},\n  keywords = {estimation theory;Gaussian processes;Kalman filters;Markov processes;probability;Rayleigh channels;telecommunication network reliability;estimation error outage;scalar Gauss Markov processes;complex normal random variables;Kalman filter;probability distribution function;instantaneous estimation error variance;Rayleigh fading channels;outage probability;Frequency modulation;Fading;Estimation error;Kalman filters;Channel estimation;Equations;Estimation Over Fading Channels;Kalman Filter;Outage Probability},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569911831.pdf},\n}\n\n
Measurements of a complex scalar linear Gauss-Markov process are sent over a fading channel. The fading channel is modeled as independent and identically distributed complex normal random variables with known realization at the decoder. The optimal estimator at the decoder is the Kalman filter with random instantaneous gain and error variance. To evaluate the quality of estimation at the receiver, the probability distribution function of the instantaneous estimation error variance and its outage probability are of interest. For the special case of the Rayleigh fading channels, upper and lower bounds for the outage probability are derived which provide insight and simple means for design purposes.
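A minimal Monte Carlo sketch of the setup: the scalar Kalman error-variance recursion with a random Rayleigh channel gain, and the outage probability estimated empirically. The model constants are illustrative; the paper derives analytic bounds instead of simulating:

```python
import numpy as np

rng = np.random.default_rng(3)
a, q, r = 0.95, 1 - 0.95**2, 0.1      # stationary unit-variance Gauss-Markov
n_steps, n_runs, gamma = 200, 2000, 0.2

outage = 0
for _ in range(n_runs):
    p = 1.0                            # prior error variance
    for _ in range(n_steps):
        p_pred = a**2 * p + q                  # prediction variance
        g2 = rng.exponential()                 # |h|^2, Rayleigh fading, E=1
        p = p_pred * r / (g2 * p_pred + r)     # posterior error variance
    outage += (p > gamma)
print("empirical outage probability:", outage / n_runs)
```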
Numerical investigations on the quasi-stationary response of antennas to wideband LFMCW excitation. Gardill, M.; Kay, D.; Weigel, R.; and Koelpin, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 969-973, Sep. 2014.
@InProceedings{6952333,\n  author = {M. Gardill and D. Kay and R. Weigel and A. Koelpin},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Numerical investigations on the quasi-stationary response of antennas to wideband LFMCW excitation},\n  year = {2014},\n  pages = {969-973},\n  abstract = {In this contribution, we numerically investigate if the quasi-stationary (QS) response is a valid approximation for the response of antennas excited by wideband linear-frequency modulated continuous waveforms. We give results for two idealized example systems, showing how the validity of the QS response is dependent on the system's resonant behavior. It will be shown that the error between exact output and QS response is approximately linearly dependent on the sweep-rate of the linear-frequency modulated excitation. We then conduct our simulations for a realistic wideband radar system operating from 5 GHz to 8 GHz and using impulse responses extracted from electromagnetic simulations of a dipole and a biconical antenna.},\n  keywords = {broadband antennas;conical antennas;CW radar;dipole antennas;FM radar;linear antennas;numerical analysis;radar antennas;transient response;numerical investigation;wideband LFMCW excitation;quasistationary antenna response;antenna QS response;wideband linear frequency modulated continuous waveform;wideband radar system;impulse response extraction;dipole electromagnetic simulation;biconical antenna electromagnetic simulation;frequency 5 GHz to 8 GHz;Radar antennas;Wideband;Radar;Dipole antennas;Broadband antennas;Approximation methods;Quasi-Stationary;FMCW;Antennas},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569913779.pdf},\n}\n\n
In this contribution, we numerically investigate if the quasi-stationary (QS) response is a valid approximation for the response of antennas excited by wideband linear-frequency modulated continuous waveforms. We give results for two idealized example systems, showing how the validity of the QS response is dependent on the system's resonant behavior. It will be shown that the error between exact output and QS response is approximately linearly dependent on the sweep-rate of the linear-frequency modulated excitation. We then conduct our simulations for a realistic wideband radar system operating from 5 GHz to 8 GHz and using impulse responses extracted from electromagnetic simulations of a dipole and a biconical antenna.
Multidimensional Cramér-Rao lower bound for non-uniformly sampled NMR signals. Månsson, A.; Jakobsson, A.; and Akke, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 974-978, Sep. 2014.
@InProceedings{6952334,\n  author = {A. Månsson and A. Jakobsson and M. Akke},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Multidimensional Cramér-Rao lower bound for non-uniformly sampled NMR signals},\n  year = {2014},\n  pages = {974-978},\n  abstract = {In this work, we extend recent results on the Cramér-Rao lower bound for multidimensional non-uniformly sampled Nuclear Magnetic Resonance (NMR) signals. The used signal model is more general than earlier models, allowing for the typically present variance differences between the direct and the different indirect sampling dimensions. The presented bound is verified with earlier presented 1- and R-dimensional bounds as well as with the obtainable estimation accuracy using the statistically efficient non-linear least squares estimator. Finally, the usability of the presented bound is illustrated as a measure of the obtainable accuracy using three different sampling schemes for a real 15N-HSQC NMR experiment.},\n  keywords = {biomedical NMR;least squares approximations;medical signal processing;signal sampling;multidimensional Cramér-Rao lower bound;nonuniformly sampled NMR signals;nuclear magnetic resonance signals;nonlinear least squares estimator;real 15N-HSQC NMR experiment;sampling schemes;Signal to noise ratio;Abstracts;Robustness;Sensors;Rivers;NMR spectroscopy;Cramér-Rao lower bound;Non-uniform sampling},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
In this work, we extend recent results on the Cramér-Rao lower bound for multidimensional non-uniformly sampled Nuclear Magnetic Resonance (NMR) signals. The used signal model is more general than earlier models, allowing for the typically present variance differences between the direct and the different indirect sampling dimensions. The presented bound is verified with earlier presented 1- and R-dimensional bounds as well as with the obtainable estimation accuracy using the statistically efficient non-linear least squares estimator. Finally, the usability of the presented bound is illustrated as a measure of the obtainable accuracy using three different sampling schemes for a real 15N-HSQC NMR experiment.
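As a simplified 1-D analogue of the bound discussed (one damped complex exponential on an arbitrary sampling grid in circular white Gaussian noise, which matches the NMR signal model in miniature), the CRB follows from the Fisher information F = (2/σ²) Re(JᴴJ):

```python
import numpy as np

def crb_damped_sinusoid(t, a, phi, beta, f, sigma2):
    # Fisher information for y(t_n) = a*exp(i*phi)*exp((-beta + i*2*pi*f)*t_n)
    # in complex white Gaussian noise: F = (2/sigma2) * Re(J^H J), evaluated
    # on an arbitrary (non-uniform) sampling grid t.
    mu = a * np.exp(1j * phi) * np.exp((-beta + 2j * np.pi * f) * t)
    J = np.stack([mu / a,                 # d mu / d a
                  1j * mu,                # d mu / d phi
                  -t * mu,                # d mu / d beta
                  2j * np.pi * t * mu],   # d mu / d f
                 axis=1)
    F = 2.0 / sigma2 * np.real(J.conj().T @ J)
    return np.diag(np.linalg.inv(F))      # CRBs for (a, phi, beta, f)

t = np.sort(np.random.default_rng(4).uniform(0, 1, 64))  # non-uniform grid
print(crb_damped_sinusoid(t, a=1.0, phi=0.3, beta=2.0, f=25.0, sigma2=0.01))
```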
A low complexity coherent CPM receiver with modulation index estimation. Messai, M.; Guilloud, F.; and Amis, K. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 979-983, Sep. 2014.
@InProceedings{6952335,\n  author = {M. Messai and F. Guilloud and K. Amis},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A low complexity coherent CPM receiver with modulation index estimation},\n  year = {2014},\n  pages = {979-983},\n  abstract = {In this paper we address the problem of low-complexity coherent detection of continuous phase modulation (CPM) signals. We exploit the per-survivor-process technique to build a reduced-state trellis and apply a Viterbi algorithm with modified metrics. In the case where the modulation index can vary, we propose a maximum-likelihood (ML) estimation of the modulation index and compare the performance of the resulting structure with a non-coherent receiver structure of the state of the art. Simulations on an additive white Gaussian noise (AWGN) channel both for binary and M-ary CPM show the efficiency of the proposed receiver.},\n  keywords = {AWGN channels;continuous phase modulation;maximum likelihood estimation;radio receivers;signal detection;M-ary CPM;binary CPM;AWGN channel;additive white Gaussian noise channel;noncoherent receiver structure;maximum-likelihood estimation;Viterbi algorithm;reduced-state trellis;per-survivor-process technique;CPM signals;continuous phase modulation signals;low-complexity coherent detection;modulation index estimation;CPM receiver;Receivers;Modulation;Measurement;Maximum likelihood estimation;Bluetooth;Viterbi algorithm;CPM;modulation index estimation;persurvivor processing;reduced-complexity;Viterbi decoding;modulation index mismatch},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922777.pdf},\n}\n\n
In this paper we address the problem of low-complexity coherent detection of continuous phase modulation (CPM) signals. We exploit the per-survivor-process technique to build a reduced-state trellis and apply a Viterbi algorithm with modified metrics. In the case where the modulation index can vary, we propose a maximum-likelihood (ML) estimation of the modulation index and compare the performance of the resulting structure with a non-coherent receiver structure of the state of the art. Simulations on an additive white Gaussian noise (AWGN) channel both for binary and M-ary CPM show the efficiency of the proposed receiver.
Cramér-Rao Bound for finite streams of pulses. Bernhardt, S.; Boyer, R.; Marcos, S.; Eldar, Y. C.; and Larzabal, P. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 984-988, Sep. 2014.
@InProceedings{6952336,\n  author = {S. Bernhardt and R. Boyer and S. Marcos and Y. C. Eldar and P. Larzabal},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Cramér-Rao Bound for finite streams of pulses},\n  year = {2014},\n  pages = {984-988},\n  abstract = {Sampling a finite stream of filtered pulses violates the bandlimited assumption of the Nyquist-Shannon sampling theory. However, recent low rate sampling schemes have shown that these sparse signals can be sampled with perfect reconstruction at their rate of innovation. To reach this goal in the presence of noise, an estimation procedure is needed to estimate the time-delay and the amplitudes of each pulse. To assess the quality of any unbiased estimator, it is standard to use the Cramér-Rao Bound (CRB) which provides a lower bound on the Mean Squared Error (MSE) of any unbiased estimator. In this work, analytic expressions of the Cramér-Rao Bound are proposed for an arbitrary number of filtered pulses. Using orthogonality properties on the filtering kernels, an approximate compact expression of the CRB is provided. The choice of the kernel is discussed from the point of view of the estimation accuracy.},\n  keywords = {delay estimation;filtering theory;mean square error methods;signal reconstruction;signal sampling;Cramér-Rao bound;finite pulse stream sampling;Nyquist-Shannon sampling theory;low rate sampling schemes;sparse signals;estimation procedure;time-delay estimation;unbiased estimator quality;lower bound;mean squared error;MSE;analytic expressions;filtering kernels;orthogonality properties;filtered pulses;CRB approximate compact expression;Kernel;Approximation methods;Noise;Technological innovation;Estimation;Vectors},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
Sampling a finite stream of filtered pulses violates the bandlimited assumption of the Nyquist-Shannon sampling theory. However, recent low rate sampling schemes have shown that these sparse signals can be sampled with perfect reconstruction at their rate of innovation. To reach this goal in the presence of noise, an estimation procedure is needed to estimate the time-delay and the amplitudes of each pulse. To assess the quality of any unbiased estimator, it is standard to use the Cramér-Rao Bound (CRB) which provides a lower bound on the Mean Squared Error (MSE) of any unbiased estimator. In this work, analytic expressions of the Cramér-Rao Bound are proposed for an arbitrary number of filtered pulses. Using orthogonality properties on the filtering kernels, an approximate compact expression of the CRB is provided. The choice of the kernel is discussed from the point of view of the estimation accuracy.
Instantaneous parameters estimation algorithm for noisy AM-FM oscillatory signals. Azarov, E.; Vashkevich, M.; and Petrovsky, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 989-993, Sep. 2014.
@InProceedings{6952337,\n  author = {E. Azarov and M. Vashkevich and A. Petrovsky},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Instantaneous parameters estimation algorithm for noisy AM-FM oscillatory signals},\n  year = {2014},\n  pages = {989-993},\n  abstract = {The paper addresses the problem of estimation of amplitude envelope and instantaneous frequency of an amplitude and frequency modulated (AM-FM) signal in noisy conditions. The algorithm proposed in the paper utilizes derivatives of the signal and is analogous to well-known Energy Separation Algorithms (ESA) based on Teager-Kaiser energy operator (TEO). The formulation of the algorithm is based on Prony's method that provides estimates of phase and damping factor as well. Compared to ESA the proposed algorithm has a very close performance for pure oscillatory signals and a better performance for signals with additive white noise.},\n  keywords = {amplitude modulation;AWGN;frequency modulation;parameter estimation;signal processing;instantaneous parameter estimation algorithm;noisy AM-FM oscillatory signals;amplitude envelope;instantaneous frequency;energy separation algorithms;Teager-Kaiser energy operator;TEO;Prony's method;additive white noise;Signal processing algorithms;Estimation;Frequency estimation;Algorithm design and analysis;Damping;Signal to noise ratio;Time-frequency analysis;estimation of instantaneous frequency;Teager-Kaiser energy operator;Prony's method},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924699.pdf},\n}\n\n
The paper addresses the problem of estimation of amplitude envelope and instantaneous frequency of an amplitude and frequency modulated (AM-FM) signal in noisy conditions. The algorithm proposed in the paper utilizes derivatives of the signal and is analogous to well-known Energy Separation Algorithms (ESA) based on the Teager-Kaiser energy operator (TEO). The formulation of the algorithm is based on Prony's method that provides estimates of phase and damping factor as well. Compared to ESA the proposed algorithm has a very close performance for pure oscillatory signals and a better performance for signals with additive white noise.
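The TEO-based reference method the paper compares against is classical and compact enough to sketch: DESA-1 (Maragos, Kaiser and Quatieri, 1993) recovers instantaneous frequency and amplitude envelope from the energy operator of the signal and of its backward difference:

```python
import numpy as np

def teo(x):
    # Teager-Kaiser energy operator: Psi[x](n) = x(n)^2 - x(n-1) * x(n+1)
    return x[1:-1]**2 - x[:-2] * x[2:]

def desa1(x):
    # DESA-1 energy separation: instantaneous frequency (rad/sample)
    # and amplitude envelope from Psi of x and of its difference signal.
    y = np.diff(x)
    px, py = teo(x), teo(y)
    num = py[:-1] + py[1:]                 # Psi[y](n) + Psi[y](n+1)
    pxc = px[1:-1]                         # Psi[x](n) on the same samples
    cos_omega = np.clip(1.0 - num / (4.0 * pxc), -1.0, 1.0)
    omega = np.arccos(cos_omega)
    amp = np.sqrt(np.abs(pxc)) / np.maximum(np.sin(omega), 1e-12)
    return omega, amp

x = 0.8 * np.cos(0.2 * np.arange(1000) + 0.5)   # constant-frequency test tone
omega, amp = desa1(x)
print(omega.mean(), amp.mean())                 # ~0.2 rad/sample, ~0.8
```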
Analysis of coloured noise in received signal strength using the Allan Variance. Luo, C.; Casaseca-de-la-Higuera, P.; McClean, S.; Parr, G.; and Grecos, C. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 994-998, Sep. 2014.
@InProceedings{6952338,\n  author = {C. Luo and P. Casaseca-de-la-Higuera and S. McClean and G. Parr and C. Grecos},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Analysis of coloured noise in received signal strength using the Allan Variance},\n  year = {2014},\n  pages = {994-998},\n  abstract = {The received signal strength (RSS) of wireless signals has been widely used in communications, localization and tracking. Theoretical modelling and practical applications often make a white noise assumption when dealing with RSS measurements. However, as we will show in this paper, the noise present in RSS measurements has time-dependency properties. In order to study these properties and provide noise characterisation, we propose to use the Allan Variance (AVAR) and show its better performance in comparison with direct analysis in the frequency domain using a periodogram. We further study the contribution of each component by testing real RSS data. Our results confirm that the noise associated with RSS signals is actually coloured and demonstrate the appropriateness of AVAR for the identification and characterisation of these components.},\n  keywords = {frequency-domain analysis;radio networks;signal processing;white noise;coloured noise analysis;received signal strength;Allan variance;wireless signals;white noise assumption;RSS measurements;time-dependency properties;AVAR;direct frequency domain analysis;periodogram;real RSS data testing;White noise;Noise measurement;Colored noise;Estimation;Wireless communication;Educational institutions;RSS;coloured noise;Allan variance;noise characterisation;802.11},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924713.pdf},\n}\n\n
The received signal strength (RSS) of wireless signals has been widely used in communications, localization and tracking. Theoretical modelling and practical applications often make a white noise assumption when dealing with RSS measurements. However, as we will show in this paper, the noise present in RSS measurements has time-dependency properties. In order to study these properties and provide noise characterisation, we propose to use the Allan Variance (AVAR) and show its better performance in comparison with direct analysis in the frequency domain using a periodogram. We further study the contribution of each component by testing real RSS data. Our results confirm that the noise associated with RSS signals is actually coloured and demonstrate the appropriateness of AVAR for the identification and characterisation of these components.
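A minimal sketch of the non-overlapping Allan variance used for this kind of noise characterisation. On pure white noise, AVAR falls as 1/m with the averaging length m, so flattening or growth at large m exposes coloured components (the synthetic mix below is illustrative):

```python
import numpy as np

def allan_variance(x, m):
    # Non-overlapping Allan variance at averaging length m samples:
    # AVAR(m) = 0.5 * E[(ybar_{k+1} - ybar_k)^2] over block averages ybar.
    n_blocks = len(x) // m
    ybar = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar)**2)

rng = np.random.default_rng(5)
white = rng.normal(size=2**14)
walk = np.cumsum(white) * 1e-3            # integrated (random-walk) component
rss_like = white + walk
for m in (1, 4, 16, 64, 256):
    print(m, allan_variance(rss_like, m)) # white noise alone falls as 1/m
```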
Covalsa: Covariance estimation from compressive measurements using alternating minimization. Bioucas-Dias, J. M.; Cohen, D.; and Eldar, Y. C. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 999-1003, Sep. 2014.
@InProceedings{6952339,\n  author = {J. M. Bioucas-Dias and D. Cohen and Y. C. Eldar},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Covalsa: Covariance estimation from compressive measurements using alternating minimization},\n  year = {2014},\n  pages = {999-1003},\n  abstract = {The estimation of covariance matrices from compressive measurements has recently attracted considerable research efforts in various fields of science and engineering. Owing to the small number of observations, the estimation of the covariance matrices is a severely ill-posed problem. This can be overcome by exploiting prior information about the structure of the covariance matrix. This paper presents a class of convex formulations and respective solutions to the high-dimensional covariance matrix estimation problem under compressive measurements, imposing either Toeplitz, sparseness, null-pattern, low rank, or low permuted rank structure on the solution, in addition to positive semi-definiteness. To solve the optimization problems, we introduce the Co-Variance by Augmented Lagrangian Shrinkage Algorithm (CoVALSA), which is an instance of the Split Augmented Lagrangian Shrinkage Algorithm (SALSA). We illustrate the effectiveness of our approach in comparison with state-of-the-art algorithms.},\n  keywords = {compressed sensing;covariance matrices;estimation theory;minimisation;SALSA;split augmented Lagrangian shrinkage algorithm;CoVALSA;covariance by augmented Lagrangian shrinkage algorithm;convex formulations;covariance matrices;alternating minimization;compressive measurements;covariance estimation;Covariance matrices;Estimation;Vectors;Optimization;Signal to noise ratio;Sparse matrices;Algorithm design and analysis;Covariance matrix estimation;compressive acquisition;alternating optimization;SALSA},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
The estimation of covariance matrices from compressive measurements has recently attracted considerable research efforts in various fields of science and engineering. Owing to the small number of observations, the estimation of the covariance matrices is a severely ill-posed problem. This can be overcome by exploiting prior information about the structure of the covariance matrix. This paper presents a class of convex formulations and respective solutions to the high-dimensional covariance matrix estimation problem under compressive measurements, imposing either Toeplitz, sparseness, null-pattern, low rank, or low permuted rank structure on the solution, in addition to positive semi-definiteness. To solve the optimization problems, we introduce the Co-Variance by Augmented Lagrangian Shrinkage Algorithm (CoVALSA), which is an instance of the Split Augmented Lagrangian Shrinkage Algorithm (SALSA). We illustrate the effectiveness of our approach in comparison with state-of-the-art algorithms.},
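This is not CoVALSA itself (which is a SALSA/ADMM instance), but two of the structural constraints it imposes, Toeplitz structure and positive semi-definiteness, can be illustrated with plain alternating projections:

```python
import numpy as np

def structured_cov(S, iters=50):
    # Alternating projections onto the Toeplitz set and the PSD cone.
    # Returns a Toeplitz PSD matrix near S; a didactic stand-in for the
    # constraint handling inside CoVALSA, not the algorithm itself.
    R = (S + S.conj().T) / 2
    n = R.shape[0]
    for _ in range(iters):
        T = np.zeros_like(R)
        for k in range(-n + 1, n):        # Toeplitz: average each diagonal
            T += np.mean(np.diagonal(R, k)) * np.eye(n, k=k)
        w, V = np.linalg.eigh(T)          # PSD: clip negative eigenvalues
        R = (V * np.clip(w, 0, None)) @ V.conj().T
    return R

rng = np.random.default_rng(8)
A = rng.normal(size=(6, 6))
print(structured_cov(A @ A.T + 0.5 * rng.normal(size=(6, 6))))
```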
Computational cost of Chirp Z-transform and Generalized Goertzel algorithm. Rajmic, P.; Prusa, Z.; and Wiesmeyr, C. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1004-1008, Sep. 2014.
@InProceedings{6952340,\n  author = {P. Rajmic and Z. Prusa and C. Wiesmeyr},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Computational cost of Chirp Z-transform and Generalized Goertzel algorithm},\n  year = {2014},\n  pages = {1004-1008},\n  abstract = {Two natural competitors in the area of narrow-band spectrum analysis, namely the Chirp Z-transform (CZT) and the Generalized Goertzel algorithm (GGA), are taken and compared, with the focus on the computational cost. We present results showing that for real-input data, the GGA is preferable over the CZT in a range of practical situations. This is shown both in theory and in practice.},\n  keywords = {computational complexity;spectral analysis;Z transforms;computational cost;chirp Z-transform;CZT;generalized Goertzel algorithm;GGA;narrow-band spectrum analysis;Signal processing algorithms;Chirp;Standards;Algorithm design and analysis;Computational efficiency;Spectral analysis;Discrete Fourier transforms;Generalized Goertzel Algorithm;Chirp Z-transform;spectrum analysis;computational complexity;comparison;speed},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925105.pdf},\n}\n\n
Two natural competitors in the area of narrow-band spectrum analysis, namely the Chirp Z-transform (CZT) and the Generalized Goertzel algorithm (GGA), are taken and compared, with the focus on the computational cost. We present results showing that for real-input data, the GGA is preferable over the CZT in a range of practical situations. This is shown both in theory and in practice.
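The GGA evaluates a single DTFT sample at an arbitrary, possibly non-integer bin k with one real multiply-accumulate per input sample, which is what drives the cost comparison for real-input data. A minimal sketch:

```python
import numpy as np

def goertzel_general(x, k):
    # Generalized Goertzel (Sysel & Rajmic, 2012): DTFT sample X(omega)
    # at omega = 2*pi*k/N for a possibly NON-integer bin index k.
    N = len(x)
    w = 2.0 * np.pi * k / N
    c = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for xn in x:                          # one real multiply per sample
        s = xn + c * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    y = s_prev - np.exp(-1j * w) * s_prev2
    return y * np.exp(-1j * w * (N - 1))  # phase-correct the final state

n = np.arange(256)
x = np.cos(2 * np.pi * 12.4 * n / 256)    # tone on a fractional bin
print(abs(goertzel_general(x, 12.4)))
print(abs(np.sum(x * np.exp(-2j * np.pi * 12.4 * n / 256))))  # the two agree
```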
High resolution stacking of seismic data. Covre, M. R.; Barros, T.; and da Rocha Lopes, R. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1009-1013, Sep. 2014.
@InProceedings{6952341,\n  author = {M. R. Covre and T. Barros and R. {da Rocha Lopes}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {High resolution stacking of seismic data},\n  year = {2014},\n  pages = {1009-1013},\n  abstract = {The stacking procedure is a key part of the seismic processing. Historically, this part of the processing is done using seismic acquisition data (traces) with common features such as the common midpoint between source and receiver. These traces are combined to construct an ideal trace where the source and receiver are virtually placed at the same place. The traditional stacking only performs a simple sum of the traces. This work proposes a different way to perform the stacking, which uses the singular value decomposition of a data matrix to create an eigenimage where the noise and interferences are attenuated. The proposed technique is called Eigenstacking. Results of the stacking and eigenstacking are compared using synthetic and real data.},\n  keywords = {eigenvalues and eigenfunctions;geophysical signal processing;seismology;singular value decomposition;high resolution stacking;seismic data;singular value decomposition;data matrix;eigenstacking;Stacking;Noise;Receivers;Matrix decomposition;Proposals;Equations;Mathematical model;SVD;stacking;high resolution;seismic},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925141.pdf},\n}\n\n
Canceling stationary interference signals exploiting secondary data. Swärd, J.; and Jakobsson, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1014-1018, Sep. 2014.
@InProceedings{6952342,\n  author = {J. Swärd and A. Jakobsson},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Canceling stationary interference signals exploiting secondary data},\n  year = {2014},\n  pages = {1014-1018},\n  abstract = {In this paper, we propose a novel interference cancellation method that exploits secondary data to estimate stationary interference components present in both the primary and the secondary data sets, thereby allowing for the removal of such interference from the data sets, even when these components share frequencies with the signal of interest. The algorithm estimates the present interference components one frequency at a time, thus enabling a computationally efficient implementation that requires only a limited amount of secondary data. Numerical examples using both simulated and measured data show that the proposed method offers a notable gain in performance as compared to other interference cancellation methods.},\n  keywords = {interference (signal);interference suppression;stationary interference signal cancellation method;stationary interference component estimation;secondary data sets;primary data sets;interference removal;signal of interest;Frequency measurement;Interference cancellation;Noise;Data models;Noise measurement;Signal processing algorithms;Interference cancellation;Radio frequency spectroscopy;Signal of interest-free data},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925279.pdf},\n}\n\n
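The following is only a loose illustration of the general idea (estimate stationary components from signal-free secondary data, then remove them from the primary data); it is not the paper's frequency-by-frequency algorithm, and the names and the crude peak-picking step are our assumptions.

import numpy as np

def cancel_stationary(primary, secondary, n_freqs=3):
    # Pick the strongest frequencies in the secondary (signal-free) data,
    # then least-squares fit and subtract sinusoids at those frequencies.
    N = len(secondary)
    spectrum = np.abs(np.fft.rfft(secondary))
    bins = np.argsort(spectrum)[-n_freqs:]          # dominant interference bins
    freqs = bins / N                                # cycles per sample
    n = np.arange(len(primary))
    C = np.exp(2j * np.pi * np.outer(n, freqs))
    A = np.hstack([C.real, C.imag])                 # real regression basis
    coef, *_ = np.linalg.lstsq(A, primary, rcond=None)
    return primary - A @ coef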
Multitaper estimation of the coherence spectrum in low SNR. Brynolfsson, J.; and Hansson-Sandsten, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1019-1023, Sep. 2014.
@InProceedings{6952343,\n  author = {J. Brynolfsson and M. Hansson-Sandsten},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Multitaper estimation of the coherence spectrum in low SNR},\n  year = {2014},\n  pages = {1019-1023},\n  abstract = {A pseudo coherence estimate using multitapers is presented. The estimate has better localization for sinusoids and is shown to have lower variance for disturbances compared to the usual coherence estimator. This makes it superior in terms of finding coherent frequencies between two sinusoidal signals, even when observed in low SNR. Different sets of multitapers are investigated and the weights of the final coherence estimate are adjusted for a low-biased estimate of a single sinusoid. The proposed method is more computationally efficient than data-dependent methods, while still giving comparable results.},\n  keywords = {signal processing;multitaper estimation;pseudo coherence estimate;sinusoidal signals;SNR;Coherence;Estimation;Signal to noise ratio;Standards;Equations;Mathematical model;coherence;cross-spectrum;multitaper;sinusoids},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925367.pdf},\n}\n\n
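A minimal multitaper coherence sketch using DPSS (Slepian) tapers with uniform weights; the paper's contribution is precisely the non-uniform, low-bias weighting, which is not reproduced here.

import numpy as np
from scipy.signal.windows import dpss

def multitaper_coherence(x, y, NW=3, K=5):
    # Average tapered cross- and auto-spectra over K Slepian tapers, then
    # form the magnitude-squared coherence; weights here are uniform (1/K).
    tapers = dpss(len(x), NW, K)              # (K, N) orthonormal tapers
    X = np.fft.rfft(tapers * x, axis=1)
    Y = np.fft.rfft(tapers * y, axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)
    Sxx = (np.abs(X) ** 2).mean(axis=0)
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)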
Novel radar signal models using nonlinear frequency modulation. Alphonse, S.; and Williamson, G. A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1024-1028, Sep. 2014.
@InProceedings{6952344,\n  author = {S. Alphonse and G. A. Williamson},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Novel radar signal models using nonlinear frequency modulation},\n  year = {2014},\n  pages = {1024-1028},\n  abstract = {Two new radar signal models using nonlinear frequency modulation are proposed and investigated with respect to enhancing the target's range estimation and reducing the sidelobe level. The performance of the proposed signal models is compared to the currently popular linear and nonlinear frequency modulation signal models. The Cramer Rao Lower Bound along with main lobe width and the peak to sidelobe ratio are used for comparing the signal models to show that better range accuracy and smaller sidelobes can be achieved with the proposed signal models.},\n  keywords = {FM radar;matched filters;radar signal processing;NLFM signals;matched filter;Cramer Rao lower bound;nonlinear frequency modulation signal model;sidelobe level;target range estimation;radar signal models;Frequency modulation;Radar;Time-frequency analysis;Brain models;Bandwidth;Computational modeling;frequency modulation;NLFM;matched filter;radar;CRLB;PSLR},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925617.pdf},\n}\n\n
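For context, the sketch below generates a tangent-based NLFM pulse, one common nonlinear-FM family; it is not either of the two models proposed in the paper, and the parameter names are ours.

import numpy as np

def tan_nlfm_pulse(T, B, fs, gamma=1.0):
    # Instantaneous frequency f(u) = (B/2) * tan(gamma*u) / tan(gamma),
    # u in [-1, 1), sweeps +/- B/2 while slowing near the band edges,
    # which is what lowers the sidelobes of the matched-filter output.
    t = np.arange(0.0, T, 1.0 / fs)
    u = 2.0 * t / T - 1.0
    f_inst = (B / 2.0) * np.tan(gamma * u) / np.tan(gamma)
    phase = 2.0 * np.pi * np.cumsum(f_inst) / fs    # numerical phase integral
    return np.exp(1j * phase)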
Enhanced joint data detection and turbo MAP channel estimation using randomly rotated constellations. Missaoui, N.; Kammoun, I.; and Siala, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1029-1033, Sep. 2014.
@InProceedings{6952345,\n  author = {N. Missaoui and I. Kammoun and M. Siala},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Enhanced joint data detection and turbo map channel estimation using randomly rotated constellations},\n  year = {2014},\n  pages = {1029-1033},\n  abstract = {In this paper, we propose joint data detection and turbo maximum-a-posteriori (MAP) time-varying channel estimation in Slotted ALOHA MIMO systems using rotated constellations diversity. Our main idea is to use a randomly rotated and unrotated constellation, together with coding and interleaving, for each user in order to increase the diversity order and to improve the collision resolution at the receiver side. The proposed burst-by-burst turbo-MAP channel estimator is based on the Space-Alternating Generalized Expectation-maximization (SAGE) algorithm. Our proposed approach allows an efficient separation of colliding packets even if they are received with equal powers. Simulation results are given to support our claims.},\n  keywords = {channel estimation;expectation-maximisation algorithm;MIMO communication;enhanced joint data detection;turbo map channel estimation;randomly rotated constellations;turbo maximum-a-posteriori;MAP;time varying channel estimation;Slotted ALOHA MIMO systems;rotated constellations diversity;collision resolution;burst-by-burst turbo-MAP channel estimator;SAGE algorithm;space alternating expectation maximization;Channel estimation;Vectors;Bit error rate;MIMO;Receiving antennas;Silicon;Time variable channel;Slotted ALOHA;Collision resolution;Maximum a-posteriori;SAGE},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926533.pdf},\n}\n\n
Direction-of-arrival estimation using multi-frequency co-prime arrays. BouDaher, E.; Jia, Y.; Ahmad, F.; and Amin, M. G. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1034-1038, Sep. 2014.
@InProceedings{6952346,\n  author = {E. BouDaher and Y. Jia and F. Ahmad and M. G. Amin},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Direction-of-arrival estimation using multi-frequency co-prime arrays},\n  year = {2014},\n  pages = {1034-1038},\n  abstract = {In this paper, we present a new method for increasing the number of resolvable sources in direction-of-arrival estimation using co-prime arrays. This is achieved by utilizing multiple frequencies to fill in the missing elements in the difference coarray of the co-prime array corresponding to the reference frequency. For high signal-to-noise ratio, the multi-frequency approach effectively utilizes all of the degrees-of-freedom offered by the coarray, provided that the sources have proportional spectra. The performance of the proposed method is evaluated through numerical simulations.},\n  keywords = {antenna arrays;direction-of-arrival estimation;direction-of-arrival estimation;multifrequency co-prime arrays;difference coarray;signal-to-noise ratio;numerical simulations;Direction-of-arrival estimation;Correlation;Vectors;Estimation;Multiple signal classification;Sensor arrays;Co-prime arrays;DOA estimation;difference coarray;multiple frequencies},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924197.pdf},\n}\n\n
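The difference-coarray bookkeeping behind this abstract can be sketched as follows, with positions in units of half-wavelength at the reference frequency; the (4, 5) pair is just an example choice.

import numpy as np

def coprime_positions(M, N):
    # Standard co-prime pair: M sensors at spacing N and N sensors at
    # spacing M (unit = half-wavelength), sharing the first sensor.
    return np.unique(np.r_[N * np.arange(M), M * np.arange(N)])

pos = coprime_positions(4, 5)
lags = np.unique((pos[:, None] - pos[None, :]).ravel())   # difference coarray
full = np.arange(lags.min(), lags.max() + 1)
holes = np.setdiff1d(full, lags)   # missing lags that additional frequencies could fill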
DOA estimation and signal separation using antenna with time varying response. Dvorkind, T. G.; and Greenberg, E. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1039-1043, Sep. 2014.
@InProceedings{6952347,\n  author = {T. G. Dvorkind and E. Greenberg},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {DOA estimation and signal separation using antenna with time varying response},\n  year = {2014},\n  pages = {1039-1043},\n  abstract = {In this paper we suggest a new algorithm for Direction of Arrival (DOA) estimation and signal separation using a novel antenna element with a time variant radiation pattern. With the suggested approach, signals arriving from various spatial directions are acquired by this sensor with different time varying signatures, due to the antenna's continuously changing radiation pattern. We show that if the radiation pattern is varied in a periodic manner and sufficiently fast compared to the bandwidth of the received signals, then multiple sources of radiation can be detected and their direction of arrival estimated. The suggested approach is a novel alternative to array based signal processing, as it allows spatial processing tasks to be performed without multiple sensing elements.},\n  keywords = {antenna radiation patterns;array signal processing;direction-of-arrival estimation;source separation;DOA estimation;signal separation;time varying response;direction of arrival estimation;antenna element;time variant radiation pattern;spatial directions;sensor;time varying signatures;antenna radiation pattern;received signal bandwidth;array based signal processing;spatial processing tasks;multiple sensing elements;Direction-of-arrival estimation;Antenna radiation patterns;Dipole antennas;Arrays;Estimation;Source separation;DOA;MUSIC;reconfigurable antenna},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569895833.pdf},\n}\n\n
Three CS-based beamformers for single snapshot DOA estimation. Fortunati, S.; Grasso, R.; Gini, F.; Greco, M. S.; and LePage, K. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1044-1048, Sep. 2014.
@InProceedings{6952348,\n  author = {S. Fortunati and R. Grasso and F. Gini and M. S. Greco and K. LePage},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Three CS-based beamformers for single snapshot DOA estimation},\n  year = {2014},\n  pages = {1044-1048},\n  abstract = {In this work, the estimation of the Directions of Arrival (DOAs) of multiple source signals from a single observation vector is considered. In particular, the estimation, detection and super-resolution performance of three algorithms based on the theory of Compressed Sensing (the classical l1-minimization or LASSO, the fast smooth l0-minimization, and the SPICE algorithm) is analyzed and compared with the classical Fourier beamformer. This comparison is carried out using both simulated data and real sonar data.},\n  keywords = {array signal processing;compressed sensing;direction-of-arrival estimation;Fourier transforms;minimisation;sonar;real sonar data;simulated data;Fourier beamformer;SPICE algorithm;smooth l0-minimization;LASSO;classical l1-minimization;compressed sensing;super-resolution performance;single observation vector;multiple source signals;three CS-based beamformers;single snapshot DOA estimation;directions of arrival estimation;SPICE;Direction-of-arrival estimation;Estimation;Spatial resolution;Signal resolution;Signal to noise ratio;Arrays;Compressive Sensing;DOA estimation;Fourier Beamformer;Super-resolution;Sonar},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923179.pdf},\n}\n\n
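A toy single-snapshot l1 beamformer in the spirit of the first of the three methods, for a uniform linear array; stacking real and imaginary parts with a shared real coefficient vector (which implicitly assumes real source amplitudes), and all parameter values, are our simplifications rather than the paper's formulation.

import numpy as np
from sklearn.linear_model import Lasso

def lasso_doa(y, sensor_pos, grid_deg, lam=0.05):
    # y: (M,) complex single snapshot; sensor_pos in half-wavelengths.
    theta = np.deg2rad(np.asarray(grid_deg))
    A = np.exp(1j * np.pi * np.outer(sensor_pos, np.sin(theta)))  # steering grid
    A_ri = np.vstack([A.real, A.imag])
    y_ri = np.concatenate([y.real, y.imag])
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(A_ri, y_ri)
    return fit.coef_        # sparse spatial spectrum over grid_deg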
Robust high-resolution DOA estimation with array pre-calibration. Weiss, C.; and Zoubir, A. M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1049-1052, Sep. 2014.
@InProceedings{6952349,\n  author = {C. Weiss and A. M. Zoubir},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Robust high-resolution DOA estimation with array pre-calibration},\n  year = {2014},\n  pages = {1049-1052},\n  abstract = {A robust high-resolution technique for DOA estimation in the presence of array imperfections such as sensor position errors and non-uniform sensor gain is presented. When the basis matrix of a sparse DOA estimation framework is derived from an ideal model, array errors cannot be handled, which causes performance deterioration. Array pre-calibration via robust steering vector estimation yields an improved overcomplete basis matrix. It alleviates the delicate problem of selecting the regularization parameter of the optimization problem and improves the performance significantly. Thus, closely spaced sources can be resolved in the presence of severe array imperfections, even at low SNRs.},\n  keywords = {array signal processing;direction-of-arrival estimation;optimisation;sensor arrays;sensor placement;vectors;optimization problem;direction-of-arrival estimation;regularization parameter;robust high-resolution estimation;array pre-calibration;robust steering vector estimation;sparse DOA estimation;nonuniform sensor gain;sensor position errors;array imperfections;Arrays;Robustness;Signal to noise ratio;Estimation;Direction-of-arrival estimation;Vectors;sparse regularization;array imperfections;robust DOA estimation;array calibration},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925283.pdf},\n}\n\n
Joint DOA and multi-pitch estimation via block sparse dictionary learning. Kronvall, T.; Adalbjörnsson, S. I.; and Jakobsson, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1053-1057, Sep. 2014.
@InProceedings{6952350,\n  author = {T. Kronvall and S. I. Adalbjörnsson and A. Jakobsson},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Joint DOA and multi-pitch estimation via block sparse dictionary learning},\n  year = {2014},\n  pages = {1053-1057},\n  abstract = {In this paper, we introduce a novel sparse method for joint estimation of the directions of arrival (DOAs) and pitches of a set of multi-pitch signals impinging on a sensor array. Extending earlier approaches, we formulate a novel dictionary learning framework from which an estimate is formed without making assumptions on the model orders. The proposed method alternates between a block sparse approach to estimate the pitches, using an alternating direction method of multipliers framework, and a nonlinear least squares approach to estimate the DOAs. The preferable performance of the proposed algorithm, as compared to earlier methods, is shown using numerical examples.},\n  keywords = {array signal processing;direction-of-arrival estimation;least squares approximations;joint DOA-multipitch estimation;block sparse dictionary learning;direction-of-arrival estimation;multipitch signals;sensor array;dictionary learning framework;model orders;block sparse approach;pitch estimation;alternating direction method;nonlinear least square approach;Dictionaries;Estimation;Direction-of-arrival estimation;Joints;Arrays;Harmonic analysis;Acoustics;multi-pitch estimation;group sparsity;block sparsity;dictionary learning;ADMM;direction-of-arrival},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925091.pdf},\n}\n\n
Per-Pixel Mirror-Based Acquisition Method for video compressive sensing. Lima, J. A.; Miosso, C. J.; and Farias, M. C. Q. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1058-1062, Sep. 2014.
@InProceedings{6952351,\n  author = {J. A. Lima and C. J. Miosso and M. C. Q. Farias},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Per-Pixel Mirror-Based Acquisition Method for video compressive sensing},\n  year = {2014},\n  pages = {1058-1062},\n  abstract = {High-speed videos are essential to many types of scientific investigations. However, using high-speed cameras to directly acquire these videos is prohibitively expensive in many circumstances. This paper proposes a compressive sensing-based method for obtaining high-speed videos using low-speed cameras, which we call the Per-Pixel Mirror-Based Acquisition Method. The proposed technique is light efficient and generates time-independent samples. We compare the reconstruction results of the proposed technique with other techniques available in the literature in terms of signal-to-error (SER) ratios, for natural and synthetic videos. For the tested real videos, the proposed method provided an improvement in SER ranging from 3 to 28 dB, with respect to known techniques such as the flutter shutter and the per-pixel shutter. The improvement is higher for higher levels of sparsity in the transform representations used and for lower sub-sampling rates.},\n  keywords = {compressed sensing;video cameras;video signal processing;per-pixel shutter;flutter shutter;synthetic videos;natural videos;SER ratios;signal-to-error ratios;reconstruction results;time-independent samples;low-speed cameras;high-speed cameras;high-speed videos;per-pixel mirror-based acquisition;video compressive sensing;Cameras;Mirrors;Image reconstruction;Optimization;Compressed sensing;Sensors;TV;compressive sensing;computational camera;high-speed imaging;video acquisition},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925429.pdf},\n}\n\n
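The generic per-pixel coded-exposure measurement model underlying such acquisition schemes fits in two lines; how the masks are realized optically (mirrors, shutters) is what distinguishes the methods and is not captured by this sketch.

import numpy as np

def coded_exposure_frame(video, masks):
    # video, masks: (T, H, W). Each low-speed measurement is a per-pixel
    # weighted sum of T high-speed frames under a binary shutter pattern.
    return (masks * video).sum(axis=0)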
Accurate image registration using approximate Strang-Fix and an application in super-resolution. Scholefield, A.; and Dragotti, P. L. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1063-1067, Sep. 2014.
@InProceedings{6952352,\n  author = {A. Scholefield and P. L. Dragotti},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Accurate image registration using approximate Strang-Fix and an application in super-resolution},\n  year = {2014},\n  pages = {1063-1067},\n  abstract = {Accurate registration is critical to most multi-channel signal processing setups, including image super-resolution. In this paper we use modern sampling theory to propose a new robust registration algorithm that works with arbitrary sampling kernels. The algorithm accurately approximates continuous-time Fourier coefficients from discrete-time samples. These Fourier coefficients can be used to construct an over-complete system, which can be solved to approximate translational motion at around one hundredth of a pixel accuracy. The over-completeness of the system provides robustness to noise and other modelling errors. For example we show an image registration result for images that have slightly different backgrounds, due to a viewpoint translation. Our previous registration techniques, based on similar sampling theory, can provide a similar accuracy but not under these more general conditions. Simulation results demonstrate the accuracy and robustness of the approach and demonstrate the potential applications in image super-resolution.},\n  keywords = {Fourier analysis;image resolution;sampling methods;accurate image registration;multichannel signal processing;image superresolution;sampling theory;robust registration algorithm;arbitrary sampling kernels;continuous-time Fourier coefficients;discrete-time samples;translational motion;viewpoint translation;registration techniques;Kernel;Approximation methods;Image resolution;Image registration;Registers;Robustness;Accuracy;Image registration;sampling methods;super-resolution;translational motion},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925475.pdf},\n}\n\n
Numerically stable estimation of scene flow independent of brightness and regularizer weights. Kameda, Y.; Matsuda, I.; and Itoh, S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1068-1072, Sep. 2014.
@InProceedings{6952373,\n  author = {Y. Kameda and I. Matsuda and S. Itoh},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Numerically stable estimation of scene flow independent of brightness and regularizer weights},\n  year = {2014},\n  pages = {1068-1072},\n  abstract = {In video images, apparent motions can be computed using optical flow estimation. However, estimation of the depth directional velocity is difficult using only a single viewpoint. Scene flows (SF) are three-dimensional (3D) vector fields with apparent motion and a depth directional velocity field, which are computed from stereo video. The 3D motion of objects and a camera can be estimated using SF, thus it is used for obstacle detection and self-localization. SF estimation methods require the numerical computation of nonlinear equations to prevent over-smoothing due to the regularization of SF. Since the numerical stability depends on the image and regularizer weights, it is impossible to determine appropriate values for the weights. Thus, we propose a method that is independent of the images and weights, which simplifies previous methods and derives the numerical stability conditions, thereby facilitating the estimation of suitable weights. We also evaluated the performance of the proposed method.},\n  keywords = {image motion analysis;image sequences;nonlinear equations;video signal processing;optical flow estimation;video images;depth directional velocity;single viewpoint;scene flows;three-dimensional vector fields;obstacle detection;self-localization;SF estimation methods;nonlinear equations;numerical stability;Estimation;Optical imaging;Brightness;Numerical stability;Three-dimensional displays;Cameras;Nonlinear optics;Disparity;numerical stability;scene flow;stereo;variational method},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925129.pdf},\n}\n\n
Online learning partial least squares regression model for univariate response data. Qin, L.; Snoussi, H.; and Abdallah, F. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1073-1077, Sep. 2014.
@InProceedings{6952374,\n  author = {L. Qin and H. Snoussi and F. Abdallah},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Online learning partial least squares regression model for univariate response data},\n  year = {2014},\n  pages = {1073-1077},\n  abstract = {Partial least squares (PLS) analysis has attracted increasing attention in image and video processing. Currently, most applications employ batch-form PLS methods, which require maintaining previous training data and re-training the model when new observations are available. In this work, we propose a novel approach that is able to update the PLS model in an online fashion. The proposed approach has the appealing property of constant computational complexity and constant space complexity. Two extensions are proposed as well. First, we extend the method to be able to update the model when some training samples are removed. Second, we develop a weighted version, where different weights can be assigned to the data blocks when updating the model. Experiments on real image data confirmed the effectiveness of the proposed methods.},\n  keywords = {computational complexity;image processing;learning (artificial intelligence);least squares approximations;online learning partial least squares regression model;univariate response data;image processing;video processing;computational complexity;space complexity;data blocks;Mathematical model;Computational modeling;Data models;Training;Matrix decomposition;Equations;Algorithm design and analysis;Partial Least Squares Analysis;image processing;online learning},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924781.pdf},\n}\n\n
Smoke detection using spatio-temporal analysis, motion modeling and dynamic texture recognition. Barmpoutis, P.; Dimitropoulos, K.; and Grammalidis, N. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1078-1082, Sep. 2014.
@InProceedings{6952375,\n  author = {P. Barmpoutis and K. Dimitropoulos and N. Grammalidis},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Smoke detection using spatio-temporal analysis, motion modeling and dynamic texture recognition},\n  year = {2014},\n  pages = {1078-1082},\n  abstract = {In this paper, we propose a novel method for video-based smoke detection, which aims to discriminate smoke from smoke-colored moving objects by applying spatio-temporal analysis, smoke motion modeling and dynamic texture recognition. Initially, candidate smoke regions in a frame are identified using background subtraction and color analysis based on the HSV model. Subsequently, spatio-temporal smoke modeling consisting of spatial energy analysis and spatio-temporal energy analysis is applied in the candidate regions. In addition, histograms of oriented gradients and optical flows (HOGHOFs) are computed to take into account both appearance and motion information, while dynamic texture recognition is applied in each candidate region using linear dynamical systems and a bag of systems approach. Dynamic score combination by mean value is finally used to determine whether there is smoke or not in each candidate image region. Experimental results presented in the paper show the great potential of the proposed approach.},\n  keywords = {image colour analysis;image motion analysis;image recognition;image sequences;image texture;smoke detectors;spatio-temporal analysis;dynamic texture recognition;video-based smoke detection method;smoke-colored moving objects;smoke motion modeling;background subtraction;color analysis;HSV model;spatial energy analysis;spatio-temporal energy analysis;HOGHOFs;histograms of oriented gradients and optical flows;linear dynamical systems;dynamic score combination;mean value;candidate image region;bag of system approach;Dynamics;Analytical models;Image color analysis;Histograms;Training;Computational modeling;Feature extraction;Smoke detection;histograms of oriented gradients;histograms of oriented optical flow;dynamic textures analysis;spatio-temporal modeling},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925173.pdf},\n}\n\n
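The candidate-region stage (moving pixels that are also smoke-colored) could look roughly like this; the thresholds and the MOG2 background model are illustrative choices on our part, not the paper's.

import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def candidate_smoke_mask(frame_bgr, sat_max=60, val_min=80):
    # Moving pixels that are also smoke-colored: low saturation and
    # medium-to-high value in the HSV color space.
    motion = bg.apply(frame_bgr)
    motion = cv2.threshold(motion, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow labels
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    grayish = cv2.inRange(hsv, (0, 0, val_min), (179, sat_max, 255))
    return cv2.bitwise_and(motion, grayish)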
Robust hypothesis testing with squared Hellinger distance. Gül, G.; and Zoubir, A. M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1083-1087, Sep. 2014.
@InProceedings{6952376,\n  author = {G. Gül and A. M. Zoubir},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Robust hypothesis testing with squared Hellinger distance},\n  year = {2014},\n  pages = {1083-1087},\n  abstract = {We extend an earlier work of the same authors, which proposes a minimax robust hypothesis testing strategy between two composite hypotheses based on a squared Hellinger distance. We show that without any further restrictions the former four non-linear equations in four parameters, which have to be solved to design the robust test, can be reduced to two equations in two parameters. Additionally, we show that the same equations can be combined into a single equation if the nominal probability density functions satisfy the symmetry condition. The parameters controlling the degree of robustness are bounded from above depending on the nominal distributions and shown to be determined via solving a polynomial equation of degree two. Experiments justify the benefits of the proposed contributions.},\n  keywords = {probability;statistical testing;squared Hellinger distance;minimax robust hypothesis testing strategy;composite hypothesis;nonlinear equations;probability density function;symmetry condition;Equations;Robustness;Mathematical model;Testing;Probability distribution;Complexity theory;Probability density function;Detection;hypothesis testing;robustness},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926589.pdf},\n}\n\n
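For reference, with one common normalization the squared Hellinger distance between densities p_0 and p_1 is

\[
H^2(p_0, p_1) \;=\; \frac{1}{2}\int \left(\sqrt{p_0(x)} - \sqrt{p_1(x)}\right)^{2} dx
\;=\; 1 - \int \sqrt{p_0(x)\,p_1(x)}\, dx ,
\]

where the second integral is the Bhattacharyya coefficient. Minimax tests of this type replace each nominal density by a least favorable density within a neighborhood of the form {p : H^2(p, p_j) <= epsilon_j}; some authors omit the factor 1/2, so the paper's convention should be checked against its text.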
Exploiting time and frequency information for Delay/Doppler altimetry. Halimi, A.; Mailhes, C.; Tourneret, J.-Y.; Moreau, T.; and Boy, F. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1088-1092, Sep. 2014.
@InProceedings{6952377,\n  author = {A. Halimi and C. Mailhes and J.-Y. Tourneret and T. Moreau and F. Boy},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Exploiting time and frequency information for Delay/Doppler altimetry},\n  year = {2014},\n  pages = {1088-1092},\n  abstract = {Delay/Doppler radar altimetry is a new technology that has been receiving increasing interest, especially since the launch of Cryosat-2 in 2010, the first altimeter using this technique. The Delay/Doppler technique aims at reducing the measurement noise and increasing the along-track resolution in comparison with conventional pulse limited altimetry. A new semi-analytical model with five parameters has been recently introduced for this new technology. However, two of these parameters are highly correlated, resulting in poor estimation performance when estimating all parameters. This paper proposes a new strategy improving estimation performance for delay/Doppler altimetry. The proposed strategy exploits all the information contained in the delay/Doppler domain. A comparison with other classical algorithms (using the temporal samples only) shows the gain in estimation performance obtained when using both temporal and Doppler data.},\n  keywords = {Doppler radar;height measurement;delay radar altimetry;Doppler radar altimetry;altimeter;temporal data;Doppler data;Doppler effect;Delays;Estimation;Altimetry;Antennas;Logic gates;Time-frequency analysis;SAR altimetry;delay/Doppler altimetry;least squares estimation;antenna mispointing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923317.pdf},\n}\n\n
Identification of power line outages. Maymon, S.; and Eldar, Y. C. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1093-1097, Sep. 2014.
@InProceedings{6952378,\n  author = {S. Maymon and Y. C. Eldar},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Identification of power line outages},\n  year = {2014},\n  pages = {1093-1097},\n  abstract = {This paper considers the problem of identifying power line outages throughout an electric interconnection based on changes in phasor angles observed at a limited number of buses. In existing approaches for solving the line outage identification problem, the unobserved phasor angle data is ignored and identification is based on the observed phasor angles extracted from the data. We propose, instead, a least-squares approach for estimating the unobserved phasor angles, which is shown to yield a solution to the line outage identification problem that is equivalent to the solution obtained with existing approaches. This equivalence suggests an implementation of the solution to the line outage identification problem that is computationally more efficient than previous methods. A natural extension of the least-squares formulation leads to a generalization of the line outages identification problem in which the grid parameters are unknown.},\n  keywords = {least squares approximations;phasor measurement;power line outages identification;electric interconnection;least-squares approach;unobserved phasor angles;phasor measurement units;Vectors;Bismuth;Transmission line matrix methods;Mathematical model;Null space;Optimization;Data models;Power line outages;phasor measurement units;sparsity;compressive sampling},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926679.pdf},\n}\n\n
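Under the DC power-flow approximation, a single line outage changes the bus angles along a line-specific signature, which suggests the following matched-signature sketch; the model, names, and scoring rule are our illustration, not the paper's least-squares formulation.

import numpy as np

def identify_outage(B_inv, incidence, dtheta_obs, obs_idx):
    # B_inv: inverse of the reduced susceptance matrix (slack bus removed);
    # incidence: (n_lines, n_buses) rows with +1/-1 at each line's end buses.
    # A single outage of line l changes angles proportionally to B_inv @ a_l,
    # so each line is scored by correlation over the observed buses only.
    scores = []
    for a in incidence:
        sig = (B_inv @ a)[obs_idx]
        sig = sig / (np.linalg.norm(sig) + 1e-12)
        scores.append(abs(sig @ dtheta_obs))
    return int(np.argmax(scores))       # index of the most likely outaged line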
Nonparametric density estimation with region-censored data. Bennani, Y.; Pronzato, L.; and Rendas, M. J. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1098-1102, Sep. 2014.
@InProceedings{6952379,\n  author = {Y. Bennani and L. Pronzato and M. J. Rendas},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Nonparametric density estimation with region-censored data},\n  year = {2014},\n  pages = {1098-1102},\n  abstract = {The paper proposes a new Maximum Entropy estimator for non-parametric density estimation from region censored observations in the context of population studies, where standard Maximum Likelihood is affected by over-fitting and non-uniqueness problems. The link between Maximum Entropy and Maximum Likelihood estimation for the exponential family has often been invoked in the literature. When, as it is the case for censored observations, the constraints on the Maximum Entropy estimator are derived from independent observations of a set of non-linear functions, this link is lost, increasing the difference between the two criteria. By combining the two criteria we propose a novel density estimator that is able to overcome the singularities of the Maximum Likelihood estimator while maintaining a good fit to the observed data, and illustrate its behavior on real data (hyperbaric diving).},\n  keywords = {maximum entropy methods;maximum likelihood estimation;nonparametric statistics;hyperbaric diving;nonlinear functions;maximum likelihood estimation;maximum entropy estimation;region-censored data;nonparametric density estimation;Entropy;Sociology;Maximum likelihood estimation;Algorithm design and analysis;Context;Censored observations;non-parametric maximum likelihood;constrained maxent;regularisation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925031.pdf},\n}\n\n
Uniformly most powerful detection for integrate and fire time encoding. Fillatre, L.; Nikiforov, I.; Antonini, M.; and Atto, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1103-1107, Sep. 2014.
@InProceedings{6952380,\n  author = {L. Fillatre and I. Nikiforov and M. Antonini and A. Atto},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Uniformly most powerful detection for integrate and fire time encoding},\n  year = {2014},\n  pages = {1103-1107},\n  abstract = {A time encoding of a random signal is a representation of this signal as a random sequence of strictly increasing times. The goal of this paper is to derive the rule for testing the mean value of a Gaussian signal from asynchronous samples given by the Integrate and Fire (IF) time encoding. The optimal likelihood ratio test is calculated and its statistical performance is compared with a synchronous test which is based on regular samples of the Gaussian signal. Since the IF-sample-based detector takes a decision at a random time, the regular-sample-based test exploits a random number of samples. The time encoding significantly reduces the number of samples needed to satisfy a prescribed probability of detection.},\n  keywords = {Gaussian processes;signal representation;random signal;signal representation;Gaussian signal;integrate and fire time encoding;IF time encoding;Encoding;Delays;Fires;Circuits and systems;Testing;Energy consumption;Probability;Likelihood ratio test;Time encoding;Random sampling;Integrate and fire sampler},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925395.pdf},\n}\n\n
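A discrete-time sketch of the IF time encoder itself, the sampler on which the test is built; the threshold/bias parameters and the reset-to-zero convention are our assumptions.

import numpy as np

def integrate_and_fire(x, dt, threshold, bias=0.0):
    # Integrate (x + bias); each time |integral| crosses the threshold,
    # emit the crossing time and its sign, then reset the integrator.
    times, signs = [], []
    acc = 0.0
    for n, v in enumerate(x):
        acc += (v + bias) * dt
        if abs(acc) >= threshold:
            times.append((n + 1) * dt)
            signs.append(1.0 if acc > 0 else -1.0)
            acc = 0.0
    return np.array(times), np.array(signs)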
\n \n\n \n \n \n \n \n \n A geometrical approach to room compensation for sound field rendering applications.\n \n \n \n \n\n\n \n Canclini, A.; Marković, D.; Bianchi, L.; Antonacci, F.; Sarti, A.; and Tubaro, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1108-1112, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952381,\n  author = {A. Canclini and D. Marković and L. Bianchi and F. Antonacci and A. Sarti and S. Tubaro},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A geometrical approach to room compensation for sound field rendering applications},\n  year = {2014},\n  pages = {1108-1112},\n  abstract = {In this paper we propose a method for reducing the impact of room reflections in sound field rendering applications. Our method is based on the modeling of the acoustic paths (direct and reflected) from each of the loudspeakers of the rendering system, and a set of control points in the listening area. From such models we derive a propagation matrix and compute its least-squares inversion. Due to its relevant impact on the spatial impression, we focus on the early reflections part of the Room Impulse Response, which is conveniently estimated using the fast beam tracing modeling engine. A least squares problem is formulated in order to derive the compensation filter. We also demonstrate the robustness of the proposed solution against errors in geometric measurement of the hosting environment.},\n  keywords = {acoustic field;least squares approximations;loudspeakers;transient response;sound field rendering applications;geometrical approach;room compensation;room reflections;loudspeakers;control points;listening area;propagation matrix;least squares inversion;spatial impression;room impulse response;fast beam tracing modeling engine;compensation filter;Rendering (computer graphics);Acoustics;Loudspeakers;Attenuation;Robustness;Arrays;Computational modeling;Soundfield rendering;geometrical acoustics;room reflections;room compensation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921737.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we propose a method for reducing the impact of room reflections in sound field rendering applications. Our method is based on modeling the acoustic paths (direct and reflected) from each loudspeaker of the rendering system to a set of control points in the listening area. From such models we derive a propagation matrix and compute its least-squares inversion. Due to its strong impact on the spatial impression, we focus on the early-reflection part of the Room Impulse Response, which is conveniently estimated using a fast beam tracing modeling engine. A least-squares problem is formulated in order to derive the compensation filter. We also demonstrate the robustness of the proposed solution against errors in the geometric measurement of the hosting environment.\n
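A minimal sketch of the least-squares inversion step, assuming single-frequency-bin processing and Tikhonov regularization (the regularization and the random matrices are illustrative assumptions; the paper obtains the propagation matrix from beam-traced early reflections):

import numpy as np

def compensation_filters(H, D, lam=1e-3):
    """Regularized least-squares inversion of a propagation matrix.
    H: (n_points, n_speakers) response from each loudspeaker to each
       control point at one frequency bin.
    D: (n_points,) desired response at the control points.
    Returns loudspeaker filter weights for this bin."""
    A = H.conj().T @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, H.conj().T @ D)

rng = np.random.default_rng(1)
H = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
D = np.ones(8, dtype=complex)            # flat desired field (placeholder)
w = compensation_filters(H, D)
print(round(float(np.abs(H @ w - D).max()), 3), "max residual")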
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive stabilization of electro-dynamical transducers.\n \n \n \n \n\n\n \n Klippel, W.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1113-1117, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AdaptivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952382,\n  author = {W. Klippel},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive stabilization of electro-dynamical transducers},\n  year = {2014},\n  pages = {1113-1117},\n  abstract = {A new control technique for electro-dynamical transducer is presented which stabilizes the voice coil position, compensates for nonlinear distortion and generates a desired transfer response by preprocessing the electrical input signal. The control law is derived from transducer modeling using lumped elements and identifies all free parameters of the model by monitoring the electrical signals at the transducer terminals. The control system stays operative for any stimulus including music and other audio signals. The active stabilization is important for small loudspeakers generating the acoustical output at maximum efficiency.},\n  keywords = {audio signal processing;loudspeakers;music;adaptive stabilization;electro-dynamical transducers;control technique;voice coil position;nonlinear distortion;transfer response;electrical input signal;lumped elements;music;audio signals;active stabilization;small loudspeakers;Transducers;Coils;Vectors;Loudspeakers;Suspensions;Detectors;Control systems;loudspeaker;nonlinear adaptive control;stability;bifurcation;DC displacement},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922277.pdf},\n}\n\n
\n
\n\n\n
\n A new control technique for electro-dynamical transducers is presented which stabilizes the voice coil position, compensates for nonlinear distortion and generates a desired transfer response by preprocessing the electrical input signal. The control law is derived from transducer modeling using lumped elements, and all free parameters of the model are identified by monitoring the electrical signals at the transducer terminals. The control system stays operative for any stimulus, including music and other audio signals. The active stabilization is important for small loudspeakers generating their acoustical output at maximum efficiency.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Breaking down the cocktail party: Capturing and isolating sources in a soundscape.\n \n \n \n \n\n\n \n Alexandridis, A.; Griffin, A.; and Mouchtaris, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1118-1122, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"BreakingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952383,\n  author = {A. Alexandridis and A. Griffin and A. Mouchtaris},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Breaking down the cocktail party: Capturing and isolating sources in a soundscape},\n  year = {2014},\n  pages = {1118-1122},\n  abstract = {Spatial scene capture and reproduction requires extracting directional information from captured signals. Our previous work focused on directional coding of a sound scene using a single microphone array. In this paper, we investigate the benefits of using multiple microphone arrays, and extend our previous method by allowing arrays to cooperate during spatial feature extraction. We can thus render the sound scene using both direction and distance information and selectively reproduce specific “spots” of the captured sound scene.},\n  keywords = {microphone arrays;cocktail party;soundscape;isolating sources;capturing sources;spatial scene capture;spatial scene reproduction;directional coding;single microphone array;multiple microphone arrays;spatial feature extraction;Arrays;Microphones;Direction-of-arrival estimation;Speech;Array signal processing;Encoding;Acoustics;Microphone array;beamforming;source separation;spatial audio;sensor network},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925473.pdf},\n}\n\n
\n
\n\n\n
\n Spatial scene capture and reproduction requires extracting directional information from captured signals. Our previous work focused on directional coding of a sound scene using a single microphone array. In this paper, we investigate the benefits of using multiple microphone arrays, and extend our previous method by allowing arrays to cooperate during spatial feature extraction. We can thus render the sound scene using both direction and distance information and selectively reproduce specific “spots” of the captured sound scene.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An allpass hear-through headset.\n \n \n \n \n\n\n \n Rämö, J.; and Välimäki, V.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1123-1127, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952384,\n  author = {J. Rämö and V. Välimäki},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {An allpass hear-through headset},\n  year = {2014},\n  pages = {1123-1127},\n  abstract = {In order to create a natural hear-through experience when wearing the headset, the acoustic attenuation of the headset itself must be cancelled. This is obtained by processing the ambient sound signals captured by external microphones. The sound perceived by the headset user will then be a mixture of the ambient sound that leaks through the headset and the processed ambient sound that is reproduced with the headset. We propose a new equalization method for designing such a hear-through system based on an allpass design principle. The proposed method takes the frequency-dependent isolation transfer function of the headset as the input and completes it with an engineered transfer function so that the outcome will be an allpass transfer function with a flat magnitude response.},\n  keywords = {acoustic signal processing;all-pass filters;audio systems;frequency-dependent isolation transfer function;hear-through system;equalization method;ambient sound signals;acoustic attenuation;allpass hear-through headset;Headphones;Attenuation;Delays;Digital signal processing;Noise;Ear;Finite impulse response filters;Acoustic signal processing;audio systems;digital filters;FIR filters},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925181.pdf},\n}\n\n
\n
\n\n\n
\n In order to create a natural hear-through experience when wearing a headset, the acoustic attenuation of the headset itself must be cancelled. This is achieved by processing the ambient sound signals captured by external microphones. The sound perceived by the headset user is then a mixture of the ambient sound that leaks through the headset and the processed ambient sound reproduced by the headset. We propose a new equalization method for designing such a hear-through system based on an allpass design principle. The proposed method takes the frequency-dependent isolation transfer function of the headset as its input and completes it with an engineered transfer function so that the outcome is an allpass transfer function with a flat magnitude response.\n
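A minimal sketch of the magnitude-completion idea, under the simplifying assumption (ours, not necessarily the paper's) that the leaked and processed paths combine on a power basis, so the engineered magnitude is the power complement of the isolation magnitude:

import numpy as np

def hear_through_eq_magnitude(iso_mag):
    """Magnitude of the 'engineered' path that completes the headset
    isolation response to a flat overall magnitude, assuming the leaked
    and processed sounds add in power (a simplification of the paper's
    allpass design)."""
    return np.sqrt(np.clip(1.0 - iso_mag ** 2, 0.0, None))

iso = np.array([0.9, 0.7, 0.4, 0.2, 0.1, 0.05])   # made-up isolation curve
print(hear_through_eq_magnitude(iso))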
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An advanced spatial sound reproduction system with listener position tracking.\n \n \n \n \n\n\n \n Cecchi, S.; Primavera, A.; Virgulti, M.; Bettarelli, F.; and Piazza, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1128-1132, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952385,\n  author = {S. Cecchi and A. Primavera and M. Virgulti and F. Bettarelli and F. Piazza},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {An advanced spatial sound reproduction system with listener position tracking},\n  year = {2014},\n  pages = {1128-1132},\n  abstract = {The paper deals with the development of a real time system for the reproduction of an immersive audio field considering the listeners' position. The system is composed of two parts: a sound rendering system based on a crosstalk canceller that is required in order to have a spatialized reproduction and a listener position tracking system in order to model the crosstalk canceller parameters. Therefore, starting from the free-field model, a new model is considered introducing a directivity function for the loudspeakers and considering a three-dimensional environment. A real time application is proposed introducing a Kinect control, capable of accurately tracking the listener position and changing the crosstalk parameters. Several results are presented comparing the proposed approach with the state of the art in order to confirm its validity.},\n  keywords = {audio signal processing;crosstalk;loudspeakers;real-time systems;rendering (computer graphics);advanced spatial sound reproduction system;listener position tracking system;immersive audio field reproduction;real time system development;sound rendering system;crosstalk canceller parameters;free-field model;directivity function;loudspeakers;Kinect control;Loudspeakers;Crosstalk;Real-time systems;Ear;Radar tracking;Rendering (computer graphics);Immersive audio system;Crosstalk cancellation;Head tracking},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569918285.pdf},\n}\n\n
\n
\n\n\n
\n The paper deals with the development of a real-time system for the reproduction of an immersive audio field that takes the listener's position into account. The system is composed of two parts: a sound rendering system based on a crosstalk canceller, required for spatialized reproduction, and a listener position tracking system used to update the crosstalk canceller parameters. Starting from the free-field model, a new model is derived that introduces a directivity function for the loudspeakers and considers a three-dimensional environment. A real-time application is proposed, using a Kinect-based control capable of accurately tracking the listener position and updating the crosstalk parameters. Several results comparing the proposed approach with the state of the art are presented to confirm its validity.\n
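A minimal per-frequency-bin crosstalk cancellation sketch under free-field assumptions (the 2x2 matrix values and the regularization are placeholders; the paper's model adds loudspeaker directivity, 3-D geometry and Kinect-driven updates):

import numpy as np

def crosstalk_canceller(H, beta=1e-2):
    """Per-bin crosstalk cancellation matrix. H[i, j] is the path from
    loudspeaker j to ear i at one frequency. Returns a regularized
    inverse C such that H @ C is approximately the identity, so each
    ear receives only its intended binaural channel."""
    return np.linalg.inv(H.conj().T @ H + beta * np.eye(2)) @ H.conj().T

H = np.array([[1.00 + 0.10j, 0.40 - 0.20j],
              [0.35 + 0.15j, 0.90 - 0.05j]])   # made-up 2x2 responses
print(np.round(H @ crosstalk_canceller(H), 2))

Tracking the listener then amounts to recomputing H, and hence C, as the head position changes.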
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Physical layer network coding: An outage analysis in cellular network.\n \n \n \n \n\n\n \n Fukui, H.; Popovski, P.; and Yomo, H.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1133-1137, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"PhysicalPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952386,\n  author = {H. Fukui and P. Popovski and H. Yomo},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Physical layer network coding: An outage analysis in cellular network},\n  year = {2014},\n  pages = {1133-1137},\n  abstract = {Physical layer network coding (PLNC) has been proposed to improve throughput of the two-way relay channel, where two nodes communicate with each other, being assisted by a relay node. Most of the works related to PLNC are focused on a simple three-node model and they do not take into account the impact of interference from other transmissions. Unlike these conventional studies, in this paper, we apply PLNC to a large-scale cellular network in the presence of intercell interference (ICI). In cellular networks, a terminal and a Base Station (BS) have different transmission power, which causes different impact of ICI on downlink (DL) and uplink (UL) phase. We theoretically derive outage probability with a tractable approach based on stochastic geometry which accurately models ICI. Moreover, we compare the performance of PLNC with Direct and conventional Relay scheme. With the obtained numerical results, we discuss how the interference and the difference of transmission power affect outage probability achieved by PLNC.},\n  keywords = {cellular radio;channel coding;network coding;radiofrequency interference;relay networks (telecommunication);physical layer network coding;PLNC;two-way relay channel throughput;three-node model;interference impact;large-scale cellular network;base station;BS;transmission power;uplink phase;UL phase;downlink phase;DL phase;ICI;inintercell interference;stochastic geometry;outage probability analysis;relay scheme;Relays;Interference;Network coding;Signal to noise ratio;Wireless networks;Random variables},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922793.pdf},\n}\n\n
\n
\n\n\n
\n Physical layer network coding (PLNC) has been proposed to improve the throughput of the two-way relay channel, where two nodes communicate with each other assisted by a relay node. Most works related to PLNC focus on a simple three-node model and do not take into account the impact of interference from other transmissions. Unlike these conventional studies, in this paper we apply PLNC to a large-scale cellular network in the presence of intercell interference (ICI). In cellular networks, a terminal and a Base Station (BS) have different transmission powers, which causes a different impact of ICI on the downlink (DL) and uplink (UL) phases. We theoretically derive the outage probability with a tractable approach based on stochastic geometry which accurately models the ICI. Moreover, we compare the performance of PLNC with direct transmission and a conventional relay scheme. With the obtained numerical results, we discuss how the interference and the difference in transmission power affect the outage probability achieved by PLNC.\n
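A crude Monte Carlo stand-in for the kind of stochastic-geometry outage computation described above; the densities, powers and path-loss exponent are illustrative assumptions, and noise is neglected:

import numpy as np

def outage_probability(sinr_th=1.0, lam=1e-5, alpha=4.0, r=50.0,
                       side=2000.0, n_mc=20000, seed=0):
    """Toy outage estimate for one link of length r with Rayleigh fading
    and interferers drawn from a Poisson point process of density lam
    on a square of the given side (equal powers, noise neglected)."""
    rng = np.random.default_rng(seed)
    outages = 0
    for _ in range(n_mc):
        k = rng.poisson(lam * side ** 2)
        if k == 0:
            continue                       # no interferers, no outage here
        xy = rng.uniform(-side / 2, side / 2, size=(k, 2))
        d = np.hypot(xy[:, 0], xy[:, 1])
        interference = np.sum(rng.exponential(size=k) * d ** (-alpha))
        signal = rng.exponential() * r ** (-alpha)
        outages += signal / interference < sinr_th
    return outages / n_mc

print(outage_probability())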
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A full-cooperative diversity beamforming scheme in two-way amplified-and-forward relay systems.\n \n \n \n\n\n \n Zhao, Z.; Ding, Z.; Peng, M.; and Poor, H. V.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1138-1142, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952387,\n  author = {Z. Zhao and Z. Ding and M. Peng and H. V. Poor},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A full-cooperative diversity beamforming scheme in two-way amplified-and-forward relay systems},\n  year = {2014},\n  pages = {1138-1142},\n  abstract = {Consider a simple two-way relaying channel in which two single-antenna sources exchange information via a multiple-antenna relay. For such a scenario, all the existing approaches that can achieve full cooperative diversity order are based on antenna/relay selection, for which the difficulty in designing the beamforming lies in the fact that a single beamformer needs to serve two destinations. In this paper, a new full-cooperative diversity beamforming scheme that ensures that the relay signals are coherently combined at both destinations is proposed. Both analytical and numerical results are provided to demonstrate the performance gains.},\n  keywords = {amplify and forward communication;cooperative communication;diversity reception;relay networks (telecommunication);full-cooperative diversity beamforming scheme;two-way amplified-and-forward relay systems;relay selection;antenna selection;relay signals;performance gains;single-antenna sources;multiple-antenna relay;Educational institutions;Abstracts;Antennas;Protocols;Indexes;Array signal processing;Integrated circuits;Two-way relay systems;beamforming;network coding;amplify-and-forward;cooperative diversity},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Consider a simple two-way relaying channel in which two single-antenna sources exchange information via a multiple-antenna relay. For such a scenario, all the existing approaches that can achieve full cooperative diversity order are based on antenna/relay selection, for which the difficulty in designing the beamforming lies in the fact that a single beamformer needs to serve two destinations. In this paper, a new full-cooperative diversity beamforming scheme that ensures that the relay signals are coherently combined at both destinations is proposed. Both analytical and numerical results are provided to demonstrate the performance gains.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive broadcast transmission in distributed two-way relaying networks.\n \n \n \n \n\n\n \n Wübben, D.; Wu, M.; and Dekorsy, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1143-1147, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AdaptivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952388,\n  author = {D. Wübben and M. Wu and A. Dekorsy},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive broadcast transmission in distributed two-way relaying networks},\n  year = {2014},\n  pages = {1143-1147},\n  abstract = {In this paper we consider adaptive, distributed two-way relaying networks using physical-layer network coding (PLNC). In the multiple-access (MA) phase, two sources transmit simultaneously to multiple relays. Depending on the decoding success at the relays, adaptive transmission schemes are investigated to avoid error propagation in the broadcast (BC) phase employing distributed orthogonal space-time block codes (D-OSTBCs). Recently, adaptive schemes have been proposed, where only relays with correct estimates of the network coded message participate in the BC transmission. In this work, we extend the analysis by incorporating also the case, that some relays are able to detect only one source message and propose a corresponding modified adaptive transmission scheme. For performance evaluations we resort to a semi-analytical method in order to examine the outage behavior of the presented schemes. As demonstrated by link-level simulations, the proposed adaptive scheme outperforms the traditional scheme significantly, especially for asymmetric network topology.},\n  keywords = {adaptive codes;decoding;multi-access systems;network coding;orthogonal codes;relay networks (telecommunication);space-time block codes;adaptive broadcast transmission scheme;adaptive distributed two-way relaying networks;physical-layer network coding;PLNC;multiple-access phase;MA;multiple relays;decoding;error propagation avoidance;broadcast phase;BC phase;distributed orthogonal space-time block codes;D-OSTBCs;source message;performance evaluations;semianalytical method;link-level simulations;asymmetric network topology;Relays;Decoding;Adaptive systems;Signal to noise ratio;Adaptation models;Network topology;Network coding},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922325.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we consider adaptive, distributed two-way relaying networks using physical-layer network coding (PLNC). In the multiple-access (MA) phase, two sources transmit simultaneously to multiple relays. Depending on the decoding success at the relays, adaptive transmission schemes are investigated to avoid error propagation in the broadcast (BC) phase employing distributed orthogonal space-time block codes (D-OSTBCs). Recently, adaptive schemes have been proposed in which only relays with correct estimates of the network-coded message participate in the BC transmission. In this work, we extend the analysis by also incorporating the case that some relays are able to detect only one source message, and propose a correspondingly modified adaptive transmission scheme. For performance evaluation we resort to a semi-analytical method to examine the outage behavior of the presented schemes. As demonstrated by link-level simulations, the proposed adaptive scheme significantly outperforms the traditional scheme, especially for asymmetric network topologies.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Lattice network coding over euclidean domains.\n \n \n \n \n\n\n \n Vázquez-Castro, M. A.; and Oggier, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1148-1152, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"LatticePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952389,\n  author = {M. A. Vázquez-Castro and F. Oggier},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Lattice network coding over euclidean domains},\n  year = {2014},\n  pages = {1148-1152},\n  abstract = {We propose a novel approach to design and analyse lattice-based network coding. The underlying alphabets are carved from (quadratic imaginary) Euclidean domains with a known Euclidean division algorithm, due to their inherent algorithmical ability to capture analog network coding computations. These alphabets are used to embed linear p-ary codes of length n, p a prime, into n-dimensional Euclidean ambient spaces, via a variation of the so-called Construction A of lattices from linear codes. A study case over one such Euclidean domain is presented and the nominal coding gain of lattices obtained from p-ary Hamming codes is computed for any prime p such that p ≡ 1 (mod 4).},\n  keywords = {geometry;Hamming codes;lattice theory;linear codes;network coding;p-ary Hamming codes;n-dimensional Euclidean ambient spaces;linear p-ary codes;analog network coding computations;Euclidean division algorithm;quadratic imaginary;Euclidean domains;lattice-based network coding;Lattices;Network coding;Physical layer;Linear codes;Constellation diagram;Vectors;Euclidean Domains;Network Coding;Lattices},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924275.pdf},\n}\n\n
\n
\n\n\n
\n We propose a novel approach to the design and analysis of lattice-based network coding. The underlying alphabets are carved from (quadratic imaginary) Euclidean domains with a known Euclidean division algorithm, due to their inherent algorithmic ability to capture analog network coding computations. These alphabets are used to embed linear p-ary codes of length n, p a prime, into n-dimensional Euclidean ambient spaces, via a variation of the so-called Construction A of lattices from linear codes. A case study over one such Euclidean domain is presented, and the nominal coding gain of lattices obtained from p-ary Hamming codes is computed for any prime p such that p ≡ 1 (mod 4).\n
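A minimal sketch of Construction A, which lifts a linear code C over GF(p) to the lattice Λ = C + pZⁿ; the small generator matrix below is an arbitrary example, not the p-ary Hamming code analysed in the paper:

import numpy as np
from itertools import product

def construction_A(G, p):
    """Construction A: the lattice is the union of cosets c + p*Z^n over
    all codewords c of the code generated by G over GF(p). Returns the
    codewords, i.e. one coset representative each."""
    k, _ = G.shape
    msgs = np.array(list(product(range(p), repeat=k)))
    return (msgs @ G) % p

G = np.array([[1, 0, 1],
              [0, 1, 4]])                 # toy generator over GF(5)
codewords = construction_A(G, 5)          # note 5 = 1 (mod 4)
print(len(codewords), "cosets of 5*Z^3 make up the lattice")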
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Linear physical layer network coding for multihop wireless networks.\n \n \n \n \n\n\n \n Burr, A.; and Fang, D.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1153-1157, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"LinearPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952390,\n  author = {A. Burr and D. Fang},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Linear physical layer network coding for multihop wireless networks},\n  year = {2014},\n  pages = {1153-1157},\n  abstract = {We consider linear network coding functions that can be employed at the relays in wireless physical layer network coding, applied to a general multi-hop network topology. We introduce a general model of such a network, and discuss the algebraic basis of linear functions, deriving conditions for unambiguous decodability of the source data at the destination. We consider the use of integer rings, integer fields, binary extension fields and the ring of binary matrices as potential algebraic constructs, and show that the ring constructs provide more flexibility. We use the two-way relay channel and a network containing two sources and two relays to illustrate the concept and to demonstrate the effect of fading of the wireless channels. We show the capacity benefits of the more flexible rings.},\n  keywords = {fading channels;linear algebra;network coding;relay networks (telecommunication);telecommunication network topology;algebraic frameworks;wireless channel fading;two-way relay channel;binary matrices;binary extension fields;integer fields;integer rings;source data decodability;linear functions;multihop network topology;wireless physical layer network coding;multihop wireless networks;linear physical layer network coding;Relays;Network coding;Network topology;Wireless communication;Physical layer;Fading;Vectors;Physical layer network coding (PNC);linear algebra;rings},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924763.pdf},\n}\n\n
\n
\n\n\n
\n We consider linear network coding functions that can be employed at the relays in wireless physical layer network coding, applied to a general multi-hop network topology. We introduce a general model of such a network, and discuss the algebraic basis of linear functions, deriving conditions for unambiguous decodability of the source data at the destination. We consider the use of integer rings, integer fields, binary extension fields and the ring of binary matrices as potential algebraic constructs, and show that the ring constructs provide more flexibility. We use the two-way relay channel and a network containing two sources and two relays to illustrate the concept and to demonstrate the effect of fading of the wireless channels. We show the capacity benefits of the more flexible rings.\n
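A toy check of the decodability condition discussed above for the ring Z_q: the destination can recover both sources if and only if the matrix of network-coding coefficients collected from the relays is invertible mod q, i.e. its determinant is a unit mod q (the coefficients below are placeholders):

import numpy as np

def decodable(A, q):
    """Unambiguous decodability of linear PLNC over Z_q: the coefficient
    matrix is invertible mod q iff gcd(det(A), q) == 1."""
    det = int(round(np.linalg.det(A))) % q
    return np.gcd(det, q) == 1

# Two relays each forward u_i = a_i*x1 + b_i*x2 (mod 4)
A = np.array([[1, 2],
              [1, 3]])
print(decodable(A, 4))   # True: det = 1 (mod 4), both sources recoverable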
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Comparing initialisation methods for the Heuristic Memetic Clustering Algorithm.\n \n \n \n \n\n\n \n Craenen, B. G. W.; Ristaniemi, T.; and Nandi, A. K.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1158-1162, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ComparingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952391,\n  author = {B. G. W. Craenen and T. Ristaniemi and A. K. Nandi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Comparing initialisation methods for the Heuristic Memetic Clustering Algorithm},\n  year = {2014},\n  pages = {1158-1162},\n  abstract = {In this study we investigate the effect five initialisation methods from literature have on the performance of the Heuristic Memetic Clustering Algorithm (HMCA). The evaluation is based on an extensive experimental comparison on three benchmark datasets between HMCA and the commonly-used k-Medoids algorithm. Analysis of the experimental effectiveness and efficiency metrics confirms that the HMCA substantially outperforms k-Medoids, with the HMCA capable of finding bestter clusterings using substantially less computation effort. The Sample and Cluster initialisation methods were found to be the most suitable for the HMCA, with the results of the k-Medoids suggesting this to be the case for other algorithms as well.},\n  keywords = {learning (artificial intelligence);pattern clustering;heuristic memetic clustering algorithm;HMCA;three benchmark datasets;k-medoids algorithm;cluster initialisation methods;sample initialisation methods;machine learning;Clustering algorithms;Iris;Glass;Sociology;Statistics;Algorithm design and analysis;Heuristic algorithms;Machine Learning;Memetic Algorithms;Clustering;Heuristics},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569910321.pdf},\n}\n\n
\n
\n\n\n
\n In this study we investigate the effect that five initialisation methods from the literature have on the performance of the Heuristic Memetic Clustering Algorithm (HMCA). The evaluation is based on an extensive experimental comparison between the HMCA and the commonly used k-Medoids algorithm on three benchmark datasets. Analysis of the experimental effectiveness and efficiency metrics confirms that the HMCA substantially outperforms k-Medoids, with the HMCA capable of finding better clusterings using substantially less computational effort. The Sample and Cluster initialisation methods were found to be the most suitable for the HMCA, with the k-Medoids results suggesting this to be the case for other algorithms as well.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparse matrix decompositions for clustering.\n \n \n \n \n\n\n \n Blumensath, T.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1163-1167, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"SparsePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952392,\n  author = {T. Blumensath},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse matrix decompositions for clustering},\n  year = {2014},\n  pages = {1163-1167},\n  abstract = {Clustering can be understood as a matrix decomposition problem, where a feature vector matrix is represented as a product of two matrices, a matrix of cluster centres and a matrix with sparse columns, where each column assigns individual features to one of the cluster centres. This matrix factorisation is the basis of classical clustering methods, such as those based on non-negative matrix factorisation but can also be derived for other methods, such as k-means clustering. In this paper we derive a new clustering method that combines some aspects of both, non-negative matrix factorisation and k-means clustering. We demonstrate empirically that the new approach outperforms other methods on a host of examples.},\n  keywords = {matrix decomposition;pattern clustering;sparse matrices;sparse matrix decompositions;matrix decomposition problem;clustering problem;feature vector matrix;matrix factorisation;nonnegative matrix factorisation;k-means clustering;Clustering algorithms;Vectors;Sparse matrices;Matrix decomposition;Noise;Standards;Convergence;Clustering;Low-Rank Matrix Approximation;Sparsity;Brain Imaging},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921631.pdf},\n}\n\n
\n
\n\n\n
\n Clustering can be understood as a matrix decomposition problem, where a feature vector matrix is represented as a product of two matrices: a matrix of cluster centres and a matrix with sparse columns, where each column assigns individual features to one of the cluster centres. This matrix factorisation is the basis of classical clustering methods, such as those based on non-negative matrix factorisation, but can also be derived for other methods, such as k-means clustering. In this paper we derive a new clustering method that combines aspects of both non-negative matrix factorisation and k-means clustering. We demonstrate empirically that the new approach outperforms other methods on a host of examples.\n
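A minimal sketch of the decomposition view only: k-means written as X ≈ C A with one-hot (sparse) assignment columns in A, alternating the two factor updates. This illustrates the factorisation framing, not the paper's combined method:

import numpy as np

def kmeans_as_factorization(X, k, iters=50, seed=0):
    """Alternate a sparse-coding step (one-hot assignment matrix A) and
    a dictionary step (cluster centre matrix C) to minimize
    ||X - C @ A||_F^2, which is exactly the k-means objective."""
    rng = np.random.default_rng(seed)
    _, n = X.shape
    C = X[:, rng.choice(n, k, replace=False)]          # initial centres
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[:, :, None]) ** 2).sum(0), axis=0)
        A = np.eye(k)[labels].T                        # sparse one-hot columns
        counts = np.maximum(A.sum(1), 1)
        C = (X @ A.T) / counts                         # centres = cluster means
    return C, A

X = np.random.default_rng(2).standard_normal((2, 200))
C, A = kmeans_as_factorization(X, 3)
print(C.shape, A.shape, round(float(np.linalg.norm(X - C @ A)), 2))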
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Boosting the weights of positive words in image retrieval.\n \n \n \n \n\n\n \n Giouvanakis, E.; and Kotropoulos, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1168-1172, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"BoostingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952413,\n  author = {E. Giouvanakis and C. Kotropoulos},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Boosting the weights of positive words in image retrieval},\n  year = {2014},\n  pages = {1168-1172},\n  abstract = {In this paper, an image retrieval system based on the bag-of-words model is developed, which contains a novel query expansion technique. SIFT image features are computed using the Hessian-Affine keypoint detector. All feature descriptors are taken into account for the bag-of-words representation by dividing the full set of descriptors into a number of subsets. For each subset, a partial vocabulary is created and the final vocabulary is obtained by the union of the partial vocabularies. Here, a new discriminative query expansion technique is proposed in which an SVM classifier is trained in order to obtain a decision boundary between the top ranked and the bottom ranked images. Treating this boundary as a new query, words appearing exclusively in top-ranked images are further boosted by rewarding them with larger weights. The images are re-ranked with respect to the their distance from the new boosted query. It is proved that this strategy improves image retrieval performance.},\n  keywords = {image classification;image representation;image retrieval;support vector machines;transforms;positive words;image retrieval system;novel discriminative query expansion technique;SIFT image features;Hessian-Affine keypoint detector;bag-of-words representation model;partial vocabulary;SVM classifier;top-ranked images;Visualization;Vocabulary;Vectors;Computer vision;Image retrieval;Feature extraction;Support vector machines;image retrieval;bag-of-words;query-expansion},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923347.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, an image retrieval system based on the bag-of-words model is developed, which incorporates a novel query expansion technique. SIFT image features are computed using the Hessian-Affine keypoint detector. All feature descriptors are taken into account for the bag-of-words representation by dividing the full set of descriptors into a number of subsets. For each subset, a partial vocabulary is created, and the final vocabulary is obtained as the union of the partial vocabularies. Here, a new discriminative query expansion technique is proposed in which an SVM classifier is trained in order to obtain a decision boundary between the top-ranked and the bottom-ranked images. Treating this boundary as a new query, words appearing exclusively in top-ranked images are further boosted by rewarding them with larger weights. The images are re-ranked with respect to their distance from the new boosted query. It is shown that this strategy improves image retrieval performance.\n
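A minimal sketch of the discriminative query expansion step, assuming bag-of-words vectors in a NumPy array; the SVM hyperparameter and boost factor are arbitrary placeholders, not the paper's weighting scheme:

import numpy as np
from sklearn.svm import LinearSVC

def boosted_query(bow, top_idx, bottom_idx, boost=2.0):
    """Fit a linear SVM separating top-ranked from bottom-ranked BoW
    vectors, take its weight vector as the new query, and further boost
    the positively weighted words (those favouring top-ranked images)."""
    X = np.vstack([bow[top_idx], bow[bottom_idx]])
    y = np.r_[np.ones(len(top_idx)), -np.ones(len(bottom_idx))]
    w = LinearSVC(C=1.0).fit(X, y).coef_.ravel()
    w[w > 0] *= boost                       # reward positive words
    return w

# Re-ranking would then score each image against the boosted query, e.g.
# scores = bow @ boosted_query(bow, top_idx, bottom_idx)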
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Gait feature selection in walker-assisted gait using NSGA-II and SVM hybrid algorithm.\n \n \n \n \n\n\n \n Martins, M.; Santos, C.; Costa, L.; and Frizera, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1173-1177, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"GaitPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952414,\n  author = {M. Martins and C. Santos and L. Costa and A. Frizera},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Gait feature selection in walker-assisted gait using NSGA-II and SVM hybrid algorithm},\n  year = {2014},\n  pages = {1173-1177},\n  abstract = {Nowadays, walkers are prescribed based on subjective standards that lead to incorrect indication of such devices to patients. This leads to the increase of dissatisfaction and occurrence of discomfort and fall events. Therefore, it is necessary to objectively evaluate the effects that walker can have on the gait patterns of its users, comparatively to non-assisted gait. A gait analysis, focusing on spatiotemporal and kinematics parameters, will be issued for this purpose. However, gait analysis yields redundant information and this study addresses this problem by selecting the most relevant gait features required to differentiate between assisted and non-assisted gait. In order to do this, it is proposed an approach that combines multi-objective genetic and support vector machine algorithms to discriminate differences. Results with healthy subjects have shown that the main differences are characterized by balance and joints excursion. Thus, one can conclude that this technique is an efficient feature selection approach.},\n  keywords = {gait analysis;genetic algorithms;medical computing;patient rehabilitation;support vector machines;gait feature selection;walker-assisted gait;NSGA-II;SVM hybrid algorithm;subjective standards;discomfort occurrence;fall events;gait patterns;gait analysis;spatiotemporal parameters;kinematics parameters;redundant information;multiobjective genetic algorithms;support vector machine algorithms;balance excursion;joints excursion;feature selection approach;Noise measurement;Sociology;Support vector machines;Hip;Correlation;Evolutionary algorithms;Walker-assisted gait;SVM;NSGA-II;Rehabilitation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924111.pdf},\n}\n\n
\n
\n\n\n
\n Nowadays, walkers are prescribed based on subjective standards that lead to the incorrect indication of such devices to patients. This leads to increased dissatisfaction and to the occurrence of discomfort and fall events. Therefore, it is necessary to objectively evaluate the effects that a walker can have on the gait patterns of its users, compared to non-assisted gait. A gait analysis focusing on spatiotemporal and kinematic parameters is carried out for this purpose. However, gait analysis yields redundant information, and this study addresses this problem by selecting the most relevant gait features required to differentiate between assisted and non-assisted gait. To do this, an approach is proposed that combines multi-objective genetic and support vector machine algorithms to discriminate differences. Results with healthy subjects have shown that the main differences are characterized by balance and joint excursion. Thus, one can conclude that this technique is an efficient feature selection approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multiclass Ridge-adjusted Slack Variable Optimization using selected basis for fast classification.\n \n \n \n \n\n\n \n Yu, Y.; Diamantaras, K. I.; McKelvey, T.; and Kung, S. Y.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1178-1182, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"MulticlassPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952415,\n  author = {Y. Yu and K. I. Diamantaras and T. McKelvey and S. Y. Kung},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Multiclass Ridge-adjusted Slack Variable Optimization using selected basis for fast classification},\n  year = {2014},\n  pages = {1178-1182},\n  abstract = {Kernel techniques for classification is especially challenging in terms of computation and memory requirement when data fall into more than two categories. In this paper, we extend a binary classification technique called Ridge-adjusted Slack Variable Optimization (RiSVO) to its multiclass counterpart where the label information encoding scheme allows the computational complexity to remain the same to the binary case. The main features of this technique are summarized as follows: (1) Only a subset of data are pre-selected to construct the basis for kernel computation; (2) Simultaneous active training set selection for all classes helps reduce complexity meanwhile improving robustness; (3) With the proposed active set selection criteria, inclusion property is verified empirically. Inclusion property means that once a pattern is excluded, it will no longer return to the active training set and therefore can be permanently removed from the training procedure. This property greatly reduce the complexity. The proposed techniques are evaluated on standard multiclass datasets MNIST, USPS, pendigits and letter which could be easily compared with existing results.},\n  keywords = {computational complexity;optimisation;signal classification;inclusion property;active training set selection;computational complexity;label information encoding scheme;binary classification technique;fast classification;multiclass RiSVO;ridge-adjusted slack variable optimization;Training;Kernel;Training data;Support vector machines;Vectors;Equations;Optimization;RiSVO;kernel;multiclass classification;large scale data;RKHS basis construction},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924411.pdf},\n}\n\n
\n
\n\n\n
\n Kernel techniques for classification are especially challenging in terms of computation and memory requirements when data fall into more than two categories. In this paper, we extend a binary classification technique called Ridge-adjusted Slack Variable Optimization (RiSVO) to its multiclass counterpart, where the label information encoding scheme allows the computational complexity to remain the same as in the binary case. The main features of this technique are summarized as follows: (1) only a subset of the data is pre-selected to construct the basis for kernel computation; (2) simultaneous active training set selection for all classes helps reduce complexity while improving robustness; (3) with the proposed active set selection criteria, the inclusion property is verified empirically. The inclusion property means that once a pattern is excluded, it will no longer return to the active training set and can therefore be permanently removed from the training procedure. This property greatly reduces the complexity. The proposed techniques are evaluated on the standard multiclass datasets MNIST, USPS, pendigits and letter, which allows easy comparison with existing results.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian classification and active learning using lp-priors. Application to image segmentation.\n \n \n \n \n\n\n \n Ruiz, P.; de la Blanca , N. P.; Molina, R.; and Katsaggelos, A. K.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1183-1187, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952416,\n  author = {P. Ruiz and N. P. {de la Blanca} and R. Molina and A. K. Katsaggelos},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian classification and active learning using lp-priors. Application to image segmentation},\n  year = {2014},\n  pages = {1183-1187},\n  abstract = {In this paper we utilize Bayesian modeling and inference to learn a softmax classification model which performs Supervised Classification and Active Learning. For p <; 1, lp-priors are used to impose sparsity on the adaptive parameters. Using variational inference, all model parameters are estimated and the posterior probabilities of the classes given the samples are calculated. A relationship between the prior model used and the independent Gaussian prior model is provided. The posterior probabilities are used to classify new samples and to define two Active Learning methods to improve classifier performance: Minimum Probability and Maximum Entropy. In the experimental section the proposed Bayesian framework is applied to Image Segmentation problems on both synthetic and real datasets, showing higher accuracy than state-of-the-art approaches.},\n  keywords = {belief networks;image classification;image segmentation;learning (artificial intelligence);maximum entropy methods;probability;Bayesian classification;active learning method;image segmentation problem;softmax classification model;supervised classification;variational inference;posterior probabilities;independent Gaussian prior model;minimum probability;maximum entropy;Bayes methods;Training;Support vector machines;Adaptation models;Image segmentation;Vectors;Entropy},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924613.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we utilize Bayesian modeling and inference to learn a softmax classification model which performs Supervised Classification and Active Learning. For p < 1, lp-priors are used to impose sparsity on the adaptive parameters. Using variational inference, all model parameters are estimated and the posterior probabilities of the classes given the samples are calculated. A relationship between the prior model used and the independent Gaussian prior model is provided. The posterior probabilities are used to classify new samples and to define two Active Learning methods to improve classifier performance: Minimum Probability and Maximum Entropy. In the experimental section the proposed Bayesian framework is applied to Image Segmentation problems on both synthetic and real datasets, showing higher accuracy than state-of-the-art approaches.\n
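The two acquisition rules named above are easy to sketch; here is the Maximum Entropy rule applied to a matrix of predicted class posteriors (a generic active-learning building block, not the paper's variational model):

import numpy as np

def max_entropy_query(probs):
    """Pick the unlabeled sample whose predicted class posterior has
    maximum entropy; the Minimum Probability rule would instead pick
    the sample whose largest posterior is smallest."""
    H = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return int(np.argmax(H))

probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]])
print(max_entropy_query(probs))   # -> 1, the most uncertain sample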
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Piecewise nonlinear regression via decision adaptive trees.\n \n \n \n \n\n\n \n Vanli, N. D.; Sayin, M. O.; Ergüt, S.; and Kozat, S. S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1188-1192, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"PiecewisePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952417,\n  author = {N. D. Vanli and M. O. Sayin and S. Ergüt and S. S. Kozat},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Piecewise nonlinear regression via decision adaptive trees},\n  year = {2014},\n  pages = {1188-1192},\n  abstract = {We investigate the problem of adaptive nonlinear regression and introduce tree based piecewise linear regression algorithms that are highly efficient and provide significantly improved performance with guaranteed upper bounds in an individual sequence manner. We partition the regressor space using hyperplanes in a nested structure according to the notion of a tree. In this manner, we introduce an adaptive nonlinear regression algorithm that not only adapts the regressor of each partition but also learns the complete tree structure with a computational complexity only polynomial in the number of nodes of the tree. Our algorithm is constructed to directly minimize the final regression error without introducing any ad-hoc parameters. Moreover, our method can be readily incorporated with any tree construction method as demonstrated in the paper.},\n  keywords = {computational complexity;decision trees;piecewise linear techniques;regression analysis;piecewise nonlinear regression;decision adaptive trees;adaptive nonlinear regression;tree based piecewise linear regression algorithms;regressor space;hyperplanes;tree structure;computational complexity;ad-hoc parameters;tree construction method;regression error;Abstracts;Linear regression;Filtering algorithms;Three-dimensional displays;Radio access networks;Nonlinear regression;nonlinear adaptive filtering;adaptive;sequential;binary tree},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924773.pdf},\n}\n\n
\n
\n\n\n
\n We investigate the problem of adaptive nonlinear regression and introduce tree-based piecewise linear regression algorithms that are highly efficient and provide significantly improved performance with guaranteed upper bounds in an individual sequence manner. We partition the regressor space using hyperplanes in a nested structure according to the notion of a tree. In this manner, we introduce an adaptive nonlinear regression algorithm that not only adapts the regressor of each partition but also learns the complete tree structure with a computational complexity only polynomial in the number of nodes of the tree. Our algorithm is constructed to directly minimize the final regression error without introducing any ad-hoc parameters. Moreover, our method can be readily combined with any tree construction method, as demonstrated in the paper.\n
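A minimal sketch of the model class only: piecewise linear regression on a fixed median-split binary-tree partition of a scalar regressor space. The paper's contribution, adapting the partition boundaries and combining all subtrees sequentially, is not reproduced here:

import numpy as np

def fit_piecewise_linear(x, y, depth=2):
    """Split the regressor space at medians down to the given depth and
    fit an affine model per leaf; predict by routing each point to its
    leaf. A static illustration of tree-based piecewise regression."""
    def fit(xs, ys, d):
        if d == 0 or len(xs) < 4:
            A = np.c_[xs, np.ones_like(xs)]
            return ('leaf', np.linalg.lstsq(A, ys, rcond=None)[0])
        m = np.median(xs)
        lo, hi = xs <= m, xs > m
        return ('split', m, fit(xs[lo], ys[lo], d - 1), fit(xs[hi], ys[hi], d - 1))

    def predict(node, x0):
        if node[0] == 'leaf':
            w = node[1]
            return w[0] * x0 + w[1]
        _, m, lt, rt = node
        return predict(lt, x0) if x0 <= m else predict(rt, x0)

    tree = fit(x, y, depth)
    return np.array([predict(tree, xi) for xi in x])

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(-1, 1, 200))
y = np.sin(3 * x) + 0.1 * rng.standard_normal(200)
print(round(float(np.mean((fit_piecewise_linear(x, y) - y) ** 2)), 4))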
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Comprehensive lower bounds on sequential prediction.\n \n \n \n \n\n\n \n Vanli, N. D.; Sayin, M. O.; Ergüt, S.; and Kozat, S. S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1193-1196, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ComprehensivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952418,\n  author = {N. D. Vanli and M. O. Sayin and S. Ergüt and S. S. Kozat},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Comprehensive lower bounds on sequential prediction},\n  year = {2014},\n  pages = {1193-1196},\n  abstract = {We study the problem of sequential prediction of real-valued sequences under the squared error loss function. While refraining from any statistical and structural assumptions on the underlying sequence, we introduce a competitive approach to this problem and compare the performance of a sequential algorithm with respect to the large and continuous class of parametric predictors. We define the performance difference between a sequential algorithm and the best parametric predictor as “regret”, and introduce a guaranteed worst-case lower bounds to this relative performance measure. In particular, we prove that for any sequential algorithm, there always exists a sequence for which this regret is lower bounded by zero. We then extend this result by showing that the prediction problem can be transformed into a parameter estimation problem if the class of parametric predictors satisfy a certain property, and provide a comprehensive lower bound to this case.},\n  keywords = {functional analysis;prediction theory;sequences;comprehensive lower bound;sequential prediction;real valued sequence;squared error loss function;sequential algorithm;parametric prediction;guaranteed worst case lower bound;relative performance measure;Abstracts;Vectors;Erbium;Sequential prediction;lower bound;worst-case performance},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924775.pdf},\n}\n\n
\n
\n\n\n
\n We study the problem of sequential prediction of real-valued sequences under the squared error loss function. While refraining from any statistical or structural assumptions on the underlying sequence, we introduce a competitive approach to this problem and compare the performance of a sequential algorithm with respect to the large and continuous class of parametric predictors. We define the performance difference between a sequential algorithm and the best parametric predictor as “regret”, and introduce guaranteed worst-case lower bounds on this relative performance measure. In particular, we prove that for any sequential algorithm, there always exists a sequence for which this regret is lower bounded by zero. We then extend this result by showing that the prediction problem can be transformed into a parameter estimation problem if the class of parametric predictors satisfies a certain property, and provide a comprehensive lower bound for this case.\n
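Concretely, taking a linear parametric class for illustration, the regret of a sequential predictor \(\hat d_t\) over \(n\) rounds is

\[
R_n \;=\; \sum_{t=1}^{n}\big(d_t-\hat d_t\big)^2 \;-\; \min_{\theta}\;\sum_{t=1}^{n}\big(d_t-\theta^{T}x_t\big)^2 ,
\]

where the minimum is taken in hindsight over the fixed parametric predictors; the first result above states that for every sequential algorithm some sequence makes \(R_n \ge 0\).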
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the segmentation of switching autoregressive processes by nonparametric Bayesian methods.\n \n \n \n \n\n\n \n Dash, S.; and Djurić, P. M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1197-1201, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952419,\n  author = {S. Dash and P. M. Djurić},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On the segmentation of switching autoregressive processes by nonparametric Bayesian methods},\n  year = {2014},\n  pages = {1197-1201},\n  abstract = {We demonstrate the use of a variant of the nonparametric Bayesian (NPB) forward-backward (FB) method for sampling state sequences of hidden Markov models (HMMs), when the continuous-valued observations follow autoregressive (AR) processes. The goal is to get an accurate representation of the posterior probability of the state-sequence configuration. The advantage of using NPB samplers towards this end is well-known; one need not specify (or heuristically estimate) the number of states present in the model. Instead one uses hierarchical Dirichlet processes (HDPs) as priors for the state-transition probabilities to account for a potentially infinite number of states. The FB algorithm is known to increase the mixing rate of such samplers (compared to direct Gibbs), but can still yield significant spread in segmentation error. We show that by approximately integrating out some parameters of the model, one can alleviate this problem considerably.},\n  keywords = {autoregressive processes;Bayes methods;hidden Markov models;nonparametric statistics;probability;sampling methods;switching autoregressive process segmentation;segmentation error;state-transition probability;HDPs;hierarchical Dirichlet processes;NPB samplers;state-sequence configuration;posterior probability representation;AR;continuous-valued observations;HMMs;hidden Markov models;state sequence sampling;FB method;forward-backward method;nonparametric Bayesian methods;Hidden Markov models;Markov processes;Switches;Time series analysis;Monte Carlo methods;Bayes methods;Probability density function;hidden Markov model;autoregressive process;segmentation;hierarchical Dirichlet process;Gibbs sampling;non-parametric Bayesian},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925291.pdf},\n}\n\n
\n
\n\n\n
\n We demonstrate the use of a variant of the nonparametric Bayesian (NPB) forward-backward (FB) method for sampling state sequences of hidden Markov models (HMMs), when the continuous-valued observations follow autoregressive (AR) processes. The goal is to get an accurate representation of the posterior probability of the state-sequence configuration. The advantage of using NPB samplers towards this end is well-known; one need not specify (or heuristically estimate) the number of states present in the model. Instead one uses hierarchical Dirichlet processes (HDPs) as priors for the state-transition probabilities to account for a potentially infinite number of states. The FB algorithm is known to increase the mixing rate of such samplers (compared to direct Gibbs), but can still yield significant spread in segmentation error. We show that by approximately integrating out some parameters of the model, one can alleviate this problem considerably.\n
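\n To make the forward-backward step concrete, here is a minimal forward-filtering backward-sampling (FFBS) sketch for a fixed number of states K; the paper replaces the fixed K with an HDP prior and uses AR observation likelihoods, so this is only the building block, not their sampler.\n

    import numpy as np

    def ffbs(loglik, P, pi, rng):
        """Sample one HMM state sequence given per-state observation
        log-likelihoods (T x K), transition matrix P and initial pi."""
        T, K = loglik.shape
        alpha = np.zeros((T, K))
        a = pi * np.exp(loglik[0])
        alpha[0] = a / a.sum()
        for t in range(1, T):                        # forward filtering
            a = (alpha[t - 1] @ P) * np.exp(loglik[t])
            alpha[t] = a / a.sum()
        s = np.zeros(T, dtype=int)                   # backward sampling
        s[-1] = rng.choice(K, p=alpha[-1])
        for t in range(T - 2, -1, -1):
            w = alpha[t] * P[:, s[t + 1]]
            s[t] = rng.choice(K, p=w / w.sum())
        return s

    rng = np.random.default_rng(0)
    P = np.array([[0.95, 0.05], [0.10, 0.90]])
    loglik = rng.standard_normal((100, 2))           # stand-in for AR likelihoods
    print(ffbs(loglik, P, np.array([0.5, 0.5]), rng)[:20])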
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Joint low-rank representation and matrix completion under a singular value thresholding framework.\n \n \n \n \n\n\n \n Tzagkarakis, C.; Becker, S.; and Mouchtaris, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1202-1206, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952420,\n  author = {C. Tzagkarakis and S. Becker and A. Mouchtaris},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Joint low-rank representation and matrix completion under a singular value thresholding framework},\n  year = {2014},\n  pages = {1202-1206},\n  abstract = {Matrix completion is the process of estimating missing entries from a matrix using some prior knowledge. Typically, the prior knowledge is that the matrix is low-rank. In this paper, we present an extension of standard matrix completion that leverages prior knowledge that the matrix is low-rank and that the data samples can be efficiently represented by a fixed known dictionary. Specifically, we compute a low-rank representation of a data matrix with respect to a given dictionary using only a few observed entries. A novel modified version of the singular value thresholding (SVT) algorithm named joint low-rank representation and matrix completion SVT (J-SVT) is proposed. Experiments on simulated data show that the proposed J-SVT algorithm provides better reconstruction results compared to standard matrix completion.},\n  keywords = {signal representation;singular value decomposition;joint low-rank representation;matrix completion;singular value thresholding framework;missing entry estimation;data samples;fixed known dictionary representation;data matrix;joint low-rank representation and matrix completion SVT algorithm;J-SVT algorithm;Dictionaries;Joints;Matrix decomposition;Robustness;Artificial intelligence;Sparse matrices;Signal to noise ratio;low-rank representation;matrix completion;singular value thresholding;dictionary representation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569911147.pdf},\n}\n\n
\n
\n\n\n
\n Matrix completion is the process of estimating missing entries from a matrix using some prior knowledge. Typically, the prior knowledge is that the matrix is low-rank. In this paper, we present an extension of standard matrix completion that leverages prior knowledge that the matrix is low-rank and that the data samples can be efficiently represented by a fixed known dictionary. Specifically, we compute a low-rank representation of a data matrix with respect to a given dictionary using only a few observed entries. A novel modified version of the singular value thresholding (SVT) algorithm named joint low-rank representation and matrix completion SVT (J-SVT) is proposed. Experiments on simulated data show that the proposed J-SVT algorithm provides better reconstruction results compared to standard matrix completion.\n
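\n For reference, the baseline SVT iteration that the proposed J-SVT modifies looks roughly as follows (a standard singular value thresholding sketch; the dictionary coupling of J-SVT is not shown).\n

    import numpy as np

    def svt_complete(M_obs, mask, tau=5.0, delta=1.2, n_iter=200):
        """Singular value thresholding for matrix completion (Cai et al. style).
        M_obs holds the observed entries (zeros elsewhere); mask is 0/1."""
        Y = np.zeros_like(M_obs)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            X = (U * np.maximum(s - tau, 0.0)) @ Vt   # soft-threshold singular values
            Y += delta * mask * (M_obs - X)           # step on observed entries only
        return X

    rng = np.random.default_rng(1)
    M = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 40))  # rank-3 matrix
    mask = (rng.random(M.shape) < 0.5).astype(float)                 # 50% observed
    X_hat = svt_complete(M * mask, mask)
    print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))             # relative error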
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Weight moment conditions for L4 convergence of particle filters for unbounded test functions.\n \n \n \n\n\n \n Mbalawata, I. S.; and Särkkä, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1207-1211, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952421,\n  author = {I. S. Mbalawata and S. Särkkä},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Weight moment conditions for L4 convergence of particle filters for unbounded test functions},\n  year = {2014},\n  pages = {1207-1211},\n  abstract = {Particle filters are important approximation methods for solving probabilistic optimal filtering problems on nonlinear non-Gaussian dynamical systems. In this paper, we derive novel moment conditions for importance weights of sequential Monte Carlo based particle filters, which ensure the L4 convergence of particle filter approximations of unbounded test functions. This paper extends the particle filter convergence results of Hu & Schön & Ljung (2008) and Mbalawata & Särkkä (2014) by allowing for a general class of potentially unbounded importance weights and hence more general importance distributions. The result shows that provided that the seventh order moment is finite, then a particle filter for unbounded test functions with unbounded importance weights are ensured to converge.},\n  keywords = {approximation theory;Monte Carlo methods;particle filtering (numerical methods);weight moment conditions;L4 convergence;unbounded test functions;probabilistic optimal filtering problems;nonlinear nonGaussian dynamical systems;sequential Monte Carlo based particle filters;particle filter approximations;general importance distributions;Convergence;Mathematical model;Monte Carlo methods;Bayes methods;Approximation methods;Equations;Atmospheric measurements;Particle filter convergence;unbounded importance weights;moment conditions},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Particle filters are important approximation methods for solving probabilistic optimal filtering problems on nonlinear non-Gaussian dynamical systems. In this paper, we derive novel moment conditions for importance weights of sequential Monte Carlo based particle filters, which ensure the L4 convergence of particle filter approximations of unbounded test functions. This paper extends the particle filter convergence results of Hu & Schön & Ljung (2008) and Mbalawata & Särkkä (2014) by allowing for a general class of potentially unbounded importance weights and hence more general importance distributions. The result shows that, provided the seventh-order moment is finite, a particle filter for unbounded test functions with unbounded importance weights is ensured to converge.\n
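\n The setting can be made concrete with a minimal bootstrap particle filter that estimates the posterior mean of the unbounded test function f(x) = x; note that with the bootstrap (prior) proposal the Gaussian-likelihood weights below are bounded, whereas the paper's moment conditions cover more general, potentially unbounded weights.\n

    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 50, 1000
    # Scalar model: x_t = 0.9 x_{t-1} + v_t,  y_t = x_t + w_t
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.9 * x[t - 1] + rng.standard_normal()
    y = x + rng.standard_normal(T)

    particles, est = rng.standard_normal(N), np.zeros(T)
    for t in range(T):
        particles = 0.9 * particles + rng.standard_normal(N)   # propagate
        logw = -0.5 * (y[t] - particles) ** 2                  # importance weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est[t] = np.dot(w, particles)       # estimate of E[f(x_t)|y_1..t], f(x)=x
        particles = particles[rng.choice(N, size=N, p=w)]      # resample

    print(est[-5:])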
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Detrended fluctuation analysis for empirical mode decomposition based denoising.\n \n \n \n \n\n\n \n Mert, A.; and Akan, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1212-1216, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"DetrendedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952422,\n  author = {A. Mert and A. Akan},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Detrended fluctuation analysis for empirical mode decomposition based denoising},\n  year = {2014},\n  pages = {1212-1216},\n  abstract = {Empirical mode decomposition (EMD) is a recently proposed method to analyze non-linear and non-stationary time series by decomposing them into intrinsic mode functions (IMFs). One of the most popular application of such a method is noise elimination. EMD based denoising methods require a robust threshold to determine which IMFs are noise related components. In this study, detrended fluctuation analysis (DFA) is suggested to obtain such a threshold. The scaling exponential obtained by the root mean squared fluctuation is capable of distinguishing uncorrelated white Gaussian noise and anti-correlated signals. Therefore, in our method the slope of the scaling exponent is used as the threshold for EMD based denoising. IMFs with lower slope than the threshold are assumed to be noisy oscillations and excluded in the reconstruction step. The proposed method is tested on various signal to noise ratios (SNR) to show its denoising performance and reliability compared to several other methods.},\n  keywords = {Gaussian noise;signal denoising;signal reconstruction;time series;denoising performance;reliability;SNR;signal-to-noise ratio;reconstruction step;scaling exponent slope;anticorrelated signals;uncorrelated white Gaussian noise;root mean-squared fluctuation;scaling exponential;DFA;noise-related component;robust threshold;EMD-based denoising method;noise elimination;IMF;intrinsic mode functions;nonstationary time series;nonlinear time series;empirical mode decomposition-based denoising;detrended fluctuation analysis;Noise reduction;Signal to noise ratio;Empirical mode decomposition;Noise measurement;Time series analysis;Electroencephalography;Empirical mode decomposition;Detrended fluctuation analysis;Denoising;Thresholding},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924289.pdf},\n}\n\n
\n
\n\n\n
\n Empirical mode decomposition (EMD) is a recently proposed method to analyze non-linear and non-stationary time series by decomposing them into intrinsic mode functions (IMFs). One of the most popular applications of such a method is noise elimination. EMD based denoising methods require a robust threshold to determine which IMFs are noise related components. In this study, detrended fluctuation analysis (DFA) is suggested to obtain such a threshold. The scaling exponent, obtained as the slope of the root mean squared fluctuation on a log-log plot, is capable of distinguishing uncorrelated white Gaussian noise from anti-correlated signals. Therefore, in our method this scaling exponent is used as the threshold for EMD based denoising. IMFs with a slope lower than the threshold are assumed to be noisy oscillations and are excluded from the reconstruction step. The proposed method is tested at various signal-to-noise ratios (SNRs) to show its denoising performance and reliability compared to several other methods.\n
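\n A compact version of the DFA computation (a generic sketch; segment sizes and the linear detrending order are illustrative) is:\n

    import numpy as np

    def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
        """Return the DFA scaling exponent: the slope of log F(n) vs log n,
        where F(n) is the RMS fluctuation around local linear trends."""
        y = np.cumsum(x - np.mean(x))                 # integrated profile
        F = []
        for n in scales:
            m = len(y) // n
            segs = y[:m * n].reshape(m, n)
            t = np.arange(n)
            ms = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
                  for s in segs]                      # detrend each segment
            F.append(np.sqrt(np.mean(ms)))
        slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
        return slope

    # White Gaussian noise gives an exponent near 0.5; anti-correlated
    # signals give smaller values, which is what the threshold exploits.
    print(dfa_exponent(np.random.default_rng(0).standard_normal(4096)))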
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Nonlinear system identification using constellation based multiple model adaptive estimators.\n \n \n \n \n\n\n \n Martins, J. C.; Caeiro, J. J.; and Sousa, L. A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1217-1221, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"NonlinearPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952423,\n  author = {J. C. Martins and J. J. Caeiro and L. A. Sousa},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Nonlinear system identification using constellation based multiple model adaptive estimators},\n  year = {2014},\n  pages = {1217-1221},\n  abstract = {This paper describes the application of the constellation based multiple model adaptive estimation (CBMMAE) algorithm to the identification and parameter estimation of nonlinear systems. The method was successfully applied to the identification of linear systems both stationary and nonstationary, being able to fine tune its parameters. The method starts by establishing a minimum set of models that are geometrically arranged in the space spanned by the unknown parameters, and adopts a strategy to adaptively update the constellation models in the parameter space in order to find the model resembling the system under identification. By downscaling the models parameters the constellation is shrunk, reducing the uncertainty of the parameters estimation. Simulations are presented to exhibit the application of the framework and the performance of the algorithm to the identification and parameters estimation of nonlinear systems.},\n  keywords = {adaptive estimation;linear systems;nonlinear estimation;nonlinear systems;parameter estimation;state estimation;nonlinear system identification;constellation based multiple model adaptive estimators;CBMMAE algorithm;parameter estimation;linear system identification;parameter space;parameter identification;state estimation;Mathematical model;Adaptation models;Nonlinear systems;Vectors;Computational modeling;Equations;Noise;Dynamic systems identification;sub-optimal state estimation;multiple model adaptive estimator;parameter estimation;extended Kalman filter;unscented Kalman filter},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926179.pdf},\n}\n\n
\n
\n\n\n
\n This paper describes the application of the constellation based multiple model adaptive estimation (CBMMAE) algorithm to the identification and parameter estimation of nonlinear systems. The method was successfully applied to the identification of both stationary and nonstationary linear systems, and is able to fine-tune its parameters. The method starts by establishing a minimum set of models that are geometrically arranged in the space spanned by the unknown parameters, and adopts a strategy to adaptively update the constellation models in the parameter space in order to find the model resembling the system under identification. By downscaling the model parameters the constellation is shrunk, reducing the uncertainty of the parameter estimates. Simulations are presented to exhibit the application of the framework and the performance of the algorithm in the identification and parameter estimation of nonlinear systems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Iterative Label Propagation on facial images.\n \n \n \n \n\n\n \n Zoidi, O.; Tefas, A.; Nikolaidis, N.; and Pitas, I.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1222-1226, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"IterativePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952424,\n  author = {O. Zoidi and A. Tefas and N. Nikolaidis and I. Pitas},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Iterative Label Propagation on facial images},\n  year = {2014},\n  pages = {1222-1226},\n  abstract = {In this paper a novel method is introduced for propagating person identity labels on facial images in an iterative manner. The proposed method takes into account information about the data structure, obtained through clustering. This information is exploited in two ways: to regulate the similarity strength between the data and to indicate which samples should be selected for label propagation initialization. The proposed method can also find application in label propagation on multiple graphs. The performance of the proposed Iterative Label Propagation (ILP) method was evaluated on facial images extracted from stereo movies. Experimental results showed that the proposed method outperforms state of the art methods either when only one or both video channels are used for label propagation.},\n  keywords = {data structures;face recognition;iterative methods;pattern clustering;stereo image processing;iterative label propagation;facial images;person identity label;data structure;pattern clustering;similarity strength;label propagation initialization;ILP method;stereo movies;video channels;Motion pictures;Accuracy;Iterative methods;Clustering algorithms;Data structures;Semantics;Visualization;label propagation;multi-graph label propagation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925907.pdf},\n}\n\n
\n
\n\n\n
\n In this paper a novel method is introduced for propagating person identity labels on facial images in an iterative manner. The proposed method takes into account information about the data structure, obtained through clustering. This information is exploited in two ways: to regulate the similarity strength between the data and to indicate which samples should be selected for label propagation initialization. The proposed method can also find application in label propagation on multiple graphs. The performance of the proposed Iterative Label Propagation (ILP) method was evaluated on facial images extracted from stereo movies. Experimental results showed that the proposed method outperforms state-of-the-art methods when either one or both video channels are used for label propagation.\n
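\n A minimal graph label propagation loop of the kind the method iterates (a Zhou-style normalised propagation; the paper's ILP additionally reweights the graph and selects seed samples using cluster structure) is:\n

    import numpy as np

    def label_propagation(W, Y, alpha=0.9, n_iter=100):
        """Propagate labels Y (one-hot rows, zeros for unlabelled samples)
        over a similarity graph W and return predicted class indices."""
        d = W.sum(axis=1)
        S = W / np.sqrt(np.outer(d, d))              # D^(-1/2) W D^(-1/2)
        F = Y.astype(float).copy()
        for _ in range(n_iter):
            F = alpha * S @ F + (1 - alpha) * Y      # propagate + re-inject seeds
        return F.argmax(axis=1)

    X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])[:, None]   # two clear clusters
    W = np.exp(-((X - X.T) ** 2))
    np.fill_diagonal(W, 0.0)
    Y = np.zeros((6, 2)); Y[0, 0] = 1; Y[3, 1] = 1          # two labelled seeds
    print(label_propagation(W, Y))                          # -> [0 0 0 1 1 1]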
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the use of artificial neural network to predict denoised speech quality.\n \n \n \n \n\n\n \n Ben Aicha, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1227-1231, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952425,\n  author = {A. {Ben Aicha}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On the use of artificial neural network to predict denoised speech quality},\n  year = {2014},\n  pages = {1227-1231},\n  abstract = {Existing objective criteria for denoised speech assessment have as output one score indicating the quality of processed speech. Even it is well useful when it is about comparing denoised techniques between each others, they failed to give with enough accuracy an idea about the real corresponding Mean Opinion Score rate (MOS). In this paper, we propose a new methodology to estimate MOS score of denoised speech. Firstly, a statistical study of existed criteria based on boxplot and Principal Component Analysis (PCA) analysis yields to select the most relevant criteria. Then, an Artificial Neural Network (ANN) trained in selected objective criteria applied on the denoised speech is used. Unlike traditional criteria, the proposed method can give a significant objective score directly interpreted as an estimation of real MOS score. Experimental results show that the proposed method leads to more accurate estimation of the MOS score of the denoised speech.},\n  keywords = {neural nets;principal component analysis;signal denoising;speech processing;statistical study;ANN;PCA;principal component analysis;boxplot analysis;MOS score estimation;mean opinion score rate;denoised speech assessment techniques;objective criteria;denoised speech quality prediction;artificial neural network;Speech;Artificial neural networks;Speech enhancement;Speech coding;Signal to noise ratio;Distortion measurement;speech enhancement;speech assessment;MOS;ANN},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569907079.pdf},\n}\n\n
\n
\n\n\n
\n Existing objective criteria for denoised speech assessment output a single score indicating the quality of the processed speech. While such scores are useful for comparing denoising techniques with one another, they fail to give a sufficiently accurate idea of the corresponding real Mean Opinion Score (MOS). In this paper, we propose a new methodology to estimate the MOS of denoised speech. First, a statistical study of existing criteria based on boxplots and Principal Component Analysis (PCA) is used to select the most relevant criteria. Then, an Artificial Neural Network (ANN) trained on the selected objective criteria, applied to the denoised speech, is used. Unlike traditional criteria, the proposed method gives a meaningful objective score that can be directly interpreted as an estimate of the real MOS. Experimental results show that the proposed method leads to a more accurate estimation of the MOS of the denoised speech.\n
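\n The regression stage can be sketched as follows (hypothetical data and feature names; the real system feeds the criteria retained by the boxplot/PCA selection, trained against subjective MOS ratings):\n

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    criteria = rng.random((200, 3))     # e.g. segmental SNR, LLR, WSS (placeholder values)
    mos = 1 + 4 * criteria.mean(axis=1) + 0.1 * rng.standard_normal(200)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,),
                                       max_iter=2000, random_state=0))
    model.fit(criteria, mos)
    print(model.predict(criteria[:5]))  # directly interpretable as MOS estimates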
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic recognition of wideband telephone speech with limited amount of matched training data.\n \n \n \n \n\n\n \n Bauer, P.; Abel, J.; Fischer, V.; and Fingscheidt, T.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1232-1236, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952426,\n  author = {P. Bauer and J. Abel and V. Fischer and T. Fingscheidt},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic recognition of wideband telephone speech with limited amount of matched training data},\n  year = {2014},\n  pages = {1232-1236},\n  abstract = {Automatic speech recognition (ASR) for wideband (WB) telephone speech services must cope with a lack of matching speech databases for acoustic model training. This paper investigates the impact of mixing insufficient WB and additional narrowband (NB) speech training data. It turns out that decimation and interpolation techniques, reducing the bandwidth mismatch between the NB speech material in training and the WB speech data to be recognized, do not succeed in outperforming the pure NB ASR baseline. However, true WB ASR training supported by artificial bandwidth extension (ABE) reveals a performance gain. A new ABE approach that makes use of robust dynamic features and a Viterbi path decoder exploiting phonetic a priori knowledge proves to be superior. It yields a reduction of 1.9 % word error rate relative to the NB ASR baseline and 9.3 % relative to a WB ASR experiment trained on only a limited amount of WB speech data.},\n  keywords = {interpolation;speech coding;speech recognition;telephony;Viterbi decoding;phonetic;word error rate;Viterbi path decoder;robust dynamic features;ABE approach;artificial bandwidth extension;true WB ASR training;WB speech data;NB speech material;bandwidth mismatch reduction;decimation techniques;interpolation techniques;narrowband speech training data;insufficient WB mixing;acoustic model training;speech database matching;wideband telephone speech services;matched training data;automatic speech recognition;Speech;Speech recognition;Niobium;Hidden Markov models;Training;Acoustics;Training data;bandwidth extension;speech recognition},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569910419.pdf},\n}\n\n
\n
\n\n\n
\n Automatic speech recognition (ASR) for wideband (WB) telephone speech services must cope with a lack of matching speech databases for acoustic model training. This paper investigates the impact of mixing insufficient WB and additional narrowband (NB) speech training data. It turns out that decimation and interpolation techniques, reducing the bandwidth mismatch between the NB speech material in training and the WB speech data to be recognized, do not succeed in outperforming the pure NB ASR baseline. However, true WB ASR training supported by artificial bandwidth extension (ABE) reveals a performance gain. A new ABE approach that makes use of robust dynamic features and a Viterbi path decoder exploiting phonetic a priori knowledge proves to be superior. It yields a reduction of 1.9 % word error rate relative to the NB ASR baseline and 9.3 % relative to a WB ASR experiment trained on only a limited amount of WB speech data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effect of MPEG audio compression on vocoders used in statistical parametric speech synthesis.\n \n \n \n \n\n\n \n Bollepalli, B.; and Raitio, T.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1237-1241, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"EffectPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952427,\n  author = {B. Bollepalli and T. Raito},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Effect of MPEG audio compression on vocoders used in statistical parametric speech synthesis},\n  year = {2014},\n  pages = {1237-1241},\n  abstract = {This paper investigates the effect of MPEG audio compression on HMM-based speech synthesis using two state-of-the-art vocoders. Speech signals are first encoded with various compression rates and analyzed using the GlottHMM and STRAIGHT vocoders. Objective evaluation results show that the parameters of both vocoders gradually degrade with increasing compression rates, but with a clear increase in degradation with bit-rates of 32 kbit/s or less. Experiments with HMM-based synthesis with the two vocoders show that the degradation in quality is already perceptible with bit-rates of 32 kbit/s and both vocoders show similar trend in degradation with respect to compression ratio. The most perceptible artefacts induced by the compression are spectral distortion and reduced bandwidth, while prosody is better preserved.},\n  keywords = {audio coding;data compression;hidden Markov models;multimedia communication;speech coding;speech synthesis;vocoders;MPEG audio compression effect;statistical parametric speech synthesis;HMM-based speech synthesis;hidden Markov model;compression rates;GlottHMM vocoders;STRAIGHT vocoders;compression ratio;spectral distortion;bandwidth reduction;Phase change materials;Transform coding;Vocoders;Abstracts;Speech;Statistical parametric speech synthesis;HMM;MPEG;MP3;GlottHMM;STRAIGHT},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569911241.pdf},\n}\n\n
\n
\n\n\n
\n This paper investigates the effect of MPEG audio compression on HMM-based speech synthesis using two state-of-the-art vocoders. Speech signals are first encoded at various compression rates and analyzed using the GlottHMM and STRAIGHT vocoders. Objective evaluation results show that the parameters of both vocoders gradually degrade with increasing compression rates, with a clear increase in degradation at bit-rates of 32 kbit/s or less. Experiments on HMM-based synthesis with the two vocoders show that the degradation in quality is already perceptible at a bit-rate of 32 kbit/s, and both vocoders show a similar trend in degradation with respect to compression ratio. The most perceptible artefacts induced by the compression are spectral distortion and reduced bandwidth, while prosody is better preserved.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Source-based error mitigation for speech transmissions over erasure channels.\n \n \n \n \n\n\n \n López-Oller, D.; Gomez, A. M.; and Pérez-Córdoba, J. L.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1242-1246, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Source-basedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952428,\n  author = {D. López-Oller and A. M. Gomez and J. L. Pérez-Córdoba},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Source-based error mitigation for speech transmissions over erasure channels},\n  year = {2014},\n  pages = {1242-1246},\n  abstract = {In this paper we present a new mitigation technique for lost speech frames transmitted over loss-prone packet networks. It is based on an MMSE estimation from the last received frame, which provides replacements not only for the LPC coefficients (envelope) but also for the residual signal (excitation). Although the method is codec-independent, it requires a VQ-quantization of the LPC coefficients and the residual. Thus, in this paper we also propose a novel VQ quantization scheme for the residual signal based on the minimization of the squared synthesis error. The performance of our proposal is evaluated over the iLBC codec in terms of speech quality using PESQ and MUSHRA tests. This new mitigation technique achieves a noticeable improvement over the legacy codec under adverse channel conditions with no increase of bitrate and without any delay in the decoding process.},\n  keywords = {decoding;Internet telephony;least mean squares methods;linear predictive coding;speech coding;source-based error mitigation;speech transmissions;erasure channels;mitigation technique;speech frames;loss-prone packet networks;MMSE estimation;LPC coefficients;residual signal;VQ quantization scheme;squared synthesis error minimization;iLBC codec;speech quality;PESQ test;MUSHRA test;legacy codec;adverse channel conditions;decoding process;voice-over-IP technology;Speech;Packet loss;Databases;Codecs;Speech coding;Proposals;speech coding;frame erasure;packet loss concealment;iLBC;LPC residual mitigation;MMSE;speech source modeling},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569918769.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we present a new mitigation technique for lost speech frames transmitted over loss-prone packet networks. It is based on an MMSE estimation from the last received frame, which provides replacements not only for the LPC coefficients (envelope) but also for the residual signal (excitation). Although the method is codec-independent, it requires a VQ-quantization of the LPC coefficients and the residual. Thus, in this paper we also propose a novel VQ quantization scheme for the residual signal based on the minimization of the squared synthesis error. The performance of our proposal is evaluated over the iLBC codec in terms of speech quality using PESQ and MUSHRA tests. This new mitigation technique achieves a noticeable improvement over the legacy codec under adverse channel conditions with no increase of bitrate and without any delay in the decoding process.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modified sphere decoding algorithms and their applications to some sparse approximation problems.\n \n \n \n \n\n\n \n Dymarski, P.; and Romaniuk, R.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1247-1251, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ModifiedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952429,\n  author = {P. Dymarski and R. Romaniuk},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Modified sphere decoding algorithms and their applications to some sparse approximation problems},\n  year = {2014},\n  pages = {1247-1251},\n  abstract = {This work presents modified sphere decoding (MSD) algorithms for optimal solution of some sparse signal modeling problems. These problems include multi-pulse excitation signal calculations for multi-pulse excitation (MPE), algebraic code excited linear predictive (ACELP) and multi-pulse maximum likelihood quantization (MP-MLQ) speech coders. With the proposed MSD algorithms, the optimal solution of these problems can be obtained at substantially lower computational cost compared with full search algorithm. The MSD algorithms are compared with a series of suboptimal approaches in sparse approximation of correlated Gaussian signals and low delay speech coding tasks.},\n  keywords = {algebraic codes;decoding;maximum likelihood estimation;quantisation (signal);search problems;speech codecs;speech coding;modified sphere decoding algorithms;sparse approximation problems;MSD algorithms;sparse signal modeling problems;multipulse excitation signal calculations;algebraic code excited linear predictive;multi-pulse maximum likelihood quantization;MP-MLQ speech coders;MSD algorithms;suboptimal approaches;sparse approximation;correlated Gaussian signals;low delay speech coding;Vectors;Decoding;Lattices;Approximation algorithms;Signal processing algorithms;Approximation methods;Speech coding;sphere decoder;lattice;sparse approximation;speech coding;CELP;MP-MLQ;ACELP},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569919013.pdf},\n}\n\n
\n
\n\n\n
\n This work presents modified sphere decoding (MSD) algorithms for the optimal solution of some sparse signal modeling problems. These problems include multi-pulse excitation signal calculations for multi-pulse excitation (MPE), algebraic code excited linear predictive (ACELP) and multi-pulse maximum likelihood quantization (MP-MLQ) speech coders. With the proposed MSD algorithms, the optimal solution of these problems can be obtained at a substantially lower computational cost compared with a full search algorithm. The MSD algorithms are compared with a series of suboptimal approaches in sparse approximation of correlated Gaussian signals and in low delay speech coding tasks.\n
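\n For orientation, a plain depth-first sphere decoder for a finite-alphabet least-squares problem is sketched below; the paper's MSD variants adapt this kind of search to the sparse excitation structure of the coders, which is not reflected here.\n

    import numpy as np

    def sphere_decode(H, y, alphabet=(-1.0, 1.0)):
        """Exact solution of min_s ||y - H s||^2 over a finite alphabet
        by depth-first search with radius pruning."""
        Q, R = np.linalg.qr(H)
        z, n = Q.T @ y, H.shape[1]
        best = {"cost": np.inf, "s": None}

        def search(level, partial, s):
            if partial >= best["cost"]:
                return                                  # prune: outside the sphere
            if level < 0:
                best["cost"], best["s"] = partial, s.copy()
                return
            for a in alphabet:
                s[level] = a
                resid = z[level] - R[level, level:] @ s[level:]
                search(level - 1, partial + resid ** 2, s)

        search(n - 1, 0.0, np.zeros(n))
        return best["s"], best["cost"]

    H = np.random.default_rng(0).standard_normal((6, 4))
    s_true = np.array([1.0, -1.0, 1.0, 1.0])
    print(sphere_decode(H, H @ s_true)[0])              # recovers s_true (noise-free)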
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Speech rate determination by vowel detection on the modulated energy envelope.\n \n \n \n \n\n\n \n Dekens, T.; Martens, H.; Van Nuffelen, G.; De Bodt, M.; and Verhelst, W.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1252-1256, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"SpeechPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952430,\n  author = {T. Dekens and H. Martens and G. {Van Nuffelen} and M. {De Bodt} and W. Verhelst},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Speech rate determination by vowel detection on the modulated energy envelope},\n  year = {2014},\n  pages = {1252-1256},\n  abstract = {In this paper we propose a new algorithm to detect vowels in a speech utterance and infer the rate at which speech was produced. To achieve this we determine a smooth trajectory that corresponds to a high frequency energy envelope, modulated by the low frequency energy content. Peak picking performed on this trajectory gives an estimate of the number of vowels in the utterance. To dispose of falsely detected vowels, a peak pruning post-processing step is incorporated. Experimental results show that the proposed algorithm is more accurate than the two speech rate determination algorithms on which it was inspired.},\n  keywords = {speech processing;speech recognition;peak pruning post-processing step;low frequency energy content;high frequency energy envelope;speech utterance;modulated energy envelope;vowel detection;speech rate determination;Speech;Trajectory;Correlation;Smoothing methods;Frequency modulation;Speech recognition;Estimation;Speech rate determination;vowel detection},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922087.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we propose a new algorithm to detect vowels in a speech utterance and infer the rate at which the speech was produced. To achieve this we determine a smooth trajectory that corresponds to a high frequency energy envelope, modulated by the low frequency energy content. Peak picking performed on this trajectory gives an estimate of the number of vowels in the utterance. To discard falsely detected vowels, a peak pruning post-processing step is incorporated. Experimental results show that the proposed algorithm is more accurate than the two speech rate determination algorithms that inspired it.\n
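\n The envelope-plus-peak-picking idea can be sketched as below (band edges, smoothing length and pruning thresholds are illustrative, not the paper's settings):\n

    import numpy as np
    from scipy.signal import butter, sosfilt, find_peaks

    def estimate_speech_rate(x, fs):
        """Count envelope peaks as vowel nuclei; return peaks per second."""
        sos = butter(4, (300, 2500), btype="bandpass", fs=fs, output="sos")
        env = np.abs(sosfilt(sos, x))                  # rectified band energy
        win = int(0.05 * fs)                           # ~50 ms smoothing
        env = np.convolve(env, np.ones(win) / win, mode="same")
        peaks, _ = find_peaks(env, distance=int(0.12 * fs),
                              height=0.3 * env.max())  # prune spurious peaks
        return len(peaks) / (len(x) / fs)

    fs = 16000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 500 * t) * np.sin(2 * np.pi * 2 * t) ** 2  # 4 bursts/s
    print(estimate_speech_rate(x, fs))                 # approximately 4.0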
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Watermarking of speech signals based on formant enhancement.\n \n \n \n \n\n\n \n Wang, S.; and Unoki, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1257-1261, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"WatermarkingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952431,\n  author = {S. Wang and M. Unoki},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Watermarking of speech signals based on formant enhancement},\n  year = {2014},\n  pages = {1257-1261},\n  abstract = {This paper proposes a speech watermarking method based on formant enhancement. The line spectral frequencies (LSFs) which can stably represent the formants were firstly derived from the speech signal by linear prediction (LP) analysis. A pair of LSFs were then symmetrically controlled to enhance formants for watermark embedding. Two kinds of objective experiments regarding inaudibility and robustness were carried out to evaluate the proposed method in comparison with other typical methods. The results indicated that the proposed method could not only satisfy inaudibility but also provide good robustness against different speech codecs and general processing, while the other methods encountered problems.},\n  keywords = {audio watermarking;linear predictive coding;spectral analysis;speech coding;speech codecs;robustness;inaudibility;linear prediction analysis;line spectral frequencies;speech watermarking method;formant enhancement;speech signals;Watermarking;Robustness;Speech;Bandwidth;Decision support systems;Speech codecs;Speech enhancement;Speech watermarking;formant enhancement;line spectral frequencies;inaudibility;robustness},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924405.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a speech watermarking method based on formant enhancement. The line spectral frequencies (LSFs), which can stably represent the formants, were first derived from the speech signal by linear prediction (LP) analysis. A pair of LSFs was then symmetrically controlled to enhance formants for watermark embedding. Two kinds of objective experiments, regarding inaudibility and robustness, were carried out to evaluate the proposed method in comparison with other typical methods. The results indicated that the proposed method could not only satisfy inaudibility but also provide good robustness against different speech codecs and general processing, while the other methods encountered problems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A low-distortion noise canceller with a novel stepsize control and conditional cancellation.\n \n \n \n\n\n \n Sugiyama, A.; and Miyahara, R.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1262-1266, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952432,\n  author = {A. Sugiyama and R. Miyaharay},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A lowdistortion noise canceller with a novel stepsize control and conditional cancellation},\n  year = {2014},\n  pages = {1262-1266},\n  abstract = {This paper proposes a low-distortion noise canceller with a novel stepsize control and conditional cancellation. The coefficient adaptation stepsize is controlled by an estimated signal-to-noise ratio (SNR) at the primary input and a relative coefficient magnitude normalized by the reference power. The SNR is estimated based on the noise replica and the output, and converted to a stepsize by an exponential function. This stepsize provides robustness to interference by the desired speech. Conditional cancellation guarantees that the noisy signal power is reduced by noise-replica subtraction. Comparison of the proposed noise canceller with five popular state-of-the-art commercial smartphones demonstrates good enhanced-signal quality with as much as 0.6 PESQ improvement.},\n  keywords = {adaptive filters;interference suppression;smart phones;speech enhancement;PESQ improvement;signal quality;smartphones;noise-replica subtraction;noisy signal power;exponential function;reference power;coefficient magnitude;SNR;signal-to-noise ratio;coefficient adaptation stepsize;conditional cancellation;stepsize control;low-distortion noise canceller;Speech;Noise cancellation;Signal to noise ratio;Microphones;Speech enhancement;Smart phones;Two microphone;Dual microphone;Low distortion;Noise canceller;Stepsize control},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n This paper proposes a low-distortion noise canceller with a novel stepsize control and conditional cancellation. The coefficient adaptation stepsize is controlled by an estimated signal-to-noise ratio (SNR) at the primary input and a relative coefficient magnitude normalized by the reference power. The SNR is estimated based on the noise replica and the output, and converted to a stepsize by an exponential function. This stepsize provides robustness to interference by the desired speech. Conditional cancellation guarantees that the noisy signal power is reduced by noise-replica subtraction. Comparison of the proposed noise canceller with five popular state-of-the-art commercial smartphones demonstrates good enhanced-signal quality with as much as 0.6 PESQ improvement.\n
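\n The exponential SNR-to-stepsize mapping can be illustrated in a few lines (the constants are made up; the paper's control additionally uses a normalised coefficient magnitude):\n

    import numpy as np

    def snr_controlled_stepsize(snr_db, mu_max=0.5, alpha=0.1):
        """High estimated SNR (desired speech active) -> small stepsize,
        so adaptation is slowed when speech would disturb the noise path."""
        return mu_max * np.exp(-alpha * max(snr_db, 0.0))

    for snr in (0, 10, 20, 30):
        print(snr, "dB ->", round(snr_controlled_stepsize(snr), 4))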
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n LBP based recursive averaging for babble noise reduction applied to automatic speech recognition.\n \n \n \n \n\n\n \n Zhu, Q.; and Soraghan, J. J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1267-1271, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"LBPPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952453,\n  author = {Q. Zhu and J. J. Soraghan},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {LBP based recursive averaging for babble noise reduction applied to automatic speech recognition},\n  year = {2014},\n  pages = {1267-1271},\n  abstract = {Improved automatic speech recognition (ASR) in babble noise conditions continues to pose major challenges. In this paper, we propose a new local binary pattern (LBP) based speech presence indicator (SPI) to distinguish speech and non-speech components. Babble noise is subsequently estimated using recursive averaging. In the speech enhancement system optimally-modified log-spectral amplitude (OMLSA) uses the estimated noise spectrum obtained from the LBP based recursive averaging (LRA). The performance of the LRA speech enhancement system is compared to the conventional improved minima controlled recursive averaging (IMCRA). Segmental SNR improvements and perceptual evaluations of speech quality (PESQ) scores show that LRA offers superior babble noise reduction compared to the IMCRA system. Hidden Markov model (HMM) based word recognition results show a corresponding improvement.},\n  keywords = {hidden Markov models;speech enhancement;speech recognition;local binary pattern;babble noise reduction;automatic speech recognition;speech presence indicator;speech enhancement system;optimally-modified log-spectral amplitude;improved minima controlled recursive averaging;speech quality perceptual evaluations;hidden Markov model;word recognition;Speech;Speech enhancement;Hidden Markov models;Speech recognition;Signal to noise ratio;Noise measurement;1-D LBP;noise estimation;noise reduction;speech recognition;HMM},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925213.pdf},\n}\n\n
\n
\n\n\n
\n Improved automatic speech recognition (ASR) in babble noise conditions continues to pose major challenges. In this paper, we propose a new local binary pattern (LBP) based speech presence indicator (SPI) to distinguish speech and non-speech components. Babble noise is subsequently estimated using recursive averaging. In the speech enhancement system, the optimally-modified log-spectral amplitude (OMLSA) estimator uses the noise spectrum obtained from the LBP based recursive averaging (LRA). The performance of the LRA speech enhancement system is compared to the conventional improved minima controlled recursive averaging (IMCRA). Segmental SNR improvements and perceptual evaluation of speech quality (PESQ) scores show that LRA offers superior babble noise reduction compared to the IMCRA system. Hidden Markov model (HMM) based word recognition results show a corresponding improvement.\n
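\n A common way to form 1-D LBP codes, which a speech presence indicator of this kind builds on, is thresholding each sample's neighbours against the sample itself (a generic formulation; the paper's exact variant may differ):\n

    import numpy as np

    def lbp_1d(x, radius=4):
        """Encode each sample by comparing its 2*radius neighbours with the
        centre value; returns one (2*radius)-bit code per interior sample."""
        n = len(x)
        codes = np.zeros(n - 2 * radius, dtype=int)
        for i in range(radius, n - radius):
            neigh = np.concatenate((x[i - radius:i], x[i + 1:i + 1 + radius]))
            bits = (neigh >= x[i]).astype(int)
            codes[i - radius] = int("".join(map(str, bits)), 2)
        return codes

    print(lbp_1d(np.sin(np.linspace(0, 6.28, 64)))[:8])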
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A speaker rediarization scheme for improving diarization in large two-speaker telephone datasets.\n \n \n \n \n\n\n \n Ghaemmaghami, H.; Dean, D.; and Sridharan, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1272-1276, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952454,\n  author = {H. Ghaemmaghami and D. Dean and S. Sridharan},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A speaker rediarization scheme for improving diarization in large two-speaker telephone datasets},\n  year = {2014},\n  pages = {1272-1276},\n  abstract = {In this paper we propose a novel scheme for carrying out speaker diarization in an iterative manner. We aim to show that the information obtained through the first pass of speaker diarization can be reused to refine and improve the original diarization results. We call this technique speaker rediarization and demonstrate the practical application of our rediarization algorithm using a large archive of two-speaker telephone conversation recordings. We use the NIST 2008 SRE summed telephone corpora for evaluating our speaker rediarization system. This corpus contains recurring speaker identities across independent recording sessions that need to be linked across the entire corpus. We show that our speaker rediarization scheme can take advantage of inter-session speaker information, linked in the initial diarization pass, to achieve a 30% relative improvement over the original diarization error rate (DER) after only two iterations of rediarization.},\n  keywords = {error statistics;iterative methods;speaker recognition;speaker rediarization scheme;two-speaker telephone datasets;two-speaker telephone conversation recordings;NIST 2008 SRE summed telephone corpora;intersession speaker information;diarization error rate;DER;Joining processes;Density estimation robust algorithm;Adaptation models;NIST;Measurement;Computational modeling;Hidden Markov models;Speaker rediarization;diarization;speaker linking;complete-linkage clustering;cross-likelihood ratio},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926145.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we propose a novel scheme for carrying out speaker diarization in an iterative manner. We aim to show that the information obtained through the first pass of speaker diarization can be reused to refine and improve the original diarization results. We call this technique speaker rediarization and demonstrate the practical application of our rediarization algorithm using a large archive of two-speaker telephone conversation recordings. We use the NIST 2008 SRE summed telephone corpora for evaluating our speaker rediarization system. This corpus contains recurring speaker identities across independent recording sessions that need to be linked across the entire corpus. We show that our speaker rediarization scheme can take advantage of inter-session speaker information, linked in the initial diarization pass, to achieve a 30% relative improvement over the original diarization error rate (DER) after only two iterations of rediarization.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Causality-constrained multiple shift sequential matrix diagonalisation for parahermitian matrices.\n \n \n \n \n\n\n \n Corr, J.; Thompson, K.; Weiss, S.; McWhirter, J. G.; and Proudler, I. K.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1277-1281, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Causality-constrainedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952455,\n  author = {J. Corr and K. Thompson and S. Weiss and J. G. McWhirter and I. K. Proudler},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Causality-constrained multiple shift sequential matrix diagonalisation for parahermitian matrices},\n  year = {2014},\n  pages = {1277-1281},\n  abstract = {This paper introduces a causality constrained sequential matrix diagonalisation (SMD) algorithm, which generates a causal paraunitary transformation that approximately diagonalises and spectrally majorises a parahermitian matrix, and can be used to determine a polynomial eigenvalue decomposition. This algorithm builds on a multiple shift technique which speeds up diagonalisation per iteration step based on a particular search space, which is constrained to permit a maximum number of causal time shifts. The results presented in this paper show the performance in comparison to existing algorithms, in particular an unconstrained multiple shift SMD algorithm, from which our proposed method derives.},\n  keywords = {eigenvalues and eigenfunctions;iterative methods;matrix decomposition;causality-constrained multiple shift sequential matrix diagonalisation;parahermitian matrices;causal paraunitary transformation;polynomial eigenvalue decomposition;multiple shift technique;search space;causal time shifts;unconstrained multiple shift SMD algorithm;diagonalisation per iteration step;Jacobian matrices;Approximation algorithms;Polynomials;Matrix decomposition;Delays;Eigenvalues and eigenfunctions;Broadband communication},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569927031.pdf},\n}\n\n
\n
\n\n\n
\n This paper introduces a causality constrained sequential matrix diagonalisation (SMD) algorithm, which generates a causal paraunitary transformation that approximately diagonalises and spectrally majorises a parahermitian matrix, and can be used to determine a polynomial eigenvalue decomposition. This algorithm builds on a multiple shift technique which speeds up diagonalisation per iteration step based on a particular search space, which is constrained to permit a maximum number of causal time shifts. The results presented in this paper show the performance in comparison to existing algorithms, in particular an unconstrained multiple shift SMD algorithm, from which our proposed method derives.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimum discrete phase-only transmit beamforming with antenna selection.\n \n \n \n \n\n\n \n Demir, Ö. T.; and Tuncer, T. E.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1282-1286, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OptimumPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952456,\n  author = {Ö. T. Demir and T. E. Tuncer},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Optimum discrete phase-only transmit beamforming with antenna selection},\n  year = {2014},\n  pages = {1282-1286},\n  abstract = {Phase-only beamforming is used in radar and communication systems due to its certain advantages. Antenna selection becomes an important problem as the number of antennas becomes larger than the number of transmit-receive chains. In this paper, discrete single group multicast transmit phase-only beamformer design with antenna subset selection is considered. The problem is converted into linear form and solved efficiently by using mixed integer linear programming to find the optimum subset of antennas and beamformer coefficients. Several simulations are done and it is shown that the proposed approach is an effective and efficient method of subarray transmit beamformer design.},\n  keywords = {antenna arrays;array signal processing;integer programming;linear programming;optimum discrete phase only transmit beamforming;communication systems;radar systems;transmit receive chains;antenna subset selection;mixed integer linear programming;beamformer coefficients;antenna coefficients;subarray transmit beamformer design;antenna arrays;Arrays;Vectors;Transmitting antennas;Antenna arrays;Array signal processing;Optimization;Transmit beamformer;discrete beamformer;mixed integer linear programming;antenna selection},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569911017.pdf},\n}\n\n
\n
\n\n\n
\n Phase-only beamforming is used in radar and communication systems due to certain practical advantages. Antenna selection becomes an important problem as the number of antennas grows larger than the number of transmit-receive chains. In this paper, discrete single-group multicast transmit phase-only beamformer design with antenna subset selection is considered. The problem is converted into linear form and solved efficiently by using mixed integer linear programming to find the optimum subset of antennas and the beamformer coefficients. Several simulations show that the proposed approach is an effective and efficient method for subarray transmit beamformer design.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive re-weighting homotopy for sparse beamforming.\n \n \n \n \n\n\n \n Neto, F. G. A.; Nascimento, V. H.; Zakharov, Y. V.; and de Lamare , R. C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1287-1291, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AdaptivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952457,\n  author = {F. G. A. Neto and V. H. Nascimento and Y. V. Zakharov and R. C. {de Lamare}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive re-weighting homotopy for sparse beamforming},\n  year = {2014},\n  pages = {1287-1291},\n  abstract = {In this paper, a complex adaptive re-weighting algorithm based on the homotopy technique is developed and used for beamforming. A multi-candidate scheme is also proposed and incorporated into the adaptive re-weighting homotopy algorithm to choose the regularization factor and improve the signal-to-interference plus noise (SINR) performance. The proposed algorithm is used to minimize the degradation caused by sparsity in arrays with faulty sensors, or when the required degrees of freedom to suppress interference is significantly less than the number of sensors. Simulations illustrate the algorithm's performance.},\n  keywords = {array signal processing;interference suppression;sensors;sparse beamforming;complex adaptive reweighting algorithm;multicandidate scheme;adaptive reweighting homotopy algorithm;regularization factor;signal-to-interference plus noise performance;SINR performance;faulty sensors;degree of freedom;interference suppression;Interference;Signal to noise ratio;Sensor arrays;Array signal processing;Vectors;Signal processing algorithms;Multi-candidate re-weighting homotopy;beamforming;adaptive algorithms},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924589.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a complex adaptive re-weighting algorithm based on the homotopy technique is developed and used for beamforming. A multi-candidate scheme is also proposed and incorporated into the adaptive re-weighting homotopy algorithm to choose the regularization factor and improve the signal-to-interference-plus-noise ratio (SINR) performance. The proposed algorithm is used to minimize the degradation caused by sparsity in arrays with faulty sensors, or when the number of degrees of freedom required to suppress interference is significantly smaller than the number of sensors. Simulations illustrate the algorithm's performance.\n
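\n The re-weighting idea, stripped of the homotopy machinery and the complex-valued beamforming setting, reduces to iteratively re-weighted l1 minimisation; the real-valued sketch below uses an ISTA inner solver and is only meant to show the weight update, not the paper's algorithm.\n

    import numpy as np

    def reweighted_l1(A, b, lam=0.1, eps=1e-3, outer=5, inner=200):
        """Solve a weighted LASSO, update weights w_i = 1/(|x_i|+eps), repeat."""
        x = np.zeros(A.shape[1])
        L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant
        w = np.ones_like(x)
        for _ in range(outer):
            for _ in range(inner):                    # ISTA on the weighted problem
                x = x - A.T @ (A @ x - b) / L
                x = np.sign(x) * np.maximum(np.abs(x) - lam * w / L, 0.0)
            w = 1.0 / (np.abs(x) + eps)               # sharpen the sparsity pattern
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 60))
    x0 = np.zeros(60); x0[[5, 17, 42]] = (1.0, -2.0, 1.5)
    x_hat = reweighted_l1(A, A @ x0)
    print(np.flatnonzero(np.abs(x_hat) > 0.1))        # expected support: [5 17 42]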
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Distributed GEVD-based signal subspace estimation in a fully-connected wireless sensor network.\n \n \n \n \n\n\n \n Hassani, A.; Bertrand, A.; and Moonen, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1292-1296, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"DistributedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952458,\n  author = {A. Hassani and A. Bertrand and M. Moonen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Distributed GEVD-based signal subspace estimation in a fully-connected wireless sensor network},\n  year = {2014},\n  pages = {1292-1296},\n  abstract = {In this paper, we present a distributed algorithm for network-wide signal subspace estimation in a fully-connected wireless sensor network with multi-sensor nodes. We consider scenarios where the noise field is spatially correlated between the nodes. Therefore, rather than an eigenvalue decomposition (EVD-) based approach, we apply a generalized EVD (GEVD-) based approach which allows to directly incorporate the (estimated) noise covariance. Furthermore, the GEVD is also immune to unknown per-channel scalings. We first use a distributed algorithm to estimate the principal generalized eigenvectors (GEVCs) of a pair of network-wide sensor signal covariance matrices, without explicitly constructing these matrices, as this would inherently require data centralization. We then apply a transformation at each node to extract the actual signal subspace estimate from the principal GEVCs. The resulting distributed algorithm can reduce the per-node communication and computational cost. We demonstrate the effectiveness of the algorithm by means of numerical simulations.},\n  keywords = {covariance matrices;distributed algorithms;eigenvalues and eigenfunctions;signal processing;wireless sensor networks;fully-connected wireless sensor network;network-wide signal subspace estimation;generalized eigenvalue decomposition;noise covariance;principal generalized eigenvectors;GEVC;sensor signal covariance matrices;data centralization;GEVD estimation;distributed algorithm;numerical simulations;Estimation;Covariance matrices;Noise;Wireless sensor networks;Distributed algorithms;Eigenvalues and eigenfunctions;Wireless sensor network (WSN);distributed estimation;signal subspace estimation;generalized eigenvalue decomposition (GEVD)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921765.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present a distributed algorithm for network-wide signal subspace estimation in a fully-connected wireless sensor network with multi-sensor nodes. We consider scenarios where the noise field is spatially correlated between the nodes. Therefore, rather than an eigenvalue decomposition (EVD-) based approach, we apply a generalized EVD (GEVD-) based approach, which allows the (estimated) noise covariance to be incorporated directly. Furthermore, the GEVD is also immune to unknown per-channel scalings. We first use a distributed algorithm to estimate the principal generalized eigenvectors (GEVCs) of a pair of network-wide sensor signal covariance matrices, without explicitly constructing these matrices, as this would inherently require data centralization. We then apply a transformation at each node to extract the actual signal subspace estimate from the principal GEVCs. The resulting distributed algorithm can reduce the per-node communication and computational cost. We demonstrate the effectiveness of the algorithm by means of numerical simulations.\n
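A centralized sketch of the quantity being estimated may help: the principal generalized eigenvectors of the pair of signal and noise covariance matrices span the GEVD-based signal subspace. The mixing model and dimensions below are assumptions; the paper's point is computing this without centralizing the data.

# Centralized GEVD-based signal subspace estimation (reference computation).
import numpy as np
from scipy.linalg import eigh, subspace_angles

rng = np.random.default_rng(1)
M, Q, T = 12, 2, 5000                     # sensors, sources, snapshots (assumed)
Amix = rng.standard_normal((M, Q))        # unknown mixing matrix
Ln = 0.4 * rng.standard_normal((M, M))    # spatially correlated noise shaping
Y = Amix @ rng.standard_normal((Q, T)) + Ln @ rng.standard_normal((M, T))
Nn = Ln @ rng.standard_normal((M, T))     # noise-only segment

Ryy = Y @ Y.T / T                         # signal-plus-noise covariance
Rnn = Nn @ Nn.T / T                       # noise covariance
vals, vecs = eigh(Ryy, Rnn)               # generalized EVD (ascending order)
X = vecs[:, -Q:]                          # Q principal GEVCs
sig_sub = (Ryy - Rnn) @ X                 # signal subspace estimate

print(np.rad2deg(subspace_angles(sig_sub, Amix)).max())  # small angle = good fit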
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Design of piecewise linear polyphase sequences with good correlation properties.\n \n \n \n \n\n\n \n Soltanalian, M.; Stoica, P.; Naghsh, M. M.; and De Maio, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1297-1301, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"DesignPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952459,\n  author = {M. Soltanalian and P. Stoica and M. M. Naghsh and A. {De Maio}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Design of piecewise linear polyphase sequences with good correlation properties},\n  year = {2014},\n  pages = {1297-1301},\n  abstract = {In this paper, we devise a computational approach for designing polyphase sequences with two key properties; (i) a phase argument which is piecewise linear, and (ii) an impulse-like autocorrelation. The proposed approach relies on fast Fourier transform (FFT) operations and thus can be used efficiently to design sequences with a large length or alphabet size. Moreover, using the suggested method, one can construct many new such polyphase sequences which were not known and/or could not be constructed by the previous formulations in the literature. Several numerical examples are provided to show the performance of the proposed design framework in different scenarios.},\n  keywords = {codes;fast Fourier transforms;piecewise linear techniques;radar;piecewise linear polyphase sequences;good correlation properties;impulse-like autocorrelation;fast Fourier transform;waveform design;radar codes;peak-to-average power ration;Correlation;Vectors;Optimization;Peak to average power ratio;Radar;Educational institutions;Indexes;Autocorrelation;peak-to-average-power ratio (PAR);polyphase sequences;radar codes;waveform design},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924097.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we devise a computational approach for designing polyphase sequences with two key properties: (i) a phase argument which is piecewise linear, and (ii) an impulse-like autocorrelation. The proposed approach relies on fast Fourier transform (FFT) operations and thus can be used efficiently to design sequences with a large length or alphabet size. Moreover, using the suggested method, one can construct many new such polyphase sequences that were previously unknown and/or could not be constructed with earlier formulations in the literature. Several numerical examples are provided to show the performance of the proposed design framework in different scenarios.\n
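The "impulse-like autocorrelation" property is easy to verify with the same FFT machinery the design relies on. The Frank sequence below is a classical polyphase stand-in, not a sequence produced by the proposed method.

# Periodic autocorrelation check via FFT: r = IFFT(|FFT(s)|^2).
import numpy as np

M = 8
n, k = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
s = np.exp(2j * np.pi * n * k / M).ravel()     # length-64 Frank sequence

r = np.fft.ifft(np.abs(np.fft.fft(s)) ** 2)    # periodic autocorrelation
print(np.round(np.abs(r), 6))                  # impulse at lag 0, ~0 elsewhere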
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A probabilistic interpretation of geometric active contour segmentation.\n \n \n \n \n\n\n \n De Vylder, J.; Van Haerenborgh, D.; Aelterman, J.; and Philips, W.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1302-1306, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952460,\n  author = {J. {De Vylder} and D. {Van Haerenborgh} and J. Aelterman and W. Philips},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A probabilistic interpretation of geometric active contour segmentation},\n  year = {2014},\n  pages = {1302-1306},\n  abstract = {Active contours or snakes are widely used for segmentation and tracking. These techniques require the minimization of an energy function, which is typically a linear combination of a data-fit term and regularization terms. This energy function can be tailored to the intrinsic object and image features. This can be done by either modifying the actual terms or by changing the weighting parameters of the terms. There is, however, no sure way to set these terms and weighting parameters optimally for a given application. Although heuristic techniques exist for parameter estimation, often trial and error is used. In this paper, we propose a probabilistic interpretation to segmentation. This approach results in a generalization of state of the art active contour segmentation. In the proposed framework all parameters have a statistical interpretation, thus avoiding ad hoc parameter settings.},\n  keywords = {image segmentation;inverse problems;minimisation;probability;probabilistic interpretation;parameter estimation;image features;intrinsic object;regularization terms;data fit term;energy function;geometric active contour segmentation;Active contours;Image segmentation;Noise;Shape;Probabilistic logic;Optimization;Computational modeling;Active contours;segmentation;convex optimization;statistical estimator},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923945.pdf},\n}\n\n
\n
\n\n\n
\n Active contours or snakes are widely used for segmentation and tracking. These techniques require the minimization of an energy function, which is typically a linear combination of a data-fit term and regularization terms. This energy function can be tailored to the intrinsic object and image features, either by modifying the actual terms or by changing the weighting parameters of the terms. There is, however, no sure way to set these terms and weighting parameters optimally for a given application. Although heuristic techniques exist for parameter estimation, trial and error is often used. In this paper, we propose a probabilistic interpretation of segmentation. This approach results in a generalization of state-of-the-art active contour segmentation. In the proposed framework all parameters have a statistical interpretation, thus avoiding ad hoc parameter settings.\n
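For concreteness, here is a minimal sketch of the kind of energy such methods minimize: a data-fit term plus a length regularizer weighted by lam. The image and contour are toy assumptions, and the paper's contribution is giving such weights a statistical interpretation rather than hand-tuning them.

# Evaluating a simple contour energy: data fit + lam * contour length.
import numpy as np

def contour_energy(img, pts, lam=0.5):
    """Energy of a closed contour given as (N, 2) vertices (row, col)."""
    rr, cc = pts[:, 0].astype(int), pts[:, 1].astype(int)
    data_fit = img[rr, cc].sum()                  # contour attracted to low values
    edges = np.diff(np.vstack([pts, pts[:1]]), axis=0)
    length = np.linalg.norm(edges, axis=1).sum()  # regularization term
    return data_fit + lam * length

img = np.random.default_rng(11).random((64, 64))
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
circle = np.stack([32 + 10 * np.sin(t), 32 + 10 * np.cos(t)], axis=1)
print(contour_energy(img, circle))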
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Retina enhanced bag of words descriptors for video classification.\n \n \n \n \n\n\n \n Strat, S. T.; Benoit, A.; and Lambert, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1307-1311, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"RetinaPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952461,\n  author = {S. T. Strat and A. Benoit and P. Lambert},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Retina enhanced bag of words descriptors for video classification},\n  year = {2014},\n  pages = {1307-1311},\n  abstract = {This paper addresses the task of detecting diverse semantic concepts in videos. Within this context, the Bag Of Visual Words (BoW) model, inherited from sampled video keyframes analysis, is among the most popular methods. However, in the case of image sequences, this model faces new difficulties such as the added motion information, the extra computational cost and the increased variability of content and concepts to handle. Considering this spatio-temporal context, we propose to extend the BoW model by introducing video preprocessing strategies with the help of a retina model, before extracting BoW descriptors. This preprocessing increases the robustness of local features to disturbances such as noise and lighting variations. Additionally, the retina model is used to detect potentially salient areas and to construct spatio-temporal descriptors. We experiment with three state of the art local features, SIFT, SURF and FREAK, and we evaluate our results on the TRECVid 2012 Semantic Indexing (SIN) challenge.},\n  keywords = {image classification;image motion analysis;image sequences;video signal processing;retina enhanced bag of words descriptors;bag of visual words model;diverse semantic concept detection;video keyframes analysis;image sequences;motion information;computational cost;spatiotemporal context;video classification;video preprocessing strategies;BoW descriptors;local features;noise variations;lighting variations;SIFT;SURF;FREAK;TRECVid 2012 semantic indexing;SIN challenge;Retina;Feature extraction;Semantics;Visualization;Histograms;Streaming media;Computational modeling;Video;classification;retina;saliency;Bag of Words},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925953.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the task of detecting diverse semantic concepts in videos. Within this context, the Bag Of Visual Words (BoW) model, inherited from sampled video keyframes analysis, is among the most popular methods. However, in the case of image sequences, this model faces new difficulties such as the added motion information, the extra computational cost and the increased variability of content and concepts to handle. Considering this spatio-temporal context, we propose to extend the BoW model by introducing video preprocessing strategies with the help of a retina model, before extracting BoW descriptors. This preprocessing increases the robustness of local features to disturbances such as noise and lighting variations. Additionally, the retina model is used to detect potentially salient areas and to construct spatio-temporal descriptors. We experiment with three state-of-the-art local features, SIFT, SURF and FREAK, and we evaluate our results on the TRECVid 2012 Semantic Indexing (SIN) challenge.\n
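The bag-of-words encoding step itself is standard and sketched below: hard assignment of each local descriptor to a codebook word, followed by a normalized histogram. The retina preprocessing and the specific SIFT/SURF/FREAK descriptors are outside this sketch, and the codebook here is a random stand-in for a learned one.

# Generic BoW encoding of one video's local descriptors.
import numpy as np

rng = np.random.default_rng(2)
codebook = rng.standard_normal((256, 128))       # K visual words (random stand-in)
descriptors = rng.standard_normal((1000, 128))   # local features of one video

# squared distances via the expansion ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
d2 = (descriptors ** 2).sum(1)[:, None] + (codebook ** 2).sum(1)[None, :] \
     - 2.0 * descriptors @ codebook.T
words = d2.argmin(axis=1)                        # hard assignment per descriptor
hist = np.bincount(words, minlength=len(codebook)).astype(float)
hist /= hist.sum()                               # normalized BoW representation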
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rotation-invariant object detection using Complex Matched Filters and second order vector fields.\n \n \n \n \n\n\n \n Pudzs, M.; and Greitans, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1312-1316, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Rotation-invariantPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952462,\n  author = {M. Pudzs and M. Greitans},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Rotation-invariant object detection using Complex Matched Filters and second order vector fields},\n  year = {2014},\n  pages = {1312-1316},\n  abstract = {In this paper we introduce two concepts: second order vector fields that describe line-like objects in images and rotation-invariant Complex Matched Filter kernels that can be used to detect object with almost any complexity. We present the theoretical grounds for kernel derivation, object matching using sets of subresponses, object's rotation angle and active area determination. The work of the proposed algorithms is demonstrated on images of an occluded and rotated object.},\n  keywords = {matched filters;object detection;rotation-invariant object detection;complex matched filters;second order vector fields;line-like objects;rotation-invariant complex matched filter;kernel derivation;object matching;object rotation angle;Kernel;Vectors;Object detection;Convolution;Shape;Matched filters;Estimation;image processing;matched filters;vector fields;angular invariance;object detection},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925147.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we introduce two concepts: second order vector fields that describe line-like objects in images, and rotation-invariant Complex Matched Filter kernels that can be used to detect objects of almost any complexity. We present the theoretical grounds for kernel derivation, object matching using sets of subresponses, and determination of the object's rotation angle and active area. The operation of the proposed algorithms is demonstrated on images of an occluded and rotated object.\n
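For contrast with the rotation-invariant kernels, here is plain matched filtering via FFT-based correlation, which only detects the template at its trained orientation; the image and template are toy assumptions.

# Ordinary (orientation-fixed) matched filtering via FFT convolution.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(3)
img = rng.standard_normal((128, 128))
tmpl = np.ones((9, 9)); tmpl[3:6, :] = 5.0            # toy line-like template
img[50:59, 70:79] += tmpl                             # plant the object

tz = tmpl - tmpl.mean()                               # zero-mean kernel
resp = fftconvolve(img, tz[::-1, ::-1], mode="same")  # correlation via convolution
print(np.unravel_index(resp.argmax(), resp.shape))    # peak near the centre (54, 74)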
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Human action recognition in stereoscopic videos based on bag of features and disparity pyramids.\n \n \n \n \n\n\n \n Iosifidis, A.; Tefas, A.; Nikolaidis, N.; and Pitas, I.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1317-1321, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"HumanPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952463,\n  author = {A. Iosifidis and A. Tefas and N. Nikolaidis and I. Pitas},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Human action recognition in stereoscopic videos based on bag of features and disparity pyramids},\n  year = {2014},\n  pages = {1317-1321},\n  abstract = {In this paper, we propose a method for human action recognition in unconstrained environments based on stereoscopic videos. We describe a video representation scheme that exploits the enriched visual and disparity information that is available for such data. Each stereoscopic video is represented by multiple vectors, evaluated on video locations corresponding to different disparity zones. By using these vectors, multiple action descriptions can be determined that either correspond to specific disparity zones, or combine information appearing in different disparity zones in the classification phase. Experimental results denote that the proposed approach enhances action classification performance, when compared to the standard approach, and achieves state-of-the-art performance on the Hollywood 3D database designed for the recognition of complex actions in unconstrained environments.},\n  keywords = {image classification;image motion analysis;image representation;stereo image processing;video signal processing;human action recognition;stereoscopic videos;bag of features;disparity pyramids;unconstrained environments;video representation scheme;disparity information;visual information;multiple vectors;video locations;disparity zones;multiple action descriptions;classification phase;action classification performance enhancement;Hollywood 3D database;Videos;Stereo image processing;Databases;Three-dimensional displays;Cameras;Vectors;Computer vision;Human Action Recognition;Stereoscopic Videos;Disparity Pyramids;Bag of Features},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917287.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a method for human action recognition in unconstrained environments based on stereoscopic videos. We describe a video representation scheme that exploits the enriched visual and disparity information that is available for such data. Each stereoscopic video is represented by multiple vectors, evaluated on video locations corresponding to different disparity zones. By using these vectors, multiple action descriptions can be determined that either correspond to specific disparity zones or combine information appearing in different disparity zones in the classification phase. Experimental results indicate that the proposed approach enhances action classification performance, when compared to the standard approach, and achieves state-of-the-art performance on the Hollywood 3D database designed for the recognition of complex actions in unconstrained environments.\n
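The zone-wise pooling idea can be sketched directly: local features are histogrammed separately per disparity zone and the zone histograms concatenated. Codeword indices, disparities, and zone boundaries below are assumed toy data.

# Per-disparity-zone BoW pooling and concatenation.
import numpy as np

rng = np.random.default_rng(12)
words = rng.integers(0, 64, size=500)         # codeword index of each local feature
disparity = rng.uniform(0.0, 1.0, size=500)   # disparity at each feature location
zones = np.digitize(disparity, [0.33, 0.66])  # three disparity zones

desc = np.concatenate([
    np.bincount(words[zones == z], minlength=64) for z in range(3)
]).astype(float)
desc /= desc.sum()                            # concatenated per-zone representation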
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Object tracking extensions for accurate recovery of rainfall maps using microwave sensor network.\n \n \n \n \n\n\n \n Liberman, Y.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1322-1326, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ObjectPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952464,\n  author = {Y. Liberman},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Object tracking extensions for accurate recovery of rainfall maps using microwave sensor network},\n  year = {2014},\n  pages = {1322-1326},\n  abstract = {Recently, diverse methods have been proposed for faithful reconstruction of instantaneous rainfall maps by using received signal level (RSL) measurements from commercial microwave network (CMN), especially in dense networks. The main lacking of these methods is that the temporal properties of the rain field had not been considered, hence their accuracy might be limited. This paper presents a novel method for accurate spatio-temporal reconstruction of rainfall maps, derived from CMN, by using an extension to object tracking algorithms. An efficient coherency algorithm is used, which relates between sequential instantaneous rainfall maps. Then by using Kalman filter, the observed rain maps are predicted and corrected. When comparing the estimates to actual rain measurements, the performance improvement of the rainfall mapping is manifested, even when dealing with a rather sparse network, and low temporal resolution of the measurements. The method proposed here is not restricted to the application of accurate rainfall mapping.},\n  keywords = {atmospheric techniques;geophysical signal processing;Kalman filters;microwave detectors;object tracking;rain;signal reconstruction;Kalman filter;low temporal resolution;sparse network;rain measurements;sequential instantaneous rainfall maps;coherency algorithm;object tracking algorithms;spatio-temporal reconstruction;rain field temporal property;CMN;RSL;received signal level measurements;instantaneous rainfall map reconstruction;rainfall map recovery;object tracking extensions;Rain;Radar;Kalman filters;Object tracking;Microwave measurement;Mathematical model;Microwave theory and techniques;Microwave Network;Object Tracking;Estimation;Reconstruction;Rainfall Mapping},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924231.pdf},\n}\n\n
\n
\n\n\n
\n Recently, diverse methods have been proposed for faithful reconstruction of instantaneous rainfall maps using received signal level (RSL) measurements from a commercial microwave network (CMN), especially in dense networks. The main shortcoming of these methods is that the temporal properties of the rain field are not considered; hence, their accuracy can be limited. This paper presents a novel method for accurate spatio-temporal reconstruction of rainfall maps, derived from CMN, by using an extension of object tracking algorithms. An efficient coherency algorithm is used, which relates sequential instantaneous rainfall maps. A Kalman filter is then used to predict and correct the observed rain maps. When comparing the estimates to actual rain measurements, the performance improvement of the rainfall mapping is evident, even when dealing with a rather sparse network and low temporal resolution of the measurements. The method proposed here is not restricted to the application of accurate rainfall mapping.\n
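The predict/correct recursion at the heart of the temporal smoothing is the standard Kalman filter, sketched below on a flattened toy rain map; the motion and noise models are placeholders, whereas the paper derives them from its object-tracking extension.

# Kalman filter skeleton on a flattened rain-map state.
import numpy as np

n = 16                                  # pixels in the (toy) rain map
F = np.eye(n)                           # assumed motion/advection model
H = np.eye(n)                           # state -> RSL-derived map
Q = 0.1 * np.eye(n)                     # process noise covariance
R = 0.5 * np.eye(n)                     # measurement noise covariance

x, P = np.zeros(n), np.eye(n)
for z in np.random.default_rng(4).standard_normal((20, n)):  # observed maps
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # correct
    P = (np.eye(n) - K @ H) @ P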
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A reversible jump MCMC algorithm for Particle Size inversion in Multiangle Dynamic Light Scattering.\n \n \n \n \n\n\n \n Boualem, A.; Jabloun, M.; Ravier, P.; Naiim, M.; and Jalocha, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1327-1331, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952465,\n  author = {A. Boualem and M. Jabloun and P. Ravier and M. Naiim and A. Jalocha},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A reversible jump MCMC algorithm for Particle Size inversion in Multiangle Dynamic Light Scattering},\n  year = {2014},\n  pages = {1327-1331},\n  abstract = {The inverse problem of estimating the Particle Size Distribution (PSD) from Multiangle Dynamic Light Scattering measurements (MDLS) is considered using a Bayesian inference approach. We propose to model the multimodal PSD as a normal mixture with an unknown number of components (modes or peaks). In order to achieve the estimation of these variable dimension parameters, a Bayesian inference approach is used and solved by the Reversible Jump Markov ChainMonte Carlo sampler (RJMCMC). The efficiency and robustness of the method proposed are demonstrated using simulated and experimental data. Estimated PSDs are close to the original distributions for synthetic data. Moreover an improvement of the resolution is noticed compared to the Clementi method [1].},\n  keywords = {Bayes methods;inverse problems;light scattering;Markov processes;Monte Carlo methods;particle size;reversible jump MCMC algorithm;particle size inversion;inverse problem;particle size distribution;PSD;multiangle dynamic light scattering measurements;MDLS;Bayesian inference;normal mixture;reversible jump Markov chain Monte Carlo sampler;RJMCMC;Bayes methods;Light scattering;Estimation;Noise;Monte Carlo methods;Standards;Particle Size Distribution;Multiangle Dynamic Light Scattering;Inverse Problem;Bayesian Inference;MCMC;Reversible Jump},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926795.pdf},\n}\n\n
\n
\n\n\n
\n The inverse problem of estimating the Particle Size Distribution (PSD) from Multiangle Dynamic Light Scattering (MDLS) measurements is considered using a Bayesian inference approach. We propose to model the multimodal PSD as a normal mixture with an unknown number of components (modes or peaks). In order to estimate these variable-dimension parameters, a Bayesian inference approach is used and solved by the Reversible Jump Markov Chain Monte Carlo sampler (RJMCMC). The efficiency and robustness of the proposed method are demonstrated using simulated and experimental data. Estimated PSDs are close to the original distributions for synthetic data. Moreover, an improvement in resolution is observed compared to the Clementi method [1].\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Majorize-Minimize adapted metropolis-hastings algorithm. Application to multichannel image recovery.\n \n \n \n \n\n\n \n Marnissi, Y.; Benazza-Benyahia, A.; Chouzenoux, E.; and Pesquet, J. -.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1332-1336, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Majorize-MinimizePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952466,\n  author = {Y. Marnissi and A. Benazza-Benyahia and E. Chouzenoux and J. -. Pesquet},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Majorize-Minimize adapted metropolis-hastings algorithm. Application to multichannel image recovery},\n  year = {2014},\n  pages = {1332-1336},\n  abstract = {One challenging task in MCMC methods is the choice of the proposal density. It should ideally provide an accurate approximation of the target density with a low computational cost. In this paper, we are interested in Langevin diffusion where the proposal accounts for a directional component. We propose a novel method for tuning the related drift term. This term is preconditioned by an adaptive matrix based on a Majorize-Minimize strategy. This new procedure is shown to exhibit a good performance in a multispectral image restoration example.},\n  keywords = {image restoration;Markov processes;matrix algebra;Monte Carlo methods;majorize-minimize adapted metropolis-hastings algorithm;multichannel image recovery;MCMC method;proposal density;target density;computational cost;Langevin diffusion;directional component;adaptive matrix;multispectral image restoration example;Markov chain Monte Carlo approach;Markov processes;Vectors;Signal to noise ratio;Proposals;Image restoration;Covariance matrices;Monte Carlo methods;MCMC methods;Langevin diffusion;Majorize-Minimize;MMSE;multichannel image restoration},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925569.pdf},\n}\n\n
\n
\n\n\n
\n One challenging task in MCMC methods is the choice of the proposal density. It should ideally provide an accurate approximation of the target density at a low computational cost. In this paper, we are interested in Langevin diffusion, where the proposal accounts for a directional component. We propose a novel method for tuning the related drift term. This term is preconditioned by an adaptive matrix based on a Majorize-Minimize strategy. The new procedure is shown to exhibit good performance in a multispectral image restoration example.\n
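A minimal preconditioned Langevin (MALA) step is sketched below, with a fixed diagonal matrix standing in for the Majorize-Minimize-adapted preconditioner of the paper; the target density is a toy Gaussian.

# Preconditioned MALA: drift along the preconditioned gradient, then a
# Metropolis-Hastings accept/reject with the asymmetric proposal densities.
import numpy as np

rng = np.random.default_rng(5)

def log_target(x):                 # toy anisotropic Gaussian target
    return -0.5 * (x[0] ** 2 / 9.0 + x[1] ** 2)

def grad_log_target(x):
    return -np.array([x[0] / 9.0, x[1]])

Ainv = np.diag([9.0, 1.0])         # stand-in preconditioner (MM-tuned in the paper)
eps = 0.5
x = np.zeros(2)
for _ in range(1000):
    mu = x + 0.5 * eps * Ainv @ grad_log_target(x)
    prop = mu + np.sqrt(eps) * np.linalg.cholesky(Ainv) @ rng.standard_normal(2)
    mu_rev = prop + 0.5 * eps * Ainv @ grad_log_target(prop)
    Apinv = np.linalg.inv(eps * Ainv)
    lq_fwd = -0.5 * (prop - mu) @ Apinv @ (prop - mu)      # log q(prop | x)
    lq_rev = -0.5 * (x - mu_rev) @ Apinv @ (x - mu_rev)    # log q(x | prop)
    if np.log(rng.uniform()) < log_target(prop) - log_target(x) + lq_rev - lq_fwd:
        x = prop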
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rank-based multiple change-point detection in multivariate time series.\n \n \n \n \n\n\n \n Harlé, F.; Chatelain, F.; Gouy-Pailler, C.; and Achard, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1337-1341, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Rank-basedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952467,\n  author = {F. Harlé and F. Chatelain and C. Gouy-Pailler and S. Achard},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Rank-based multiple change-point detection in multivariate time series},\n  year = {2014},\n  pages = {1337-1341},\n  abstract = {In this paper, we propose a Bayesian approach for multivariate time series segmentation. A robust non-parametric test, based on rank statistics, is derived in a Bayesian framework to yield robust distribution-independent segmentations of piecewise constant multivariate time series for which mutual dependencies are unknown. By modelling rank-test p-values, a pseudo-likelihood is proposed to favour change-points detection for significant p-values. A vague prior is chosen for dependency structure between time series, and a MCMC method is applied to the resulting posterior distribution. The Gibbs sampling strategy makes the method computationally efficient. The algorithm is illustrated on simulated and real signals in two practical settings. It is demonstrated that change-points are robustly detected and localized, through implicit dependency structure learning or explicit structural prior introduction.},\n  keywords = {Bayes methods;signal detection;signal sampling;time series;rank-based multiple change-point detection;Bayesian approach;robust distribution-independent segmentation;piecewise constant multivariate time series segmentation;rank-test p-values;pseudo-likelihood;MCMC method;posterior distribution;Gibbs sampling strategy;implicit dependency structure learning;explicit structural prior introduction;Abstracts;Monitoring;Robustness;Joints;Rank statistics;joint segmentation;dependency structure learning;Bayesian inference;MCMC methods;Gibbs sampling},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923491.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a Bayesian approach for multivariate time series segmentation. A robust non-parametric test, based on rank statistics, is derived in a Bayesian framework to yield robust distribution-independent segmentations of piecewise constant multivariate time series for which mutual dependencies are unknown. By modelling rank-test p-values, a pseudo-likelihood is proposed to favour change-point detection for significant p-values. A vague prior is chosen for the dependency structure between time series, and an MCMC method is applied to the resulting posterior distribution. The Gibbs sampling strategy makes the method computationally efficient. The algorithm is illustrated on simulated and real signals in two practical settings. It is demonstrated that change-points are robustly detected and localized, through implicit dependency structure learning or explicit structural prior introduction.\n
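The rank-test building block can be illustrated with a Wilcoxon rank-sum test at a candidate change-point; turning such p-values into a joint pseudo-likelihood across channels is the paper's contribution and is not shown here.

# Distribution-free two-sample rank test at a candidate change-point t.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.5, 1, 100)])

t = 100                                # candidate change-point
stat, p = ranksums(x[:t], x[t:])       # rank-sum statistic and p-value
print(p)                               # tiny p-value -> likely change at t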
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Group-sparse adaptive variational Bayes estimation.\n \n \n \n \n\n\n \n Themelis, K. E.; Rontogiannis, A. A.; and Koutroumbas, K. D.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1342-1346, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Group-sparsePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952468,\n  author = {K. E. Themelis and A. A. Rontogiannis and K. D. Koutroumbas},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Group-sparse adaptive variational Bayes estimation},\n  year = {2014},\n  pages = {1342-1346},\n  abstract = {This paper presents a new variational Bayes algorithm for the adaptive estimation of signals possessing group structured sparsity. The proposed algorithm can be considered as an extension of a recently proposed variational Bayes framework of adaptive algorithms that utilize heavy tailed priors (such as the Student-t distribution) to impose sparsity. Variational inference is efficiently implemented via appropriate time recursive equations for all model parameters. Experimental results are provided that demonstrate the improved estimation performance of the proposed adaptive group sparse variational Bayes method, when compared to state-of-the-art sparse adaptive algorithms.},\n  keywords = {adaptive estimation;Bayes methods;compressed sensing;recursive estimation;variational techniques;adaptive group sparse variational Bayes method;time recursive equations;variational inference;heavy tailed priors;adaptive algorithms;group structured sparsity;adaptive estimation;variational Bayes algorithm;Abstracts;Bismuth;Manganese;Sparse matrices;Optimization;Mobile communication;variational Bayes;structured sparsity;adaptive estimation;group sparse Bayesian learning},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925049.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a new variational Bayes algorithm for the adaptive estimation of signals possessing group-structured sparsity. The proposed algorithm can be considered as an extension of a recently proposed variational Bayes framework of adaptive algorithms that utilize heavy-tailed priors (such as the Student-t distribution) to impose sparsity. Variational inference is efficiently implemented via appropriate time recursive equations for all model parameters. Experimental results are provided that demonstrate the improved estimation performance of the proposed adaptive group-sparse variational Bayes method, when compared to state-of-the-art sparse adaptive algorithms.\n
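Although the paper imposes group sparsity through hierarchical priors and variational updates, the effect it seeks resembles the deterministic group soft-threshold below, shown purely to fix ideas; the grouping and threshold are assumptions.

# Group soft-threshold: shrink each group's l2-norm by t, zeroing weak groups.
import numpy as np

def group_soft_threshold(x, groups, t):
    out = np.zeros_like(x)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > t:
            out[g] = (1 - t / nrm) * x[g]
    return out

x = np.array([0.1, -0.2, 3.0, 2.0, 0.05, 0.0])
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
print(group_soft_threshold(x, groups, 0.5))   # only the strong group survives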
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian optimal compressed sensing without priors: Parametric sure approximate message passing.\n \n \n \n \n\n\n \n Guo, C.; and Davies, M. E.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1347-1351, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952469,\n  author = {C. Guo and M. E. Davies},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian optimal compressed sensing without priors: Parametric sure approximate message passing},\n  year = {2014},\n  pages = {1347-1351},\n  abstract = {It has been shown that the Bayesian optimal approximate message passing (AMP) technique achieves the minimum mean-squared error (MMSE) optimal compressed sensing (CS) recovery. However, the prerequisite of the signal prior makes it often impractical. To address this dilemma, we propose the parametric SURE-AMP algorithm. The key feature is it uses the Stein's unbiased risk estimate (SURE) based parametric family of MMSE estimator for the CS denoising. Given that the optimization of the estimator and the calculation of its mean squared error purely depend on the noisy data, there is no need of the signal prior. The weighted sum of piecewise kernel functions is used to form the parametric estimator. Numerical experiments on both Bernoulli-Gaussian and k-dense signal justify our proposal.},\n  keywords = {Bayes methods;compressed sensing;least mean squares methods;message passing;parameter estimation;signal denoising;Bayesian optimal compressed sensing;parametric SURE approximate message passing;Bayesian optimal approximate message passing technique;minimum mean-squared error optimal compressed sensing recovery;MMSE-CS;parametric SURE-AMP algorithm;Stein unbiased risk estimate;CS denoising;MMSE estimator;noisy data;piecewise kernel functions;parametric estimator;k-dense signal;Bernoulli-Gaussian signal;signal prior;estimator optimization;Noise measurement;Noise reduction;Fasteners;Noise;Bayes methods;Optimization;Educational institutions;Compressed sensing;approximate message passing;SURE estimator;denoising},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924565.pdf},\n}\n\n
\n
\n\n\n
\n It has been shown that the Bayesian optimal approximate message passing (AMP) technique achieves the minimum mean-squared error (MMSE) optimal compressed sensing (CS) recovery. However, the prerequisite of a signal prior often makes it impractical. To address this dilemma, we propose the parametric SURE-AMP algorithm. Its key feature is the use of a Stein's unbiased risk estimate (SURE)-based parametric family of MMSE estimators for CS denoising. Since the optimization of the estimator and the calculation of its mean squared error depend purely on the noisy data, no signal prior is needed. The weighted sum of piecewise kernel functions is used to form the parametric estimator. Numerical experiments on both Bernoulli-Gaussian and k-dense signals justify our proposal.\n
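The surrounding AMP recursion is standard and sketched below with a plain soft-threshold denoiser; the paper's contribution is replacing that denoiser with a SURE-tuned parametric MMSE family, which is not reproduced here. Problem sizes and the threshold rule are assumptions.

# Soft-threshold AMP: denoise the pseudo-data, then update the residual with
# the Onsager correction term.
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 400, 200, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

x, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = 2.0 * np.linalg.norm(z) / np.sqrt(m)        # simple threshold rule
    pseudo = x + A.T @ z
    x = np.sign(pseudo) * np.maximum(np.abs(pseudo) - tau, 0.0)
    z = y - A @ x + (np.count_nonzero(x) / m) * z     # Onsager correction

print(np.linalg.norm(x - x0) / np.linalg.norm(x0))    # small relative error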
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Room reflections assisted spatial sound field reproduction.\n \n \n \n\n\n \n Samarasinghe, P. N.; Abhayapala, T. D.; and Poletti, M. A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1352-1356, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952470,\n  author = {P. N. Samarasinghe and T. D. Abhayapala and M. A. Poletti},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Room reflections assisted spatial sound field reproduction},\n  year = {2014},\n  pages = {1352-1356},\n  abstract = {With recent advances in surround sound technology, an increased interest is shown in the problem of virtual sound reproduction. However, the performance of existing surround sound systems are degraded by factors like room reverberation and listener movements. In this paper, we develop a novel approach to spatial sound reproduction in reverberant environments, where room reverberation is constructively incorporated with the direct source signals to recreate a virtual reality. We also show that the array of monopole loudspeakers required for reproduction can be clustered together in a small spatial region away from the listening area, which in turn enables the array's practical implementation via a single loudspeaker unit with multiple drivers.},\n  keywords = {loudspeakers;reverberation;room reflections;spatial sound field reproduction;virtual sound reproduction;room reverberation;listener movements;to spatial sound reproduction;virtual reality;direct source signals;monopole loudspeakers;Loudspeakers;Arrays;Receivers;Reverberation;Transfer functions;Accuracy;Robustness},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n With recent advances in surround sound technology, an increased interest is shown in the problem of virtual sound reproduction. However, the performance of existing surround sound systems is degraded by factors such as room reverberation and listener movements. In this paper, we develop a novel approach to spatial sound reproduction in reverberant environments, where room reverberation is constructively incorporated with the direct source signals to recreate a virtual reality. We also show that the array of monopole loudspeakers required for reproduction can be clustered together in a small spatial region away from the listening area, which in turn enables the array's practical implementation via a single loudspeaker unit with multiple drivers.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Nonlinear distortion reduction for electrodynamic loudspeaker using nonlinear filtering.\n \n \n \n \n\n\n \n Iwai, K.; and Kajikawa, Y.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1357-1361, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"NonlinearPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952471,\n  author = {K. Iwai and Y. Kajikawa},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Nonlinear distortion reduction for electrodynamic loudspeaker using nonlinear filtering},\n  year = {2014},\n  pages = {1357-1361},\n  abstract = {In this paper, we compare the efficiency of compensating nonlinear distortions in electrodynamic loudspeaker system using 2nd- and 3rd-order nonlinear IIR filters. These filters need nonlinear parameters of loudspeaker systems and we used estimated nonlinear parameters for evaluating the efficiency of compensating nonlinear distortions of these filters. Therefore, these evaluation results include the effect of the parameter estimation method. In this paper, we measure the nonlinear parameters using Klippel's measurement equipment and evaluate the compensation amount of both filters. Experimental results demonstrate that the 3rd-order nonlinear IIR filter can realize a reduction by 4dB more than the 2nd-order nonlinear IIR filter on nonlinear distortions at high frequencies.},\n  keywords = {IIR filters;loudspeakers;nonlinear distortion;nonlinear filters;parameter estimation;nonlinear distortion reduction;electrodynamic loudspeaker;nonlinear filtering;nonlinear distortion compensation efficiency;2nd-order nonlinear IIR filters;3rd-order nonlinear IIR filters;nonlinear parameters;loudspeaker systems;nonlinear parameter estimation;Klippel measurement equipment;Loudspeakers;Nonlinear distortion;Distortion measurement;Mirrors;Ear;Force;Equations;Loudspeaker system;Nonlinear distortion;Nonlinear IIR filter;Mirror filter},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569906919.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we compare the efficiency of compensating nonlinear distortions in an electrodynamic loudspeaker system using 2nd- and 3rd-order nonlinear IIR filters. These filters require the nonlinear parameters of the loudspeaker system, and we used estimated nonlinear parameters to evaluate the distortion-compensation efficiency of these filters. Therefore, the evaluation results include the effect of the parameter-estimation method. In this paper, we measure the nonlinear parameters using Klippel's measurement equipment and evaluate the compensation amount of both filters. Experimental results demonstrate that the 3rd-order nonlinear IIR filter reduces nonlinear distortions at high frequencies by 4 dB more than the 2nd-order nonlinear IIR filter.\n
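A compensation amount of the kind quoted (dB reduction of harmonic components) can be measured as below; the synthetic signal and its distortion levels are assumptions, not measurements from the paper.

# Measuring 2nd/3rd harmonic levels relative to the fundamental (in dBc).
import numpy as np

fs, f0, n = 48000, 500, 48000
t = np.arange(n) / fs
# synthetic distorted output: fundamental plus small 2nd and 3rd harmonics
y = (np.sin(2 * np.pi * f0 * t)
     + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.003 * np.sin(2 * np.pi * 3 * f0 * t))

spec = np.abs(np.fft.rfft(y * np.hanning(n)))
def level_db(f):
    return 20 * np.log10(spec[int(round(f * n / fs))])

print(level_db(2 * f0) - level_db(f0))   # ~ -40 dBc
print(level_db(3 * f0) - level_db(f0))   # ~ -50 dBc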
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Perceptually optimized room-in-room sound reproduction with spatially distributed loudspeakers.\n \n \n \n \n\n\n \n Grosse, J.; and van de Par , S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1362-1366, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"PerceptuallyPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952472,\n  author = {J. Grosse and S. {van de Par}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Perceptually optimized room-in-room sound reproduction with spatially distributed loudspeakers},\n  year = {2014},\n  pages = {1362-1366},\n  abstract = {In sound reproduction it is desired to reproduce a recording of an instrument made in a specific room (e.g. a church or concert hall) in a playback room such that the listener has a plausible and authentic impression of the instrument including the room acoustical properties of the recording room. For this purpose a new method is presented that separately optimizes the direct sound field and recreates a reverberant sound field in the playback room that matches that of the recording room. This approach optimizes monaural cues related to coloration and the interaural cross correlation (IACC), responsible for listener envelopment, in both rooms based on an artificial head placed at the listener's positions. The cues are adjusted using an auditorily motivated gammatone analysis-synthesis filterbank. A MUSHRA listening test revealed that the proposed method is able to recreate the perceived room acoustics of the recording room in an accurate way.},\n  keywords = {architectural acoustics;channel bank filters;loudspeakers;recording;reverberation chambers;sound reproduction;room-in-room sound reproduction;spatially distributed loudspeakers;plausible impression;authentic impression;room acoustical properties;reverberant sound field;playback room;recording room;monaural cues;interaural cross correlation;IACC;listener envelopment;artificial head;auditorily motivated gammatone analysis-synthesis filterbank;MUSHRA listening test;room acoustics;perceptual optimization;Loudspeakers;Optimized production technology;Rendering (computer graphics);Reverberation;Correlation;virtual acoustics;perceptual optimization;Room-in-Room reproduction},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925523.pdf},\n}\n\n
\n
\n\n\n
\n In sound reproduction it is desired to reproduce a recording of an instrument made in a specific room (e.g. a church or concert hall) in a playback room such that the listener has a plausible and authentic impression of the instrument, including the room acoustical properties of the recording room. For this purpose a new method is presented that separately optimizes the direct sound field and recreates a reverberant sound field in the playback room that matches that of the recording room. This approach optimizes monaural cues related to coloration and the interaural cross correlation (IACC), responsible for listener envelopment, in both rooms based on an artificial head placed at the listener's positions. The cues are adjusted using an auditorily motivated gammatone analysis-synthesis filterbank. A MUSHRA listening test revealed that the proposed method is able to recreate the perceived room acoustics of the recording room accurately.\n
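The IACC cue the method matches between rooms is the maximum of the normalized interaural cross-correlation over lags of about +/-1 ms, a convention assumed in this sketch.

# IACC from a pair of binaural signals.
import numpy as np
from scipy.signal import fftconvolve

def iacc(left, right, fs):
    """Max of the normalized interaural cross-correlation over +/- 1 ms lags."""
    lags = int(1e-3 * fs)
    full = fftconvolve(left, right[::-1], mode="full")   # cross-correlation
    mid = len(right) - 1                                 # zero-lag index
    seg = full[mid - lags: mid + lags + 1]
    return np.max(np.abs(seg)) / np.sqrt(np.sum(left**2) * np.sum(right**2))

fs = 48000
rng = np.random.default_rng(8)
noise = rng.standard_normal(fs)
print(iacc(noise, noise, fs))                    # fully coherent: IACC = 1.0
print(iacc(noise, rng.standard_normal(fs), fs))  # diffuse-like: IACC << 1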
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Least-mean-square weighted parallel IIR filters in active-noise-control headphones.\n \n \n \n \n\n\n \n Guldenschuh, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1367-1371, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Least-mean-squarePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952493,\n  author = {M. Guldenschuh},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Least-mean-square weighted parallel IIR filters in active-noise-control headphones},\n  year = {2014},\n  pages = {1367-1371},\n  abstract = {Adaptive filters in noise control applications have to approximate the primary path and compensate for the secondary-path. This work shows that the primary- and secondary-path variations of noise control headphones depend above all on the direction of incident noise and the tightness of the ear-cups. Both kind of variations are investigated by preliminary measurements, and it is further shown that the measured variations can be approximated with the linear combination of only a few prototype filters. Thus, a parallel adaptive linear combiner is suggested instead of the typical adaptive transversal-filter. Theoretical considerations and experimental results reveal that the parallel structure performs equally well, converges even faster, and requires fewer adaptation weights.},\n  keywords = {active noise control;adaptive filters;headphones;IIR filters;least mean squares methods;transversal filters;least-mean-square weighted parallel IIR filters;active-noise-control headphones;secondary-path variations;primary-path variations;incident noise;ear-cups;parallel adaptive linear combiner;adaptive transversal-filter;parallel structure;adaptation weights;Headphones;Noise;Finite impulse response filters;Principal component analysis;Vectors;Least squares approximations;Microphones;Adaptive linear combiner;adaptive filter;noise control headphones},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569909681.pdf},\n}\n\n
\n
\n\n\n
\n Adaptive filters in noise control applications have to approximate the primary path and compensate for the secondary path. This work shows that the primary- and secondary-path variations of noise control headphones depend above all on the direction of incident noise and the tightness of the ear-cups. Both kinds of variation are investigated through preliminary measurements, and it is further shown that the measured variations can be approximated by a linear combination of only a few prototype filters. Thus, a parallel adaptive linear combiner is suggested instead of the typical adaptive transversal filter. Theoretical considerations and experimental results reveal that the parallel structure performs equally well, converges even faster, and requires fewer adaptation weights.\n
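A minimal version of the suggested parallel adaptive linear combiner: a few fixed prototype filters run in parallel and only their mixing weights are adapted by LMS. The prototype filters and target below are random stand-ins, not measured headphone paths.

# LMS over the mixing weights of parallel fixed prototype filters.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(9)
x = rng.standard_normal(20000)                             # reference noise
protos = [rng.standard_normal(32) / 8 for _ in range(3)]   # prototype filters
true_w = np.array([0.7, -0.2, 0.5])
Y = np.stack([lfilter(h, [1.0], x) for h in protos])       # parallel outputs
d = true_w @ Y + 0.01 * rng.standard_normal(len(x))        # "primary path" target

w, mu = np.zeros(3), 0.05
for nsmp in range(len(x)):
    u = Y[:, nsmp]
    e = d[nsmp] - w @ u
    w += mu * e * u                    # LMS update on just 3 weights

print(w)                               # converges near true_w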
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A unified approach to numerical auditory scene synthesis using loudspeaker arrays.\n \n \n \n \n\n\n \n Atkins, J.; Nawfal, I.; and Giacobello, D.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1372-1376, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952494,\n  author = {J. Atkins and I. Nawfal and D. Giacobello},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A unified approach to numerical auditory scene synthesis using loudspeaker arrays},\n  year = {2014},\n  pages = {1372-1376},\n  abstract = {In this work we address the problem of simulating the spatial and timbral cues of a given sound event, or auditory scene, using an array of loudspeakers. We first define the problem with a general numerical framework that encompasses many known techniques from physical acoustics, crosstalk cancellation, and acoustic control. In contrast to many previous approaches, the system described in this work is inherently broadband as it jointly designs a set of spatio-temporal filters while allowing for constraints in other domains. With this framework we show similarities and differences between known techniques and suggest some new, unexplored methods. In particular, we focus on perceptually motivated choices for the cost function and regularization. These methods are then compared by implementing the systems on a linear array of loudspeakers and evaluating the timbral and spatial qualities of the system using objective metrics.},\n  keywords = {crosstalk;filters;loudspeakers;physical acoustics;crosstalk cancellation;acoustic control;spatio-temporal filters;timbral cues;spatial cues;loudspeaker arrays;numerical auditory scene synthesis;Loudspeakers;Acoustics;Discrete Fourier transforms;Optimization;Vectors;Frequency-domain analysis;spatial audio;crosstalk cancellation;binaural hearing;equivalent source method;mode-matching},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925705.pdf},\n}\n\n
\n
\n\n\n
\n In this work we address the problem of simulating the spatial and timbral cues of a given sound event, or auditory scene, using an array of loudspeakers. We first define the problem with a general numerical framework that encompasses many known techniques from physical acoustics, crosstalk cancellation, and acoustic control. In contrast to many previous approaches, the system described in this work is inherently broadband as it jointly designs a set of spatio-temporal filters while allowing for constraints in other domains. With this framework we show similarities and differences between known techniques and suggest some new, unexplored methods. In particular, we focus on perceptually motivated choices for the cost function and regularization. These methods are then compared by implementing the systems on a linear array of loudspeakers and evaluating the timbral and spatial qualities of the system using objective metrics.\n
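The regularized least-squares core that many of the unified techniques share can be sketched per frequency bin as w = (H^H H + lambda*I)^{-1} H^H d; the plant matrices below are random placeholders for measured loudspeaker-to-control-point responses.

# Per-bin Tikhonov-regularized loudspeaker filter design.
import numpy as np

rng = np.random.default_rng(10)
n_ctrl, n_src, n_bins = 16, 8, 129       # control points, loudspeakers, freq bins
lam = 1e-2
W = np.empty((n_bins, n_src), complex)
for k in range(n_bins):
    H = rng.standard_normal((n_ctrl, n_src)) + 1j * rng.standard_normal((n_ctrl, n_src))
    d = rng.standard_normal(n_ctrl) + 1j * rng.standard_normal(n_ctrl)  # desired field
    W[k] = np.linalg.solve(H.conj().T @ H + lam * np.eye(n_src), H.conj().T @ d)
# time-domain driving filters would follow by inverse FFT over the bin index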
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Anti-forensic resistant likelihood ratio computation: A case study using fingerprint biometrics.\n \n \n \n\n\n \n Poh, N.; Suki, N.; Iorliam, A.; and Ho, A. T.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1377-1381, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952495,\n  author = {N. Poh and N. Suki and A. Iorliam and A. T. Ho},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Anti-forensic resistant likelihood ratio computation: A case study using fingerprint biometrics},\n  year = {2014},\n  pages = {1377-1381},\n  abstract = {One of the major utilities of biometrics in the context of crime scene investigation is to identify people. However, in the most sophisticated cases, criminals may introduce the biometric samples of innocent individuals in order to evade their own identities as well as to incriminate the innocent individuals. To date, even a minute suspect of an anti-forensic threat can potentially jeopardize any forensic investigation to the point that a potentially vital piece of evidence suddenly becomes powerless in the court of law. In order to remedy this situation, we propose an anti-forensic resistant likelihood ratio computation that renders the strength of evidence to a level that is proportional to the trustworthiness of the trace, such that a highly credible evidence will bear its full strength of evidence whilst a highly suspicious trace can have its strength of evidence reduced to naught. Using simulation as well as a spoof fingerprint database, we show that the existing likelihood ratio computation is extremely vulnerable to an anti-forensic threat whereas our proposed computation is robust to it, thereby striking the balance between the utility and threat of a trace.},\n  keywords = {fingerprint identification;image forensics;strength of evidence;spoof fingerprint database;trustworthiness;anti-forensic threat;fingerprint biometrics;anti-forensic resistant likelihood ratio computation;Forensics;Biometrics (access control);Immune system;Databases;Materials;Biological system modeling;Training;Anti-forensic;likelihood ratios;tampered images;trustworthiness;strength of evidence},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n One of the major utilities of biometrics in the context of crime scene investigation is to identify people. However, in the most sophisticated cases, criminals may introduce the biometric samples of innocent individuals in order to conceal their own identities as well as to incriminate the innocent individuals. To date, even a minor suspicion of an anti-forensic threat can potentially jeopardize a forensic investigation to the point that a potentially vital piece of evidence suddenly becomes powerless in the court of law. In order to remedy this situation, we propose an anti-forensic resistant likelihood ratio computation that renders the strength of evidence at a level proportional to the trustworthiness of the trace, such that highly credible evidence bears its full strength whilst a highly suspicious trace can have its strength of evidence reduced to naught. Using simulation as well as a spoof fingerprint database, we show that the existing likelihood ratio computation is extremely vulnerable to an anti-forensic threat whereas our proposed computation is robust to it, thereby striking a balance between the utility and threat of a trace.\n
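One simple way to realize "strength of evidence proportional to trustworthiness" is to temper the log-likelihood-ratio by a trust score in [0, 1]; this is an assumed surrogate shown only to fix ideas, not the paper's construction.

# Trust-tempered log-likelihood-ratio (assumed illustrative scheme).
import numpy as np

def tempered_llr(llr, trust):
    return trust * llr        # trust=1: full strength; trust=0: LR=1 (neutral)

print(np.exp(tempered_llr(np.log(100.0), 1.0)))   # 100 -> full evidential value
print(np.exp(tempered_llr(np.log(100.0), 0.0)))   # 1.0 -> no evidential value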
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Biometric source weighting in multi-biometric fusion: Towards a generalized and robust solution.\n \n \n \n \n\n\n \n Damer, N.; Opel, A.; and Nouak, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1382-1386, Sep. 2014. \n \n\n\n\n
@InProceedings{6952496,\n  author = {N. Damer and A. Opel and A. Nouak},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Biometric source weighting in multi-biometric fusion: Towards a generalized and robust solution},\n  year = {2014},\n  pages = {1382-1386},\n  abstract = {This work presents a new weighting algorithm for biometric sources within a score-level multi-biometric system. Those weights are used in the effective and widely used weighted sum fusion rule to produce multi-biometric decisions. The presented solution is mainly based on the characteristic of the overlap region between the genuine and imposter scores distributions. It also integrates the performance of the biometric source represented by its equal error rate. This solution aims at avoiding the shortcomings of previously proposed solutions such as low generalization abilities and sensitiveness to outliers. The proposed solution is evaluated along with the state of the art and best practice techniques. The evaluation was performed on two databases, the Biometric Scores Set BSSR1 and the Extended Multi Modal Verification for Teleservices and Security applications database and a satisfying and stable performance was achieved.},\n  keywords = {biometrics (access control);database management systems;formal verification;security of data;set theory;biometric source weighting;multibiometric fusion;score-level multibiometric system;weighted sum fusion rule;imposter scores distributions;biometric scores set;BSSR1;extended multi modal verification;teleservices;security applications database;Databases;Face;Error analysis;Authentication;Training data;Speech;Robustness;Multi-biometric fusion;Biometric source weighting;Score-level fusion},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569927051.pdf},\n}\n\n
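To make the fusion rule concrete, a minimal weighted-sum score-level fusion sketch in Python/NumPy. The weights here come from a common inverse-EER heuristic; the paper's weights additionally exploit the genuine/impostor overlap region, which is not reproduced. The EER values and scores are made up.

import numpy as np

def eer_weights(eers):
    # Map per-source equal error rates to fusion weights: lower EER -> larger weight.
    inv = 1.0 / np.asarray(eers, dtype=float)
    return inv / inv.sum()

def fuse(scores, weights):
    # Weighted-sum score-level fusion: scores is (n_samples, n_sources).
    return np.asarray(scores) @ np.asarray(weights)

w = eer_weights([0.02, 0.08])            # e.g. face EER 2%, speech EER 8% (invented)
print(fuse([[0.9, 0.4], [0.2, 0.3]], w))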
Presentation attack detection algorithm for face and iris biometrics. Raghavendra, R.; and Busch, C. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1387-1391, Sep. 2014.
@InProceedings{6952497,\n  author = {R. Raghavendra and C. Busch},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Presentation attack detection algorithm for face and iris biometrics},\n  year = {2014},\n  pages = {1387-1391},\n  abstract = {Biometric systems are vulnerable to the diverse attacks that emerged as a challenge to assure the reliability in adopting these systems in real-life scenario. In this work, we propose a novel solution to detect a presentation attack based on exploring both statistical and Cepstral features. The proposed Presentation Attack Detection (PAD) algorithm will extract the statistical features that can capture the micro-texture variation using Binarized Statistical Image Features (BSIF) and Cepstral features that can reflect the micro changes in frequency using 2D Cepstrum analysis. We then fuse these features to form a single feature vector before making a decision on whether a capture attempt is a normal presentation or an artefact presentation using linear Support Vector Machine (SVM). Extensive experiments carried out on a publicly available face and iris spoof database show the efficacy of the proposed PAD algorithm with an Average Classification Error Rate (ACER) = 10.21% on face and ACER = 0% on the iris biometrics.},\n  keywords = {cepstral analysis;error statistics;face recognition;iris recognition;reliability;statistical analysis;support vector machines;presentation attack detection algorithm;face biometrics;iris biometrics;biometric systems;reliability;statistical feature;PAD algorithm;microtexture variation;binarized statistical image features;BSIF;cepstral features;2D cepstrum analysis;single feature vector;normal presentation;artefact presentation;linear support vector machine;SVM;face spoof database;iris spoof database;average classification error rate;ACER;Iris recognition;Face;Feature extraction;Databases;Cepstrum;Cameras;Biometrics;Spoof;Attack detection;Face;Iris},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924585.pdf},\n}\n\n
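A sketch of the fuse-then-classify pipeline, assuming some texture descriptor (e.g. a BSIF histogram) is supplied as a callable. The 2-D cepstrum below is the standard inverse-FFT-of-log-magnitude definition; keeping an 8x8 low-quefrency block is an arbitrary illustrative choice, and the commented training line uses hypothetical names.

import numpy as np
from sklearn.svm import LinearSVC

def cepstrum_2d(img):
    # 2-D cepstral features: inverse FFT of the log magnitude spectrum.
    spec = np.fft.fft2(img)
    ceps = np.fft.ifft2(np.log(np.abs(spec) + 1e-8)).real
    return ceps[:8, :8].ravel()          # keep a few low-quefrency coefficients

def pad_features(img, texture_descriptor):
    # Fuse a texture descriptor (e.g. a BSIF histogram) with cepstral features.
    return np.concatenate([texture_descriptor(img), cepstrum_2d(img)])

# Hypothetical usage: clf = LinearSVC().fit(
#     np.stack([pad_features(i, bsif) for i in train_imgs]), labels)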
On identification from periocular region utilizing SIFT and SURF. Karahan, Ş.; Karaöz, A.; Özdemir, Ö. F.; Gü, A. G.; and Uludag, U. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1392-1396, Sep. 2014.
@InProceedings{6952498,\n  author = {Ş. Karahan and A. Karaöz and Ö. F. Özdemir and A. G. Gü and U. Uludag},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On identification from periocular region utilizing SIFT and SURF},\n  year = {2014},\n  pages = {1392-1396},\n  abstract = {We concentrate on utilization of facial periocular region for biometric identification. Although this region has superior discriminative characteristics, as compared to mouth and nose, it has not been frequently used as an independent modality for personal identification. We employ a feature-based representation, where the associated periocular image is divided into left and right sides, and descriptor vectors are extracted from these using popular feature extraction algorithms SIFT, SURF, BRISK, ORB, and LBP. We also concatenate descriptor vectors. Utilizing FLANN and Brute Force matchers, we report recognition rates and ROC. For the periocular region image data, obtained from widely used FERET database consisting of 865 subjects, we obtain Rank-1 recognition rate of 96.8% for full frontal and different facial expressions in same session cases. We include a summary of existing methods, and show that the proposed method produces lower/comparable error rates with respect to the current state of the art.},\n  keywords = {biometrics (access control);face recognition;feature extraction;periocular region;SIFT;SURF;biometric identification;personal identification;feature based representation;associated periocular image;descriptor vectors;feature extraction;BRISK;ORB;LBP;Feature extraction;Vectors;Databases;Face;Detectors;Computer vision;Face recognition;Face;Periocular Region;SIFT;SURF;BRISK;ORB;LBP;FLANN;Brute-Force Matcher;FERET;Identification},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925709.pdf},\n}\n\n
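A keypoint-matching sketch with OpenCV. ORB stands in for SIFT/SURF, whose availability depends on the OpenCV build, and the brute-force Hamming matcher plays the role of the paper's Brute Force matcher.

import cv2

def match_periocular(img_left, img_right):
    # Detect and describe keypoints on the two periocular crops.
    orb = cv2.ORB_create(nfeatures=500)
    k1, d1 = orb.detectAndCompute(img_left, None)
    k2, d2 = orb.detectAndCompute(img_right, None)
    if d1 is None or d2 is None:
        return []
    # Brute-force matching with cross-check; lower distance = better match.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return sorted(bf.match(d1, d2), key=lambda m: m.distance)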
A multivariate Singular Spectrum Analysis approach to clinically-motivated movement biometrics. Lee, T. K. M.; Gan, S. S. W.; Lim, J. G.; and Sanei, S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1397-1401, Sep. 2014.
@InProceedings{6952499,\n  author = {T. K. M. Lee and S. S. W. Gan and J. G. Lim and S. Sanei},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A multivariate Singular Spectrum Analysis approach to clinically-motivated movement biometrics},\n  year = {2014},\n  pages = {1397-1401},\n  abstract = {Biometrics are quantities obtained from analyses of biological measurements. For human based biometrics, the two main types are clinical and authentication. This paper presents a brief comparison between the two, showing that on many occasions clinical biometrics can motivate for its use in authentication applications. Since several clinical biometrics deal with temporal data and also involve several dimensions of movement, we also present a new application of Singular Spectrum Analysis, in particular its multivariate version, to obtain significant frequency information across these dimensions. We use the most significant frequency component as a biometric to distinguish between various types of human movements. The signals were collected from triaxial accelerometers mounted in an object that is handled by a user. Although this biometric was obtained in a clinical setting, it shows promise for authentication.},\n  keywords = {biometrics (access control);medical signal processing;spectral analysis;multivariate singular spectrum analysis;clinically-motivated movement biometrics;authentication applications;frequency information;triaxial accelerometers;Biometrics (access control);Eigenvalues and eigenfunctions;Accelerometers;Authentication;Spectral analysis;Muscles;Time series analysis;Multivariate singular spectrum analysis;accelerometer;biometrics;instrumented objects;eigenvalues},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569918849.pdf},\n}\n\n
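A minimal single-channel SSA sketch in Python/NumPy, under the usual trajectory-matrix/SVD formulation. The multivariate version stacks per-channel trajectory matrices; here the "most significant frequency component" is read off the leading temporal singular vector, which is one common simplification.

import numpy as np

def ssa_dominant_frequency(x, L, fs):
    # Embed x into an L x K trajectory (Hankel) matrix and take the SVD.
    x = np.asarray(x, dtype=float)
    K = len(x) - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    v1 = Vt[0]                           # temporal evolution of the leading component
    spec = np.abs(np.fft.rfft(v1 - v1.mean()))
    return np.fft.rfftfreq(len(v1), 1.0 / fs)[spec.argmax()]

fs = 100.0
t = np.arange(0, 5, 1 / fs)
sig = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(len(t))
print(ssa_dominant_frequency(sig, L=50, fs=fs))   # approx. 3 Hz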
A state-space approach to modeling functional time series application to rail supervision. Samé, A.; and El-Assaad, H. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1402-1406, Sep. 2014.
@InProceedings{6952500,\n  author = {A. Samé and H. El-Assaad},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A state-space approach to modeling functional time series application to rail supervision},\n  year = {2014},\n  pages = {1402-1406},\n  abstract = {This article introduces a state-space model for the dynamic modeling of curve sequences within the framework of railway switches online monitoring. In this context, each curve has the peculiarity of being subject to multiple changes in regime. The proposed model consists of a specific latent variable regression model whose coefficients are supposed to evolve dynamically in the course of time. Its parameters are recursively estimated across a sequence of curves through an online Expectation-Maximization (EM) algorithm. The experimental study conducted on two real power consumption curve sequences from the French high speed network has shown encouraging results.},\n  keywords = {condition monitoring;expectation-maximisation algorithm;power consumption;railways;recursive estimation;regression analysis;state-space methods;switches;time series;state-space model;curve sequence dynamic modeling;functional time series modeling application;rail supervision;railway switches online monitoring;specific latent variable regression model;recursive estimation;online expectation maximization algorithm;online EM algorithm;power consumption curve sequence;French high speed network;Power demand;Logistics;Mathematical model;Vectors;Monitoring;Rail transportation;Time series analysis;Time series of functional data;state-space model;Kalman filtering;online Expectation-Maximization (EM) algorithm;condition monitoring},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569919069.pdf},\n}\n\n
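One way to read the model: regression coefficients that evolve as a random walk and are tracked recursively, as in the Kalman step sketched below. The paper's latent-variable regime structure and its online EM parameter updates are not reproduced; q and r are placeholder noise variances.

import numpy as np

def kalman_regression_step(beta, P, h, y, q=1e-4, r=1e-2):
    # One predict/update step for y_t = h_t . beta_t + v_t,
    # with beta_t = beta_{t-1} + w_t (random-walk coefficients).
    P = P + q * np.eye(len(beta))        # predict
    S = h @ P @ h + r                    # innovation variance (scalar observation)
    K = P @ h / S                        # Kalman gain
    beta = beta + K * (y - h @ beta)     # update coefficients
    P = P - np.outer(K, h) @ P
    return beta, P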
Non-Redundant Gradient Semantic Local Binary Patterns for pedestrian detection. Xu, J.; Jiang, N.; and Goto, S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1407-1411, Sep. 2014.
@InProceedings{6952501,\n  author = {J. Xu and N. Jiang and S. Goto},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Non-Redundant Gradient Semantic Local Binary Patterns for pedestrian detection},\n  year = {2014},\n  pages = {1407-1411},\n  abstract = {In this paper, a feature named Non-Redundant Gradient Semantic Local Binary Patterns (NRGSLBP) is proposed for pedestrian detection as a modified version of conventional Semantic Local Binary Patterns (SLBP). Calculations of this feature are carried out for both intensity and gradient magnitude image so that texture and gradient information are combined. Moreover, non-redundant patterns are adopted on SLBP for the first time, allowing better discrimination. Compared with SLBP, no additional cost of the feature dimensions NRGSLBP is necessary and the calculation complexity is considerably smaller than that of other features. Experimental results on several datasets show that the detection rate of our proposed feature outperforms those of other features such as Histogram of Orientated Gradient (HOG), Histogram of Templates (HOT), Bidirectional Local Template Patterns (BLTP), Gradient Local Binary Patterns (GLBP), SLBP and Covariance matrix (COV).},\n  keywords = {feature extraction;image texture;object detection;pedestrian detection;nonredundant gradient semantic local binary patterns;NRGSLBP;gradient magnitude image;gradient information;texture information;histogram-of-orientated gradient;HOG;histogram-of-templates;HOT;bidirectional local template patterns;BLTP;gradient local binary patterns;GLBP;covariance matrix;COV;feature extraction;Feature extraction;Histograms;Training;Support vector machines;Semantics;Kernel;Computer vision;Pedestrian detection;feature extraction;non-redundant gradient semantic local binary patterns},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923901.pdf},\n}\n\n
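A sketch of the two ingredients the feature combines, using basic LBP codes as a stand-in for Semantic LBP: codes are computed on both the intensity and the gradient-magnitude image, and each pattern is folded with its complement to obtain the non-redundant labels.

import numpy as np

def lbp_codes(img):
    # Basic 8-neighbour LBP codes for the interior pixels of a 2-D array.
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def non_redundant(code):
    # Fold each pattern with its complement, halving the number of labels.
    return np.minimum(code, 255 - code)

def nrgslbp_like(img):
    gy, gx = np.gradient(img.astype(float))
    grad_mag = np.hypot(gx, gy)
    h1 = np.bincount(non_redundant(lbp_codes(img)).ravel(), minlength=128)
    h2 = np.bincount(non_redundant(lbp_codes(grad_mag)).ravel(), minlength=128)
    return np.concatenate([h1, h2])      # texture + gradient histogram feature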
Modelling temporal variations by polynomial regression for classification of radar tracks. Jochumsen, L. W.; Østergaard, J.; Jensen, S. H.; and Pedersen, M. Ø. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1412-1416, Sep. 2014.
@InProceedings{6952502,\n  author = {L. W. Jochumsen and J. Østergaard and S. H. Jensen and M. Ø. Pedersen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Modelling temporal variations by polynomial regression for classification of radar tracks},\n  year = {2014},\n  pages = {1412-1416},\n  abstract = {The sampling rate of a radar is often too low to reliably capture the acceleration of moving targets such as birds. Moreover, the sampling rate depends upon the target's acceleration and heading and will therefore generally be time varying. When classifying radar tracks using temporal features, too low or highly varying sampling rates deteriorates the classifier's performance. In this work, we propose to model the temporal variations of the target's speed by low-order polynomial regression. Using the polynomial we obtain the conditional statistics of the targets speed at some future time given its speed at the current time. When used in a classifier based on Gaussian mixture models and with real radar data, it is shown that the inclusions of conditional statistics describing the targets temporal variations, leads to a substantial improvement in the overall classification performance.},\n  keywords = {Gaussian processes;mixture models;polynomials;radar tracking;regression analysis;signal classification;target tracking;Gaussian mixture model;conditional statistics;target heading;moving target acceleration;sampling rate;radar track classification;polynomial regression;temporal variation modelling;Radar tracking;Target tracking;Birds;Marine vehicles;Acceleration;Radar cross-sections;Automatic target classification;Machine learning;Radar;Surveillance},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924707.pdf},\n}\n\n
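The core idea reduces to fitting a low-order polynomial to unevenly sampled speed measurements and using it to predict the speed at a future time; a minimal sketch (degree and data invented):

import numpy as np

def conditional_speed_stats(t, v, t_future, degree=2):
    # Fit a low-order polynomial to irregularly sampled target speed and
    # return the predicted future speed plus the residual spread.
    coeffs = np.polyfit(t, v, degree)
    resid = v - np.polyval(coeffs, t)
    return np.polyval(coeffs, t_future), resid.std()

t = np.array([0.0, 0.7, 1.1, 2.4, 3.0])  # uneven radar revisit times
v = np.array([12.1, 12.9, 13.5, 15.2, 16.0])
print(conditional_speed_stats(t, v, t_future=4.0))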
Joint blind source separation of multidimensional components: Model and algorithm. Lahat, D.; and Jutten, C. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1417-1421, Sep. 2014.
@InProceedings{6952503,\n  author = {D. Lahat and C. Jutten},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Joint blind source separation of multidimensional components: Model and algorithm},\n  year = {2014},\n  pages = {1417-1421},\n  abstract = {This paper deals with joint blind source separation (JBSS) of multidimensional components. JBSS extends classical BSS to simultaneously resolve several BSS problems by assuming statistical dependence between latent sources across mixtures. JBSS offers some significant advantages over BSS, such as identifying more than one Gaussian white stationary source within a mixture. Multidimensional BSS extends classical BSS to deal with a more general and more flexible model within each mixture: the sources can be partitioned into groups exhibiting dependence within a given group but independence between two different groups. Motivated by various applications, we present a model that is inspired by both extensions. We derive an algorithm that achieves asymptotically the minimal mean square error (MMSE) in the estimation of Gaussian multidimensional components. We demonstrate the superior performance of this model over a two-step approach, in which JBSS, which ignores the multidimensional structure, is followed by a clustering step.},\n  keywords = {blind source separation;Gaussian processes;least mean squares methods;joint blind source separation;JBSS;statistical dependence;latent sources;minimal mean square error;MMSE;Gaussian multidimensional components;two-step approach;Vectors;Joints;Data models;Convergence;Clustering algorithms;Blind source separation;Joint BSS;independent vector analysis;multidimensional ICA;independent subspace analysis},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924803.pdf},\n}\n\n
Balance learning to rank in big data. Cao, G.; Ahmad, I.; Zhang, H.; Xie, W.; and Gabbouj, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1422-1426, Sep. 2014.
@InProceedings{6952504,\n  author = {G. Cao and I. Ahmad and H. Zhang and W. Xie and M. Gabbouj},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Balance learning to rank in big data},\n  year = {2014},\n  pages = {1422-1426},\n  abstract = {We propose a distributed learning to rank method, and demonstrate its effectiveness in web-scale image retrieval. With the increasing amount of data, it is not applicable to train a centralized ranking model for any large scale learning problems. In distributed learning, the discrepancy between the training subsets and the whole when building the models are non-trivial but overlooked in the previous work. In this paper, we firstly include a cost factor to boosting algorithms to balance the individual models toward the whole data. Then, we propose to decompose the original algorithm to multiple layers, and their aggregation forms a superior ranker which can be easily scaled up to billions of images. The extensive experiments show the proposed method outperforms the straightforward aggregation of boosting algorithms.},\n  keywords = {Big Data;image retrieval;Internet;learning (artificial intelligence);balance learning;big data;distributed learning;Web-scale image retrieval;centralized ranking model;large scale learning problem;boosting algorithm;Boosting;Training;Data models;Big data;Training data;Distributed databases;Bagging;distributed learning;learning to rank;Big Data},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924959.pdf},\n}\n\n
On the need for metrics in dictionary learning assessment. Chevallier, S.; Barthélemy, Q.; and Atif, J. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1427-1431, Sep. 2014.
@InProceedings{6952505,\n  author = {S. Chevallier and Q. Barthélemy and J. Atif},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On the need for metrics in dictionary learning assessment},\n  year = {2014},\n  pages = {1427-1431},\n  abstract = {Dictionary-based approaches are the focus of a growing attention in the signal processing community, often achieving state of the art results in several application fields. Albeit their success, the criteria introduced so far for the assessment of their performances suffer from several shortcomings. The scope of this paper is to conduct a thorough analysis of these criteria and to highlight the need for principled criteria, enjoying the properties of metrics. Henceforth we introduce new criteria based on transportation like metrics and discuss their behaviors w.r.t the literature.},\n  keywords = {learning (artificial intelligence);signal processing;dictionary learning assessment;signal processing community;Dictionaries;Training;Atomic measurements;Signal to noise ratio;Transportation;Convergence;Dictionary learning;dictionary recovering;metric;transportation distance;detection rate},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925109.pdf},\n}\n\n
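As one concrete example of a transportation-like criterion between a learned and a reference dictionary: optimally pair atoms with the Hungarian algorithm under a sign-invariant cosine cost, then sum the paired costs. This is an illustrative metric in the spirit of the paper, not its exact proposal.

import numpy as np
from scipy.optimize import linear_sum_assignment

def dictionary_distance(D1, D2):
    # Atoms are unit-norm columns; cost is 1 - |cosine similarity| (sign-invariant).
    C = 1.0 - np.abs(D1.T @ D2)
    rows, cols = linear_sum_assignment(C)   # optimal one-to-one atom pairing
    return C[rows, cols].sum()

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8)); D /= np.linalg.norm(D, axis=0)
print(dictionary_distance(D, D[:, ::-1]))   # permutation-invariant: ~0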
A family of hierarchical clustering algorithms based on high-order dissimilarities. Aidos, H.; and Fred, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1432-1436, Sep. 2014.
@InProceedings{6952506,\n  author = {H. Aidos and A. Fred},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A family of hierarchical clustering algorithms based on high-order dissimilarities},\n  year = {2014},\n  pages = {1432-1436},\n  abstract = {Traditional hierarchical techniques are used in many areas of research. However, they require the user to set the number of clusters or use some external criterion to find them. Also, they are unable to identify varying internal structures in classes, i.e. classes can be represented as unions of clusters. To overcome these issues, we propose a family of agglomerative hierarchical methods, which integrates a high-order dissimilarity measure, called dissimilarity increments, in traditional linkage algorithms. Dissimilarity increments are a measure over triplets of nearest neighbors. This family of algorithms is able to automatically find the number of clusters using a minimum description length criterion based on the dissimilarity increments distribution. Moreover, each algorithm of the proposed family is able to find classes as unions of clusters, leading to the identification of internal structures of classes. Experimental results show that any algorithm from the proposed family outperforms the traditional ones.},\n  keywords = {pattern clustering;hierarchical clustering algorithms;high-order dissimilarity measure;internal structure variation identification;agglomerative hierarchical methods;dissimilarity increments distribution;minimum description length criterion;Clustering algorithms;Indexes;Merging;Algorithm design and analysis;Couplings;Data models;Machine learning algorithms;Hierarchical clustering;dissimilarity increments;agglomerative methods},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926777.pdf},\n}\n\n
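The dissimilarity increment is defined over nearest-neighbour triplets; a small sketch of its computation, with the triplet convention paraphrased from the literature (the linkage algorithms and MDL criterion built on top of it are not reproduced):

import numpy as np
from scipy.spatial.distance import cdist

def dissimilarity_increments(X):
    # For each point x: y is its nearest neighbour, z is y's nearest neighbour
    # other than x; the increment is |d(x, y) - d(y, z)|.
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)
    incs = []
    for i in range(len(X)):
        j = D[i].argmin()
        row = D[j].copy(); row[i] = np.inf
        k = row.argmin()
        incs.append(abs(D[i, j] - D[j, k]))
    return np.array(incs)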
Segmentation and time-frequency analysis of pathological Heart Sound Signals using the EMD method. Boutana, D.; Benidir, M.; and Barkat, B. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1437-1441, Sep. 2014.
@InProceedings{6952507,\n  author = {D. Boutana and M. Benidir and B. Barkat},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Segmentation and time-frequency analysis of pathological Heart Sound Signals using the EMD method},\n  year = {2014},\n  pages = {1437-1441},\n  abstract = {The Phonocardiogram (PCG) is the graphical representation of acoustic energy due to the mechanical cardiac activity. Sometimes cardiac diseases provide pathological murmurs mixed with the main components of the Heart Sound Signal (HSs). The Empirical Mode Decomposition (EMD) allows decomposing a multicomponent signal into a set of monocomponent signals, called Intrinsic Mode Functions (IMFs). Each IMF represents an oscillatory mode with one instantaneous frequency. The goal of this paper is to segment some pathological HSs by selecting the most appropriate IMFs using the correlation coefficient. Then we extract some time-frequency characteristics considered as useful parameters to distinguish different cases of heart diseases. The experimental results conducted on some real-life pathological HSs such as: Mitral Regurgitation (MR), Aortic Regurgitation (AR) and the Opening Snap (OS) case; revealed the performance of the proposed method.},\n  keywords = {acoustic signal processing;biomechanics;diseases;medical signal processing;phonocardiography;time-frequency analysis;time-frequency analysis;segmentation;pathological heart sound signals;EMD method;phonocardiogram;acoustic energy graphical representation;mechanical cardiac activity;cardiac diseases;pathological murmurs;heart sound signal;empirical mode decomposition;multicomponent signal decomposition;intrinsic mode functions;oscillatory mode;instantaneous frequency;pathological HSs;correlation coefficient;time-frequency characteristics;mitral regurgitation;aortic regurgitation;opening snap case;Pathology;Heart;Time-frequency analysis;Correlation coefficient;Empirical mode decomposition;Phonocardiography;Indexes;Empirical mode decomposition;heart sound signal;pathological murmurs;correlation function},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
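A sketch of the IMF-selection step, assuming the third-party PyEMD package provides the decomposition; the 0.1 correlation threshold is an illustrative value, not the paper's.

import numpy as np
from PyEMD import EMD   # assumed dependency: the PyEMD package

def select_imfs(signal, threshold=0.1):
    # Decompose a heart-sound segment and keep the IMFs whose correlation
    # with the original signal exceeds a threshold.
    signal = np.asarray(signal, dtype=float)
    imfs = EMD().emd(signal)
    keep = [imf for imf in imfs
            if abs(np.corrcoef(imf, signal)[0, 1]) > threshold]
    return np.sum(keep, axis=0) if keep else np.zeros_like(signal)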
An efficient, approximate path-following algorithm for elastic net based nonlinear spike enhancement. Little, M. A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1442-1446, Sep. 2014.
@InProceedings{6952508,\n  author = {M. A. Little},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {An efficient, approximate path-following algorithm for elastic net based nonlinear spike enhancement},\n  year = {2014},\n  pages = {1442-1446},\n  abstract = {Unwanted `spike noise' in a digital signal is a common problem in digital filtering. However, sometimes the spikes are wanted and other, superimposed, signals are unwanted, and linear, time invariant (LTI) filtering is ineffective because the spikes are wideband - overlapping with independent noise in the frequency domain. So, no LTI filter can separate them, necessitating nonlinear filtering. However, there are applications in which the `noise' includes drift or smooth signals for which LTI filters are ideal. We describe a nonlinear filter formulated as the solution to an elastic net regularization problem, which attenuates band-limited signals and independent noise, while enhancing superimposed spikes. Making use of known analytic solutions a novel, approximate path-following algorithm is given that provides a good, filtered output with reduced computational effort by comparison to standard convex optimization methods. Accurate performance is shown on real, noisy electrophysiological recordings of neural spikes.},\n  keywords = {bioelectric phenomena;biology computing;convex programming;frequency-domain analysis;Gaussian noise;nonlinear filters;signal denoising;approximate path-following algorithm;elastic net based nonlinear spike enhancement;digital signal;unwanted spike noise;digital filtering;time invariant filtering;LTI filtering;linear filtering;independent noise;wideband-overlapping;frequency domain;nonlinear filtering;smooth signals;drift signals;elastic net regularization problem;band-limited signals;superimposed spike enhancment;convex optimization methods;noisy electrophysiological recordings;neural spikes;Gaussian noise removal;Noise;Digital filters;Approximation algorithms;Filtering algorithms;Maximum likelihood detection;Nonlinear filters;Filter;regularization;nonlinear;spike;noise},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
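The "known analytic solutions" the abstract alludes to are easiest to see in the separable (identity-design) case, where the elastic net minimiser is a soft-threshold followed by a shrink; a sketch of that special case (the paper's path-following over the regularization parameters, and its handling of drift, are not shown):

import numpy as np

def elastic_net_spike_filter(y, lam1, lam2):
    # Closed-form minimiser of 0.5*||y - x||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2
    # for the identity design: small fluctuations are zeroed, spikes survive.
    return np.sign(y) * np.maximum(np.abs(y) - lam1, 0.0) / (1.0 + lam2)

t = np.linspace(0, 1, 500)
y = 0.2 * np.sin(2 * np.pi * 3 * t) + 0.05 * np.random.randn(500)
y[[100, 300]] += 3.0                      # two superimposed spikes
x = elastic_net_spike_filter(y, lam1=0.5, lam2=0.1)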
A latent variable-based Bayesian regression to address recording replications in Parkinson's Disease. Pérez, C. J.; Naranjo, L.; Martín, J.; and Campos-Roca, Y. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1447-1451, Sep. 2014.
@InProceedings{6952509,\n  author = {C. J. Pérez and L. Naranjo and J. Martín and Y. Campos-Roca},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A latent variable-based Bayesian regression to address recording replications in Parkinson's Disease},\n  year = {2014},\n  pages = {1447-1451},\n  abstract = {Subject-based approaches are proposed to automatically discriminate healthy people from those with Parkinson's Disease (PD) by using speech recordings. These approaches have been applied to one of the most used PD datasets, which contains repeated measurements in an imbalanced design. Most of the published methodologies applied to perform classification from this dataset fail to account for the dependent nature of the data. This fact artificially increases the sample size and leads to a diffuse criterion to define which subject is suffering from PD. The first proposed approach is based on data aggregation. This reduces the sample size, but defines a clear criterion to discriminate subjects. The second one handles repeated measurements by introducing latent variables in a Bayesian logistic regression framework. The proposed approaches are conceptually simple and easy to implement.},\n  keywords = {Bayes methods;diseases;regression analysis;speech;Parkinson disease;subject-based approaches;speech recordings;data aggregation;latent variable;Bayesian logistic regression framework;Accuracy;Speech;Logistics;Bayes methods;Parkinson's disease;Testing;Training;Bayesian logistic regression;Data aggregation;Latent variable;Machine learning;Parkinson's disease;Voice features},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569920821.pdf},\n}\n\n
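The first (aggregation) approach is simple to sketch: collapse the replicated recordings of each subject before classifying, so that each subject contributes exactly one sample and one decision. Column names below are hypothetical, and a plain (non-Bayesian) logistic regression stands in for the paper's Bayesian framework.

import pandas as pd
from sklearn.linear_model import LogisticRegression

def subject_level_classifier(df, feature_cols):
    # Aggregate the repeated voice recordings per subject (mean of features),
    # keep one diagnosis label per subject, then fit a subject-level classifier.
    agg = df.groupby("subject")[feature_cols].mean()
    labels = df.groupby("subject")["pd"].first()
    return LogisticRegression(max_iter=1000).fit(agg.values, labels.values)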
An automotive wideband stereo acoustic echo canceler using frequency-domain adaptive filtering. Jung, M.; Elshamy, S.; and Fingscheidt, T. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1452-1456, Sep. 2014.
@InProceedings{6952510,\n  author = {M. Jung and S. Elshamy and T. Fingscheidt},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {An automotive wideband stereo acoustic echo canceler using frequency-domain adaptive filtering},\n  year = {2014},\n  pages = {1452-1456},\n  abstract = {We present an improved state-space frequency-domain acoustic echo canceler (AEC), which makes use of Kalman filtering theory to achieve very good convergence performance, particularly in double talk. Our contribution can be considered threefold: The proposed approach is designed to suit an automotive wideband overlap-save (OLS) setup, to operate best in this distinctive use case. Second, we provide a temporal smoothing and overestimation approach for two particular noise covariance matrices to improve echo return loss enhancement (ERLE) performance. Furthermore, we integrate an adapted perceptually transparent decorrelation preprocessor, which makes use of human insensitivity against appropriately chosen frequency-selective phase modulation, to improve robustness against far-end impulse response changes.},\n  keywords = {adaptive filters;automotive components;covariance matrices;echo;echo suppression;filtering theory;frequency-domain analysis;Kalman filters;mechanical engineering computing;phase modulation;frequency-selective phase modulation;transparent decorrelation preprocessor;echo return loss enhancement;noise covariance matrices;OLS setup;automotive wideband overlap-save setup;Kalman filtering theory;AEC;state-space frequency-domain acoustic echo canceler;frequency-domain adaptive filtering;automotive wideband stereo acoustic echo canceler;Acoustics;Speech;Noise;Frequency-domain analysis;Convergence;Decorrelation;Automotive engineering;AEC;automotive;wideband;FDAF;decorrelation;preprocessor},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569912189.pdf},\n}\n\n
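For orientation, a plain overlap-save frequency-domain adaptive filter (constrained FLMS with per-bin normalization) is sketched below; the paper's canceler replaces this gradient update with a state-space Kalman update and adds the smoothing, overestimation and decorrelation components described in the abstract.

import numpy as np

def fdaf_ols(x, d, N=256, mu=0.5):
    # x: far-end reference, d: microphone signal (same length); returns the error.
    W = np.zeros(2 * N, dtype=complex)            # weights in the DFT domain
    e_out = np.zeros(len(d))
    for k in range(0, len(x) - 2 * N + 1, N):
        X = np.fft.fft(x[k:k + 2 * N])
        y = np.fft.ifft(X * W).real[N:]           # valid (last N) samples of block
        e = d[k + N:k + 2 * N] - y
        E = np.fft.fft(np.concatenate([np.zeros(N), e]))
        G = np.conj(X) * E / (np.abs(X) ** 2 + 1e-6)  # normalized correlation
        g = np.fft.ifft(G).real[:N]               # gradient constraint
        W += mu * np.fft.fft(np.concatenate([g, np.zeros(N)]))
        e_out[k + N:k + 2 * N] = e
    return e_out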
A strategy for LF-based glottal-source vocal-tract estimation on stationary modal singing. Villavicencio, F. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1457-1461, Sep. 2014.
@InProceedings{6952511,\n  author = {F. Villavicencio},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A strategy for LF-based glottal-source vocal-tract estimation on stationary modal singing},\n  year = {2014},\n  pages = {1457-1461},\n  abstract = {This paper presents a methodology for estimation and modeling of the glottal source and vocal-tract information. The strategy proposes a simplified framework based on the characteristics of stationary singing following a selection of glottal pulse model candidates driven by a single shape parameter. True-Envelope based models are applied, allowing efficient modeling of the observed filter information and accurate cancellation of the glottal source contribution in the spectrum. According to experimental studies on synthetic and real signals the methodology observes adequate approximation of the source and filter information, leading to natural resynthesis quality using synthetic glottal excitation. The proposed estimation framework represents a promising technique for voice transformation on stationary modal voice.},\n  keywords = {approximation theory;filtering theory;speech synthesis;LF-based glottal-source;vocal-tract estimation;stationary modal singing;simplified framework;glottal pulse model;true-envelope based models;natural resynthesis quality;synthetic glottal excitation;Estimation;Harmonic analysis;Approximation methods;Speech;Shape;Discrete Fourier transforms;Time-domain analysis;Speech analysis;speech synthesis;glottal source estimation;vocal-tract estimation},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
Zero Phase speech representation for robust formant tracking. González, D. R.; Solano, E. L.; and Calvo de Lara, J. R. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1462-1466, Sep. 2014.
@InProceedings{6952512,\n  author = {D. R. González and E. L. Solano and J. R. {Calvo de Lara}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Zero Phase speech representation for robust formant tracking},\n  year = {2014},\n  pages = {1462-1466},\n  abstract = {In this paper we present a speech representation based on the Linear Predictive Coding of the Zero Phase version of the signal (ZP-LPC) and its robustness in presence of additive noise for robust formant estimation. Two representations are proposed for using in the frequency candidate proposition stage of the formant tracking algorithm: 1) the roots of ZP-LPC and 2) the peaks of its group delay function (GDF). Both of them are studied and evaluated in noisy environments with a synthetic dataset to demonstrate their robustness. Proposed representations are then used in a formant tracking experiment with a speech database. A beam search algorithm is used for selecting the best candidates as formant. Results show that our method outperforms related techniques in noisy test configurations and is a good fit for use in applications that have to work in noisy environments.},\n  keywords = {linear predictive coding;search problems;speech coding;ZP-LPC;beam search algorithm;group delay function;linear predictive coding;robust formant tracking;zero phase speech representation;Speech;Robustness;Signal to noise ratio;Noise measurement;Speech recognition;Correlation;zero phase;linear predictive coding;group delay function;formant tracking},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917455.pdf},\n}\n\n
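A sketch of the root-based candidate proposition: build the zero-phase signal from the magnitude spectrum, fit LPC by the autocorrelation method, and take formant candidates from the LPC root angles (the group-delay variant is not shown). Frame, order and sampling rate are placeholders.

import numpy as np
from scipy.linalg import solve_toeplitz

def zp_lpc_formants(frame, fs, order=12):
    # Zero-phase signal: back-transform of the magnitude spectrum only.
    zp = np.fft.irfft(np.abs(np.fft.rfft(frame)))
    # Autocorrelation-method LPC: solve the Toeplitz normal equations.
    r = np.correlate(zp, zp, mode='full')[len(zp) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])
    # Formant candidates from the angles of the prediction-polynomial roots.
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]     # one root per conjugate pair
    return np.sort(np.angle(roots) * fs / (2 * np.pi))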
Gaussian Power flow Orientation Coefficients for noise-robust speech recognition. Gerazov, B.; and Ivanovski, Z. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1467-1471, Sep. 2014.
@InProceedings{6952533,\n  author = {B. Gerazov and Z. Ivanovski},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Gaussian Power flow Orientation Coefficients for noise-robust speech recognition},\n  year = {2014},\n  pages = {1467-1471},\n  abstract = {Spectro-temporal features have shown a great promise in respect to improving the noise-robustness of Automatic Speech Recognition (ASR) systems. The common approach uses a bank of 2D Gabor filters to process the speech signal spectrogram and generate the output feature vector. This approach suffers from generating a large number of coefficients, thus necessitating the use of feature dimensionality reduction. The proposed Gaussian Power flow Orientation Coefficients (GPOCs) use an alternative approach in which only the largest coefficients output from a bank of 2D Gaussian kernels are used to describe the spectro-temporal patterns of power flow in the auditory spectrogram. Whilst reducing the size of the feature vectors, the algorithm was shown to outperform traditional feature extraction methods, even a reference spectro-temporal approach, for low SNRs. Its performance for high SNRs is comparable but inferior to traditional ASR frontends, while falling behind state-of-the-art algorithms in all noise scenarios.},\n  keywords = {channel bank filters;feature extraction;Gabor filters;Gaussian processes;speech recognition;Gaussian power flow orientation coefficients;noise-robust speech recognition;spectro-temporal features;automatic speech recognition systems;ASR system;2D Gabor filter bank;speech signal spectrogram processing;output feature vector generation;feature dimensionality reduction;GPOCs;2D Gaussian kernel bank;auditory spectrogram;feature vector size reduction;feature extraction methods;reference spectro-temporal approach;SNRs;ASR frontends;Spectrogram;Kernel;Feature extraction;Load flow;Speech;Training;Gabor filters;ASR;noise-robust;spectro-temporal;2D Gaussian;kernel},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921711.pdf},\n}\n\n
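A sketch of the kernel-bank idea: filter a (log-power) spectrogram with a few oriented anisotropic 2-D Gaussian kernels and keep, per time-frequency point, only the strongest response and its orientation. Kernel sizes and widths below are arbitrary illustrative values, not the paper's settings.

import numpy as np
from scipy.signal import convolve2d

def rotated_gaussian(sigma_x, sigma_y, theta, size=15):
    # Anisotropic 2-D Gaussian kernel rotated by theta.
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax)
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    Yr = -X * np.sin(theta) + Y * np.cos(theta)
    k = np.exp(-0.5 * ((Xr / sigma_x) ** 2 + (Yr / sigma_y) ** 2))
    return k / k.sum()

def dominant_orientation(spec, n_orient=8):
    # Per point, keep only the largest response across the orientation bank.
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    resp = np.stack([convolve2d(spec, rotated_gaussian(6, 2, t), mode='same')
                     for t in thetas])
    return resp.max(axis=0), thetas[resp.argmax(axis=0)]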
\n
\n\n\n
\n Spectro-temporal features have shown great promise for improving the noise robustness of Automatic Speech Recognition (ASR) systems. The common approach uses a bank of 2D Gabor filters to process the speech signal spectrogram and generate the output feature vector. This approach suffers from generating a large number of coefficients, thus necessitating feature dimensionality reduction. The proposed Gaussian Power flow Orientation Coefficients (GPOCs) take an alternative approach in which only the largest coefficients output by a bank of 2D Gaussian kernels are used to describe the spectro-temporal patterns of power flow in the auditory spectrogram. While reducing the size of the feature vectors, the algorithm was shown to outperform traditional feature extraction methods, including a reference spectro-temporal approach, at low SNRs. Its performance at high SNRs is comparable to but below that of traditional ASR frontends, and it falls behind state-of-the-art algorithms in all noise scenarios.\n
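A minimal sketch of the kernel-bank idea: filter a (log-power) spectrogram with a few oriented 2D Gaussian kernels and keep only the strongest response per time-frequency bin. The kernel size, aspect ratio, orientations, and mean pooling are illustrative assumptions, not the paper's exact GPOC design.

```python
import numpy as np
from scipy.signal import convolve2d

def oriented_gaussian(size, sigma, theta, aspect=3.0):
    # Elongated 2D Gaussian rotated by theta: a crude orientation-selective kernel.
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax)
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    Yr = -X * np.sin(theta) + Y * np.cos(theta)
    k = np.exp(-(Xr**2 / (2 * sigma**2) + Yr**2 / (2 * (aspect * sigma)**2)))
    return k / k.sum()

def orientation_features(spec, n_orient=4):
    # Keep, per T-F bin, only the largest oriented-kernel response,
    # then pool over time to obtain a compact per-band feature vector.
    thetas = np.linspace(0.0, np.pi, n_orient, endpoint=False)
    resp = np.stack([convolve2d(spec, oriented_gaussian(9, 1.0, t), mode='same')
                     for t in thetas])
    return resp.max(axis=0).mean(axis=1)
```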
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Wake-up-word spotting for mobile systems.\n \n \n \n \n\n\n \n Zehetner, A.; Hagmüller, M.; and Pernkopf, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1472-1476, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Wake-up-wordPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952534,\n  author = {A. Zehetner and M. Hagmüller and F. Pernkopf},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Wake-up-word spotting for mobile systems},\n  year = {2014},\n  pages = {1472-1476},\n  abstract = {Wake-up-word (WUW) spotting for mobile devices has attracted much attention recently. The aim is to detect the occurrence of very few or only one personalized keyword in a continuous potentially noisy audio signal. The application in personal mobile devices is to activate the device or to trigger an alarm in hazardous situations by voice. In this paper, we present a low-resource approach and results for WUW spotting based on template matching using dynamic time warping and other measures. The recognition of the WUW is performed by a combination of distance measures based on a simple background noise level classification. For evaluation we recorded a WUW spotting database with three different background noise levels, four different speaker distances to the microphone, and ten different speakers. It consists of 480 keywords embedded in continuous audio data.},\n  keywords = {audio signal processing;mobile computing;signal classification;signal denoising;speech processing;speech recognition;speech-based user interfaces;wake-up-word spotting;mobile systems;continuous potentially noisy audio signal;personal mobile devices;low-resource approach;WUW spotting database;template matching;dynamic time warping;distance measures;background noise level classification;keyword spotting;single-phrase recognition systems;Noise measurement;Speech;Noise;Databases;Hidden Markov models;Mel frequency cepstral coefficient;Speech recognition;Wake-up-Word spotting;keyword spotting;dynamic time warping},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923197.pdf},\n}\n\n
\n
\n\n\n
\n Wake-up-word (WUW) spotting for mobile devices has attracted much attention recently. The aim is to detect the occurrence of only one, or very few, personalized keywords in a continuous, potentially noisy audio signal. The application in personal mobile devices is to activate the device or to trigger an alarm in hazardous situations by voice. In this paper, we present a low-resource approach and results for WUW spotting based on template matching using dynamic time warping and other measures. Recognition of the WUW is performed by a combination of distance measures selected according to a simple background noise level classification. For evaluation we recorded a WUW spotting database with three different background noise levels, four different speaker distances to the microphone, and ten different speakers. It consists of 480 keywords embedded in continuous audio data.\n
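The template-matching core is ordinary dynamic time warping; a textbook sketch with a length-normalised cost follows (the paper combines several distance measures, which is not reproduced here).

```python
import numpy as np

def dtw_distance(template, test):
    # Classic DTW between two feature sequences (rows = frames):
    # accumulate local Euclidean distances along the best warping path.
    T, U = len(template), len(test)
    cost = np.full((T + 1, U + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, U + 1):
            d = np.linalg.norm(template[i - 1] - test[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[T, U] / (T + U)   # normalise so a fixed threshold can be applied
```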
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient rule scoring for improved grapheme-based lexicons.\n \n \n \n \n\n\n \n Hartmann, W.; Lamel, L.; and Gauvain, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1477-1481, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952535,\n  author = {W. Hartmann and L. Lamel and J. Gauvain},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient rule scoring for improved grapheme-based lexicons},\n  year = {2014},\n  pages = {1477-1481},\n  abstract = {For many languages, an expert-defined phonetic lexicon may not exist. One popular alternative is the use of a grapheme-based lexicon. However, there may be a significant difference between the orthography and the pronunciation of the language. In our previous work, we proposed a statistical machine translation based approach to improving grapheme-based pronunciations. Without knowledge of true target pronunciations, a phrase table was created where each individual rule improved the likelihood of the training data when applied. The approach improved recognition accuracy, but required significant computational cost. In this work, we propose an improvement that increases the speed of the process by more than 80 times without decreasing recognition accuracy.},\n  keywords = {language translation;speech recognition;statistical analysis;rule scoring;improved grapheme-based lexicons;expert-defined phonetic lexicon;orthography;statistical machine translation based approach;language pronunciation;grapheme based pronunciations;automatic speech recognition;Hidden Markov models;Abstracts;Acoustics;automatic speech recognition;grapheme-based speech recognition;pronunciation learning},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924673.pdf},\n}\n\n
\n
\n\n\n
\n For many languages, an expert-defined phonetic lexicon may not exist. One popular alternative is the use of a grapheme-based lexicon. However, there may be a significant difference between the orthography and the pronunciation of the language. In our previous work, we proposed a statistical machine translation based approach to improving grapheme-based pronunciations. Without knowledge of the true target pronunciations, a phrase table was created in which each individual rule, when applied, improved the likelihood of the training data. The approach improved recognition accuracy, but incurred significant computational cost. In this work, we propose an improvement that speeds up the process by more than a factor of 80 without decreasing recognition accuracy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Missing feature reconstruction methods for robust speaker identification.\n \n \n \n \n\n\n \n Zhang, X.; Zhang, H.; and Gao, G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1482-1486, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"MissingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952536,\n  author = {X. Zhang and H. Zhang and G. Gao},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Missing feature reconstruction methods for robust speaker identification},\n  year = {2014},\n  pages = {1482-1486},\n  abstract = {In this study, we propose a reconstruction method to restore the degraded features for robust speaker identification. The proposed method is based on a hybrid generative model which consists of deep belief network (DBN) and restricted Boltzmann machine (RBM). Specifically, the noisy speech is firstly decomposed into time-frequency (T-F) representations. Then ideal binary mask (IBM) is computed to indicate each T-F point as reliable or unreliable. We reconstruct the unreliable ones by the proposed model iteratively. Finally, reconstructed feature is utilized to conventional speaker identification system. Experiments demonstrate that the proposed method achieves significant performance improvements over previous missing feature techniques under a wide range of signal-to-noise ratios.},\n  keywords = {signal reconstruction;signal representation;signal restoration;speaker recognition;time-frequency analysis;missing feature reconstruction methods;robust speaker identification system;deep belief network;hybrid generative model;DBN;restricted Boltzmann machine;RBM;noisy speech;time-frequency representations;T-F representations;ideal binary mask;IBM;T-F point;signal-to-noise ratios;Robustness;Abstracts;Computational modeling;Adaptation models;Data models;Production facilities;Smoothing methods;Robust speaker identification;Missing feature techniques;Restricted Boltzmann machine;Deep belief network},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925057.pdf},\n}\n\n
\n
\n\n\n
\n In this study, we propose a reconstruction method to restore degraded features for robust speaker identification. The proposed method is based on a hybrid generative model consisting of a deep belief network (DBN) and a restricted Boltzmann machine (RBM). Specifically, the noisy speech is first decomposed into time-frequency (T-F) representations. Then an ideal binary mask (IBM) is computed to label each T-F point as reliable or unreliable. We reconstruct the unreliable points iteratively with the proposed model. Finally, the reconstructed features are fed to a conventional speaker identification system. Experiments demonstrate that the proposed method achieves significant performance improvements over previous missing feature techniques across a wide range of signal-to-noise ratios.\n
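The reliability-labelling step is easy to state in isolation; a minimal sketch follows, with a 0 dB local-SNR threshold as an assumed convention (and with the caveat that, outside oracle experiments, the mask must be estimated rather than computed from clean references).

```python
import numpy as np

def ideal_binary_mask(clean_tf, noise_tf, threshold_db=0.0):
    # A T-F point is labelled reliable when the local SNR exceeds the
    # threshold; the unreliable points are handed to the reconstruction model.
    local_snr = 10 * np.log10(np.abs(clean_tf)**2 / (np.abs(noise_tf)**2 + 1e-12))
    return local_snr > threshold_db        # boolean mask, True = reliable
```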
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Combining temporal and spectral information for Query-by-Example Spoken Term Detection.\n \n \n \n \n\n\n \n Gracia, C.; Anguera, X.; and Binefa, X.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1487-1491, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"CombiningPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952537,\n  author = {C. Gracia and X. Anguera and X. Binefa},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Combining temporal and spectral information for Query-by-Example Spoken Term Detection},\n  year = {2014},\n  pages = {1487-1491},\n  abstract = {We present a system for Query-by-Example Spoken Term Detection on zero-resource languages. The system compares speech patterns by representing the signal using two different acoustic models, a Spectral Acoustic (SA) model covering the spectral characteristics of the signal, and a Temporal Acoustic (TA) model covering the temporal evolution of the speech signal. Given a query and a utterance to be compared, first we compute their posterior probabilities according to each of the two models, compute similarity matrices for each model and combine these into a single enhanced matrix. Subsequence-Dynamic Time Warping (S-DTW) algorithm is used to find optimal subsequence alignment paths on this final matrix. Our experiments on data from the 2013 Spoken Web Search (SWS) task at Mediaeval benchmark evaluation show that this approach provides state of the art results and significantly improves both the single model strategies and the standard metric baselines.},\n  keywords = {audio databases;learning (artificial intelligence);pattern matching;query processing;speech processing;query-by-example spoken term detection;optimal subsequence alignment paths;subsequence dynamic time warping algorithm;speech signal;temporal acoustic model;spectral acoustic model;speech patterns;zero resource languages;spectral information;temporal information;Acoustics;Speech;Vectors;Data models;Computational modeling;Hidden Markov models;Adaptation models;Query by example;zero resources languages;unsupervised learning;long temporal context},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925447.pdf},\n}\n\n
\n
\n\n\n
\n We present a system for Query-by-Example Spoken Term Detection on zero-resource languages. The system compares speech patterns by representing the signal using two different acoustic models: a Spectral Acoustic (SA) model covering the spectral characteristics of the signal, and a Temporal Acoustic (TA) model covering the temporal evolution of the speech signal. Given a query and an utterance to be compared, we first compute their posterior probabilities according to each of the two models, compute a similarity matrix for each model, and combine these into a single enhanced matrix. A Subsequence Dynamic Time Warping (S-DTW) algorithm is used to find optimal subsequence alignment paths on this final matrix. Our experiments on data from the 2013 Spoken Web Search (SWS) task of the MediaEval benchmark evaluation show that this approach provides state-of-the-art results and significantly improves on both the single-model strategies and the standard metric baselines.\n
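A common way to realise the per-model similarity and fusion steps in posterior-based QbE-STD is a log inner product of posterior vectors followed by a simple average; the sketch below assumes that reading (the paper's exact combination rule may differ). S-DTW would then search the combined matrix for the best-scoring subsequence alignment path.

```python
import numpy as np

def similarity_matrix(post_query, post_utt):
    # Frame-by-frame similarity from model posteriors (rows = frames):
    # the log inner product often used in posterior-based matching.
    return np.log(post_query @ post_utt.T + 1e-10)

def combined_similarity(sa_q, sa_u, ta_q, ta_u):
    # Fuse the Spectral Acoustic and Temporal Acoustic similarities into
    # one enhanced matrix; equal weighting is an assumption here.
    return 0.5 * (similarity_matrix(sa_q, sa_u) + similarity_matrix(ta_q, ta_u))
```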
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Analysis of emotional speech using an adaptive sinusoidal model.\n \n \n \n \n\n\n \n Kafentzis, G. P.; Yakoumaki, T.; Mouchtaris, A.; and Stylianou, Y.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1492-1496, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AnalysisPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952538,\n  author = {G. P. Kafentzis and T. Yakoumaki and A. Mouchtaris and Y. Stylianou},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Analysis of emotional speech using an adaptive sinusoidal model},\n  year = {2014},\n  pages = {1492-1496},\n  abstract = {Processing of emotional (or expressive) speech has gained attention over recent years in the speech community due to its numerous applications. In this paper, an adaptive sinusoidal model (aSM), dubbed extended adaptive Quasi-Harmonic Model - eaQHM, is employed to analyze emotional speech in accurate, robust, continuous, timevarying parameters (amplitude, frequency, and phase). It is shown that these parameters can adequately and accurately represent emotional speech content. Using a well known database of narrowband expressive speech (SUSAS) we show that very high Signal-to-Reconstruction-Error Ratio (SRER) values can be obtained, compared to the standard sinusoidal model (SM). Formal listening tests on a smaller wideband speech database show that the eaQHM outperforms SM from a perceptual resynthesis quality point of view. Finally, preliminary emotion classification tests show that the parameters obtained from the adaptive model lead to a higher classification score, compared to the standard SM parameters.},\n  keywords = {signal reconstruction;speech processing;emotional speech analysis;adaptive sinusoidal model;emotional speech processing;expressive speech processing;speech community;SUSAS;signal-to-reconstruction-error ratio;wideband speech database;emotion classification;extended adaptive quasiharmonic model;Speech;Databases;Adaptation models;Hidden Markov models;Analytical models;Speech recognition;Stress;Extended adaptive quasi-harmonic model;Speech analysis;Emotional speech;Sinusoidal modelling;Emotion classification},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925609.pdf},\n}\n\n
\n
\n\n\n
\n Processing of emotional (or expressive) speech has gained attention over recent years in the speech community due to its numerous applications. In this paper, an adaptive sinusoidal model (aSM), dubbed the extended adaptive Quasi-Harmonic Model (eaQHM), is employed to analyze emotional speech into accurate, robust, continuous, time-varying parameters (amplitude, frequency, and phase). It is shown that these parameters can adequately and accurately represent the emotional speech content. Using a well-known database of narrowband expressive speech (SUSAS), we show that very high Signal-to-Reconstruction-Error Ratio (SRER) values can be obtained, compared to the standard sinusoidal model (SM). Formal listening tests on a smaller wideband speech database show that the eaQHM outperforms the SM in perceptual resynthesis quality. Finally, preliminary emotion classification tests show that the parameters obtained from the adaptive model lead to a higher classification score than the standard SM parameters.\n
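The SRER figure of merit is compact enough to state directly; the sketch below uses the usual definition, the ratio of original-signal to reconstruction-residual standard deviation, in dB.

```python
import numpy as np

def srer_db(x, x_hat):
    # Signal-to-Reconstruction-Error Ratio: higher means the model
    # resynthesis x_hat tracks the original signal x more closely.
    return 20 * np.log10(np.std(x) / (np.std(x - x_hat) + 1e-12))
```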
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interference detection in GNSS signals using the Gaussianity criterion.\n \n \n \n \n\n\n \n Nunes, F. D.; and Sousa, F. M. G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1497-1501, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"InterferencePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952539,\n  author = {F. D. Nunes and F. M. G. Sousa},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Interference detection in GNSS signals using the Gaussianity criterion},\n  year = {2014},\n  pages = {1497-1501},\n  abstract = {We assess the performance of Gaussianity tests, namely the Anscombe-Glynn, Lilliefors, Cramér-von Mises, and Giannakis-Tsatsanis (G-T), with the purpose of detecting narrowband and wideband interference in GNSS signals. Simulations have shown that the G-T test outperforms the others being suitable as a benchmark for comparison with different types of interference detection algorithms.},\n  keywords = {Gaussian processes;interference (signal);satellite navigation;signal detection;wideband interference detection;narrowband interference detection;Giannakis Tsatsanis;Cramer-von Mises;Lilliefors;Anscombe Glynn;Gaussianity criterion;GNSS signals;Interference;Global Positioning System;Chirp;Jamming;Random variables;Wideband;Satellites;Gaussianity tests;interference detection;kurtosis;cumulants},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569919039.pdf},\n}\n\n
\n
\n\n\n
\n We assess the performance of Gaussianity tests, namely the Anscombe-Glynn, Lilliefors, Cramér-von Mises, and Giannakis-Tsatsanis (G-T) tests, for the purpose of detecting narrowband and wideband interference in GNSS signals. Simulations have shown that the G-T test outperforms the others, making it suitable as a benchmark for comparison with different types of interference detection algorithms.\n
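All four tests exploit the fact that an interference-free GNSS sample stream is close to Gaussian. As a simplified stand-in for the fourth-order-cumulant (G-T) idea, an excess-kurtosis statistic already captures the intuition; the threshold below is an assumption and would in practice be set from the statistic's distribution under the Gaussian hypothesis.

```python
import numpy as np
from scipy.stats import kurtosis

def interference_flag(samples, threshold=0.5):
    # Excess kurtosis is ~0 for Gaussian data, so a large magnitude
    # suggests non-Gaussian content, i.e. possible interference.
    k = kurtosis(np.real(samples), fisher=True, bias=False)
    return abs(k) > threshold, k
```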
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Array-broadband effects on direct geolocation algorithm.\n \n \n \n \n\n\n \n Delestre, C.; Ferréol, A.; and Larzabal, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1502-1506, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Array-broadbandPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952540,\n  author = {C. Delestre and A. Ferréol and P. Larzabal},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Array-broadband effects on direct geolocation algorithm},\n  year = {2014},\n  pages = {1502-1506},\n  abstract = {Recent works have introduced powerful 1-step geolocation methods in comparison with traditional, and suboptimal, 2-steps methods. As these 1-step methods directly and simultaneously work on the observations of the whole array, there is now an important issue concerning the possible array-broadband effect. To counteract that effect, the recent methods introduce an imperfect narrowband decomposition, by the way of a filter bank or, equivalently, by a structured multidimensional modelization. The purpose of this work is to study the residual array-broadband effect on the 1-step algorithms performances. The study will compare two 1-step methods by the way of the bias and the ambiguity problem, giving some tools for operational design.},\n  keywords = {array signal processing;channel bank filters;geophysical signal processing;direct geolocation algorithm;1-step geolocation methods;suboptimal 2-steps methods;imperfect narrowband decomposition;filter bank;structured multidimensional modelization;residual array-broadband effect;ambiguity problem;Narrowband;Broadband communication;Covariance matrices;Niobium;Geology;Vectors;Geolocation;Narrowband;Broadband;Parameter bias;Error on covariance matrix;LOST;DPD},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925581.pdf},\n}\n\n
\n
\n\n\n
\n Recent works have introduced powerful 1-step geolocation methods, in contrast to the traditional, and suboptimal, 2-step methods. As these 1-step methods work directly and simultaneously on the observations of the whole array, an important issue arises concerning the possible array-broadband effect. To counteract that effect, the recent methods introduce an imperfect narrowband decomposition, by way of a filter bank or, equivalently, a structured multidimensional modelization. The purpose of this work is to study the residual array-broadband effect on the performance of the 1-step algorithms. The study compares two 1-step methods in terms of bias and the ambiguity problem, giving some tools for operational design.\n
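The imperfect narrowband decomposition mentioned above is, in its simplest form, an STFT filter bank: each frequency bin yields its own set of approximately narrowband array snapshots. A minimal sketch, with the window length as an assumed parameter:

```python
import numpy as np
from scipy.signal import stft

def narrowband_snapshots(array_data, fs, nperseg=256):
    # array_data: (n_sensors, n_samples) wideband recordings.
    # Returns per-bin snapshot matrices of shape (n_bins, n_sensors, n_frames).
    # The decomposition is only approximately narrowband, which is exactly
    # the residual array-broadband effect the paper studies.
    f, _, X = stft(array_data, fs=fs, nperseg=nperseg)
    return f, np.transpose(X, (1, 0, 2))
```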
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Iterative grid search for RSS-based emitter localization.\n \n \n \n \n\n\n \n Üreten, S.; Yongaçoğlu, A.; and Petriu, E.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1507-1511, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"IterativePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952541,\n  author = {S. Üreten and A. Yongaçoğlu and E. Petriu},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Iterative grid search for RSS-based emitter localization},\n  year = {2014},\n  pages = {1507-1511},\n  abstract = {In this paper, we present a reduced complexity iterative grid-search technique for locating non-cooperating primary emitters in cognitive radio networks using received signal strength (RSS) measurements. The technique is based on dividing the search space into a smaller number of candidate subregions, selecting the best candidate that minimizes a cost function and repeating the process iteratively over the selections. We evaluate the performance of the proposed algorithm in independent shadowing scenarios and show that the performance closely approaches to that of the full search, particularly at small shadowing spread values with significantly reduced computational complexity. We also look at the performance of our algorithm when the initial search space is specified based on two different data-aided approaches using sensor measurements. Our simulation results show that the data-aided initialization schemes do not provide performance improvement over blind initialization.},\n  keywords = {cognitive radio;computational complexity;iterative methods;radio direction-finding;radiotelemetry;search problems;RSS-based emitter localization;reduced complexity iterative grid-search technique;non-cooperating primary emitter location;cognitive radio network;received signal strength measurement;RSS measurement;cost function minimization;independent shadowing algorithm;reduced computational complexity;data-aided approach;sensor measurement;data-aided initialization scheme;blind initialization},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924543.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present a reduced-complexity iterative grid-search technique for locating non-cooperating primary emitters in cognitive radio networks using received signal strength (RSS) measurements. The technique is based on dividing the search space into a small number of candidate subregions, selecting the best candidate, i.e., the one that minimizes a cost function, and repeating the process iteratively over the selections. We evaluate the performance of the proposed algorithm in independent shadowing scenarios and show that it closely approaches that of the full search, particularly at small shadowing spread values, with significantly reduced computational complexity. We also examine the performance of our algorithm when the initial search space is specified using two different data-aided approaches based on sensor measurements. Our simulation results show that the data-aided initialization schemes do not provide a performance improvement over blind initialization.\n
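The subregion idea reduces to a short coarse-to-fine loop; the sketch below zooms into the best cell of a small grid at every level. The grid size, number of levels, and the RSS cost function are placeholders rather than the paper's settings.

```python
import numpy as np

def iterative_grid_search(cost, bounds, levels=4, grid=5):
    # cost: callable (x, y) -> scalar, e.g. an RSS model-mismatch cost.
    # bounds: (xmin, xmax, ymin, ymax) initial search space.
    xmin, xmax, ymin, ymax = bounds
    for _ in range(levels):
        xs, ys = np.linspace(xmin, xmax, grid), np.linspace(ymin, ymax, grid)
        J = np.array([[cost(x, y) for x in xs] for y in ys])
        iy, ix = np.unravel_index(np.argmin(J), J.shape)
        dx, dy = (xmax - xmin) / (grid - 1), (ymax - ymin) / (grid - 1)
        xmin, xmax = xs[ix] - dx, xs[ix] + dx      # zoom into the best cell
        ymin, ymax = ys[iy] - dy, ys[iy] + dy
    return xs[ix], ys[iy]
```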
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Distance-based tuning of the EKF for indoor positioning in WSNs.\n \n \n \n \n\n\n \n Correa, A.; Barceló, M.; Morell, A.; and Vicario, J. L.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1512-1516, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Distance-basedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952542,\n  author = {A. Correa and M. Barceló and A. Morell and J. L. Vicario},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Distance-based tuning of the EKF for indoor positioning in WSNs},\n  year = {2014},\n  pages = {1512-1516},\n  abstract = {This work proposes a filtering method for indoor positioning and tracking applications which combines position, speed and heading measurements with the aim of achieving more accurate position estimates both in the short and the long term. We combine all this data using the well-known Extended Kalman Filter (EKF). The particularity in our proposal is that the EKF is configured using the designed statistical covariance matrix tuning method (SCMT), which is based on the the statistical characteristics of the position measurements. Thanks to the SCMT, the EKF is able to efficiently cope with measurements that have different degrees of uncertainty and, therefore, it achieves high accuracy also in the long-term. The system has been validated in a real environment and the results show a reduction in the positioning error of more than 48% when compared to a regular EKF in the tested scenarios.},\n  keywords = {covariance matrices;filtering theory;Kalman filters;nonlinear filters;statistical analysis;wireless sensor networks;distance-based tuning;EKF;indoor positioning;WSN;filtering method;extended Kalman filter;statistical covariance matrix tuning method;SCMT;Covariance matrices;Position measurement;Estimation;Accuracy;Wireless sensor networks;Noise measurement;Mobile nodes},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924991.pdf},\n}\n\n
\n
\n\n\n
\n This work proposes a filtering method for indoor positioning and tracking applications which combines position, speed and heading measurements with the aim of achieving more accurate position estimates in both the short and the long term. We combine all this data using the well-known Extended Kalman Filter (EKF). The particularity of our proposal is that the EKF is configured using the designed statistical covariance matrix tuning method (SCMT), which is based on the statistical characteristics of the position measurements. Thanks to the SCMT, the EKF is able to cope efficiently with measurements that have different degrees of uncertainty and therefore achieves high accuracy also in the long term. The system has been validated in a real environment, and the results show a reduction in positioning error of more than 48% compared to a regular EKF in the tested scenarios.\n
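The measurement update itself is the standard (E)KF step; what the paper changes is how the measurement covariance R is chosen. The sketch below shows the standard update together with a purely hypothetical distance-based tuning rule standing in for the SCMT (the actual method derives R from the statistics of the position measurements).

```python
import numpy as np

def kf_update(x, P, z, H, R):
    # Standard Kalman measurement update (H is the linearised Jacobian
    # in the EKF case).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def tuned_R(anchor_distances, sigma0=1.0, alpha=0.5):
    # Hypothetical tuning rule: measurement noise grows with the mean
    # distance to the ranging anchors. sigma0 and alpha are assumed values.
    sigma = sigma0 + alpha * np.mean(anchor_distances)
    return sigma**2 * np.eye(2)
```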
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improved pseudolite navigation using C/N0 measurements.\n \n \n \n \n\n\n \n Borio, D.; and Gioia, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1517-1521, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952543,\n  author = {D. Borio and C. Gioia},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Improved pseudolite navigation using C/N0 measurements},\n  year = {2014},\n  pages = {1517-1521},\n  abstract = {The problem of indoor navigation using pseudolites is investigated and two different approaches, employing synchronous and asynchronous technologies, are considered. It is shown that synchronous pseudolite systems, commonly considered more accurate, seem to be unsuitable for deep indoor operations: in complex propagation environments, the synchronization required for metre level navigation is difficult to achieve and a different solution should be adopted. The potential of asynchronous pseudolite systems is demonstrated and indoor navigation with metre level accuracy is obtained using C/N0 measurements. In particular, the spectral characteristics of C/N0 measurements are investigated and used to design a pre-filtering stage which, in turn, is employed to remove high-frequency noise. Pre-filtering significantly improves the navigation performance in harsh indoor environments.},\n  keywords = {filtering theory;indoor communication;satellite navigation;synchronisation;improved pseudolite navigation;C/N0 measurements;indoor navigation;asynchronous technology;synchronous technology;synchronous pseudolite systems;synchronization;deep indoor operations;complex propagation environments;metre level navigation;asynchronous pseudolite systems;spectral characteristics;pre-filtering stage design;high-frequency noise removal;harsh indoor environments;global navigation satellite system;GNSS;Global Positioning System;Receivers;Synchronization;Noise;Accuracy;Global Navigation Satellite System (GNSS);Indoor Navigation;Pseudolites;Receiver Signal Strength Indicator (RSSI)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921621.pdf},\n}\n\n
\n
\n\n\n
\n The problem of indoor navigation using pseudolites is investigated and two different approaches, employing synchronous and asynchronous technologies, are considered. It is shown that synchronous pseudolite systems, commonly considered more accurate, seem to be unsuitable for deep indoor operations: in complex propagation environments, the synchronization required for metre-level navigation is difficult to achieve and a different solution should be adopted. The potential of asynchronous pseudolite systems is demonstrated, and indoor navigation with metre-level accuracy is obtained using C/N0 measurements. In particular, the spectral characteristics of C/N0 measurements are investigated and used to design a pre-filtering stage which, in turn, is employed to remove high-frequency noise. Pre-filtering significantly improves the navigation performance in harsh indoor environments.\n
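The pre-filtering stage amounts to low-pass filtering the C/N0 time series before it enters the position solution; a zero-phase FIR sketch with an assumed cutoff follows (the paper chooses the cutoff from the measured C/N0 spectra).

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def prefilter_cn0(cn0, fs, cutoff_hz=0.1, numtaps=65):
    # Zero-phase low-pass filtering of C/N0 measurements to remove
    # high-frequency noise; cutoff_hz and numtaps are assumed values.
    h = firwin(numtaps, cutoff_hz, fs=fs)
    return filtfilt(h, [1.0], cn0)
```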
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A simplified QRS decision stage based on the DFT coefficients.\n \n \n \n \n\n\n \n Gorriz, J. M.; Ramírez, J.; Olivares, A.; Ilián, I. A.; Salas, D.; Puntonet, C. G.; and Padilla, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1522-1526, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952544,\n  author = {J. M. Gorriz and J. Ramírez and A. Olivares and I. A. Ilián and D. Salas and C. G. Puntonet and P. Padilla},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A simplified QRS decision stage based on the DFT coefficients},\n  year = {2014},\n  pages = {1522-1526},\n  abstract = {This paper shows an adaptive statistical test for QRS detection of ECG signals. The method is based on a M-ary generalized likelihood ratio test (LRT) defined over a multiple observation window in the Fourier domain. The previous algorithms based on maximum a posteriori (MAP) estimation result in high signal model complexity which i) makes them computationally unfeasible or not intended for real time applications such as intensive care monitoring and (ii) in which the parameter selection conditions the overall performance. A simplified model based on the independent Gaussian properties of the DFT coefficients is proposed. This model allows to define a simplified MAP probability function and to define an adaptive MAP statistical test in which a global hypothesis is defined on particular hypotheses of the multiple observation window. Moreover, the observation interval is modeled as a discontinuous transmission discrete-time stochastic process avoiding the inclusion of parameters that constraint the morphology of the QRS complexes.},\n  keywords = {electrocardiography;Fourier series;Gaussian processes;medical signal processing;simplified QRS decision stage;DFT coefficients;adaptive statistical test;QRS detection;ECG signals;M-ary generalized likelihood ratio test;LRT;Fourier domain;maximum a posteriori;MAP estimation;signal model complexity;parameter selection conditions;independent Gaussian properties;simplified MAP probability function;discrete-time stochastic process;Vectors;Abstracts;Computational modeling;Adaptation models;IIR filters;Estimation;Signal resolution;Electrocardiogram (ECG);QRS detection;M-ary Likelihood Ratio Test},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569909821.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents an adaptive statistical test for QRS detection in ECG signals. The method is based on an M-ary generalized likelihood ratio test (LRT) defined over a multiple observation window in the Fourier domain. Previous algorithms based on maximum a posteriori (MAP) estimation result in high signal-model complexity, which (i) makes them computationally unfeasible or unsuited to real-time applications such as intensive care monitoring, and (ii) makes the overall performance dependent on the parameter selection. A simplified model based on the independent Gaussian properties of the DFT coefficients is proposed. This model makes it possible to define a simplified MAP probability function and an adaptive MAP statistical test in which a global hypothesis is defined over particular hypotheses of the multiple observation window. Moreover, the observation interval is modeled as a discontinuous-transmission discrete-time stochastic process, avoiding the inclusion of parameters that constrain the morphology of the QRS complexes.\n
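Working directly on DFT coefficients keeps the per-window cost low. The fragment below is only a crude surrogate for the M-ary LRT: it scores a window by the fraction of spectral energy in an assumed QRS-dominant band.

```python
import numpy as np

def qrs_band_score(frame, fs, band=(5.0, 25.0)):
    # Fraction of windowed-DFT energy inside the QRS-dominant band;
    # the band edges are assumed values, and the statistic is a
    # simplified stand-in for the paper's adaptive MAP/LRT test.
    X = np.fft.rfft(frame * np.hanning(len(frame)))
    f = np.fft.rfftfreq(len(frame), 1.0 / fs)
    inband = (f >= band[0]) & (f <= band[1])
    return np.sum(np.abs(X[inband])**2) / (np.sum(np.abs(X)**2) + 1e-12)
```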
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Heart failure discrimination using matching pursuit decomposition.\n \n \n \n \n\n\n \n Lucena, F.; Yoshinori, T.; Barros, A. K.; and Ohnishi, N.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1527-1531, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"HeartPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952545,\n  author = {F. Lucena and T. Yoshinori and A. K. Barros and N. Ohnishi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Heart failure discrimination using matching pursuit decomposition},\n  year = {2014},\n  pages = {1527-1531},\n  abstract = {Congestive heart failure (CHF) is a cardiac disease associated with the decreases in cardiac output. As a measure to predict sudden death, we propose a framework for discriminating CHF subjects from normal sinus rhythm (NSR). This framework relies on matching pursuit decomposition to derive a set of features, which are tested in a hybrid genetic algorithm and k-nearest neighbor classifier to select the best feature subset. The performance of the proposed framework is analyzed using both Fantasia and CHF database from Physionet archives which are, respectively, composed of 40 NSR volunteers and 29 CHF subjects. The proposed methodology reaches an overall accuracy of 100% when the features are normalized and the feature subset selection strategy is applied. We believe that our method can be extremely useful to the clinician in primary health care as a support tool to discriminate healthy from CHF subjects.},\n  keywords = {cardiology;diseases;genetic algorithms;health care;iterative methods;time-frequency analysis;heart failure discrimination;matching pursuit decomposition;congestive heart failure;cardiac disease;sudden death prediction;normal sinus rhythm;hybrid genetic algorithm;k-nearest neighbor classifier;Fantasia database;CHF database;Physionet archives;NSR volunteers;CHF subjects;primary health care;Heart rate variability;Genetic algorithms;Matching pursuit algorithms;Time-frequency analysis;Resonant frequency;Accuracy},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924527.pdf},\n}\n\n
\n
\n\n\n
\n Congestive heart failure (CHF) is a cardiac disease associated with a decrease in cardiac output. As a step towards predicting sudden death, we propose a framework for discriminating CHF subjects from normal sinus rhythm (NSR) subjects. The framework relies on matching pursuit decomposition to derive a set of features, which are evaluated with a hybrid genetic algorithm and k-nearest neighbor classifier to select the best feature subset. The performance of the proposed framework is analyzed using both the Fantasia and the CHF databases from the PhysioNet archive, composed of 40 NSR volunteers and 29 CHF subjects, respectively. The proposed methodology reaches an overall accuracy of 100% when the features are normalized and the feature subset selection strategy is applied. We believe that our method can be extremely useful to the clinician in primary health care as a support tool for discriminating healthy from CHF subjects.\n
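Matching pursuit, the decomposition behind the feature extraction, is a short greedy loop; a generic sketch over a unit-norm dictionary follows (the dictionary contents and the stopping rule used in the paper are not reproduced).

```python
import numpy as np

def matching_pursuit(x, D, n_atoms=10):
    # D: (len(x), n_dict) dictionary with unit-norm columns.
    # Greedily pick the atom most correlated with the current residual.
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual   # features can then be derived from coeffs
```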
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Online seizure detection in adults with temporal lobe epilepsy using single-lead ECG.\n \n \n \n \n\n\n \n De Cooman, T.; Carrette, E.; Boon, P.; Meurs, A.; and Van Huffel, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1532-1536, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OnlinePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952546,\n  author = {T. {De Cooman} and E. Carrette and P. Boon and A. Meurs and S. {Van Huffel}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Online seizure detection in adults with temporal lobe epilepsy using single-lead ECG},\n  year = {2014},\n  pages = {1532-1536},\n  abstract = {In this paper, a patient-independent algorithm for online epileptic seizure detection using only single-lead ECG is proposed. It is tested on 300h of data from adults with temporal lobe epilepsy. The features are extracted from a period of linear increase of the heart rate, which typically occurs in this kind of patients. These features are classified by two different classifiers: linear support vector machine (LSVM) and linear discriminant analysis (LDA). The best performance is found for LDA with a sensitivity of 80.0%, a PPV of 40.5% and an average detection delay of 31.5s, which are satisfactory results for online usage in monitoring or warning systems.},\n  keywords = {diseases;electrocardiography;feature extraction;patient monitoring;patient treatment;support vector machines;online epileptic seizure detection;adults;temporal lobe epilepsy;single-lead ECG;patient-independent algorithm;feature extraction;heart rate;patients;linear support vector machine;LSVM;linear discriminant analysis;LDA;monitoring system;warning system;Heart rate;Electrocardiography;Feature extraction;Epilepsy;Delays;Training;Sensitivity;Temporal lobe epilepsy;online seizure detection;ECG;LSVM;LDA},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924553.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a patient-independent algorithm for online epileptic seizure detection using only single-lead ECG is proposed. It is tested on 300 hours of data from adults with temporal lobe epilepsy. The features are extracted from a period of linear increase in the heart rate that typically occurs in these patients. The features are classified by two different classifiers: a linear support vector machine (LSVM) and linear discriminant analysis (LDA). The best performance is found for LDA, with a sensitivity of 80.0%, a PPV of 40.5% and an average detection delay of 31.5 s, which are satisfactory results for online use in monitoring or warning systems.\n
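On the classification side, LDA over a handful of heart-rate features is a one-liner with scikit-learn; the toy feature values below (slope and duration of the heart-rate increase) are invented for illustration and are not the paper's feature set.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical per-event features: [HR slope (bpm/s), duration (s)].
X_train = np.array([[0.2, 10.0], [0.3, 12.0], [1.5, 30.0], [1.8, 25.0]])
y_train = np.array([0, 0, 1, 1])               # 0 = non-seizure, 1 = seizure

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print(clf.predict([[1.6, 28.0]]))              # -> [1]
```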
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Grouped sparsity algorithm for multichannel intracardiac ECG synchronization.\n \n \n \n \n\n\n \n Trigano, T.; Kolesnikov, V.; Luengo, D.; and Artés-Rodríguez, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1537-1541, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"GroupedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952547,\n  author = {T. Trigano and V. Kolesnikov and D. Luengo and A. Artés-Rodríguez},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Grouped sparsity algorithm for multichannel intracardiac ECG synchronization},\n  year = {2014},\n  pages = {1537-1541},\n  abstract = {In this paper, a new method is presented to ensure automatic synchronization of intracardiac ECG data, yielding a three-stage algorithm. We first compute a robust estimate of the derivative of the data to remove low-frequency perturbations. Then we provide a grouped-sparse representation of the data, by means of the Group LASSO, to ensure that all the electrical spikes are simultaneously detected. Finally, a post-processing step, based on a variance analysis, is performed to discard false alarms. Preliminary results on real data for sinus rhythm and atrial fibrillation show the potential of this approach.},\n  keywords = {electrocardiography;medical signal processing;regression analysis;signal representation;synchronisation;grouped sparse representation algorithm;multichannel intracardiac ECG synchronization;automatic synchronization;three-stage algorithm;low-frequency perturbations;group LASSO;electrical spikes;post-processing step;variance analysis;sinus rhythm;atrial fibrillation;Electrocardiography;Dictionaries;Robustness;Estimation;Shape;Noise;Rhythm;electrocardiography;multi-channel signal processing;sparse inference;group LASSO},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925111.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a new method is presented to ensure automatic synchronization of intracardiac ECG data, yielding a three-stage algorithm. We first compute a robust estimate of the derivative of the data to remove low-frequency perturbations. Then we provide a grouped-sparse representation of the data, by means of the Group LASSO, to ensure that all the electrical spikes are simultaneously detected. Finally, a post-processing step, based on a variance analysis, is performed to discard false alarms. Preliminary results on real data for sinus rhythm and atrial fibrillation show the potential of this approach.\n
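The Group LASSO stage encourages coefficients to switch on and off jointly, so that a spike is detected across all channels at once. A generic proximal-gradient (ISTA) solver for the group-LASSO objective 0.5*||y - Ax||^2 + lam * sum_g ||x_g||_2 is sketched below; it is a standard solver, not necessarily the one used in the paper.

```python
import numpy as np

def group_soft_threshold(v, t):
    # Proximal operator of t*||.||_2 on one group: shrink the whole
    # group toward zero, zeroing it entirely when its norm <= t.
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n <= t else (1.0 - t / n) * v

def group_lasso_ista(A, y, groups, lam, step, n_iter=200):
    # groups: list of index arrays, assumed to partition the columns of A.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))     # gradient step on the data term
        for idx in groups:                     # groupwise shrinkage step
            x[idx] = group_soft_threshold(z[idx], step * lam)
    return x
```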
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic classification of heartbeats.\n \n \n \n \n\n\n \n Basil, T.; and Lakshminarayan, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1542-1546, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952548,\n  author = {T. Basil and C. Lakshminarayan},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic classification of heartbeats},\n  year = {2014},\n  pages = {1542-1546},\n  abstract = {We report improvement in the detection of a class of heart arrhythmias based on electrocardiogram signals (ECG). The detection is performed using a 4 dimensional feature vector obtained by applying an iterative feature selection method used in conjunction with artificial neural networks. The feature set includes the pre-RR interval, which is a primary measure that cardiologists use in a clinical setting. A transformation applied to the pre-RR interval reduced the false positive rate. Our solution as opposed to existing literature does not rely on high-dimensional features such as wavelets, signal amplitudes which do not have direct relationship to heart function and difficult to interpret. Also we avoid obtaining patient specific labeled recordings. Furthermore, we propose semi-parametric classifiers as opposed to restrictive parametric linear discriminant analysis and its variants, which are a mainstay in ECG classification. Extensive experiments from the MIT-BIH databases demonstrate superior performance by our methods.},\n  keywords = {electrocardiography;feature selection;iterative methods;medical disorders;medical signal detection;medical signal processing;neural nets;signal classification;automatic heartbeat classification;heart arrhythmias;electrocardiogram signals;ECG;4 dimensional feature vector;iterative feature selection method;artificial neural networks;pre-RR interval;false positive rate;semiparametric classifiers;MIT-BIH databases;Heart beat;Electrocardiography;Artificial neural networks;Feature extraction;Morphology;Support vector machine classification;ECG;Classification;False positives;Discriminant analysis;Artificial neural networks},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925387.pdf},\n}\n\n
\n
\n\n\n
\n We report an improvement in the detection of a class of heart arrhythmias based on electrocardiogram (ECG) signals. The detection is performed using a four-dimensional feature vector obtained by applying an iterative feature selection method in conjunction with artificial neural networks. The feature set includes the pre-RR interval, a primary measure that cardiologists use in a clinical setting. A transformation applied to the pre-RR interval reduced the false positive rate. Unlike the existing literature, our solution does not rely on high-dimensional features such as wavelets or signal amplitudes, which have no direct relationship to heart function and are difficult to interpret. We also avoid the need for patient-specific labeled recordings. Furthermore, we propose semi-parametric classifiers as opposed to restrictive parametric linear discriminant analysis and its variants, which are a mainstay in ECG classification. Extensive experiments on the MIT-BIH databases demonstrate the superior performance of our methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Comparison of different representations based on nonlinear features for music genre classification.\n \n \n \n \n\n\n \n Zlatintsi, A.; and Maragos, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1547-1551, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ComparisonPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952549,\n  author = {A. Zlatintsi and P. Maragos},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Comparison of different representations based on nonlinear features for music genre classification},\n  year = {2014},\n  pages = {1547-1551},\n  abstract = {In this paper, we examine the descriptiveness and recognition properties of different feature representations for the analysis of musical signals, aiming in the exploration of their microand macro-structures, for the task of music genre classification. We explore nonlinear methods, such as the AM-FM model and ideas from fractal theory, so as to model the time-varying harmonic structure of musical signals and the geometrical complexity of the music waveform. The different feature representations' efficacy is compared regarding their recognition properties for the specific task. The proposed features are evaluated against and in combination with Mel frequency cepstral coefficients (MFCC), using both static and dynamic classifiers, accomplishing an error reduction of 28%, illustrating that they can capture important aspects of music.},\n  keywords = {acoustic signal processing;music;signal classification;signal representation;nonlinear features;music genre classification;recognition properties;feature representation;musical signals;nonlinear method;AM-FM model;fractal theory;time-varying harmonic structure;Mel frequency cepstral coefficients;MFCC;static classifier;dynamic classifier;error reduction;Multiple signal classification;Hidden Markov models;Feature extraction;Accuracy;Fractals;Principal component analysis;Modulation;Music genre classification;AM-FM model;energy separation algorithm;fractals;Bag-of-Words},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925539.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we examine the descriptiveness and recognition properties of different feature representations for the analysis of musical signals, aiming at the exploration of their micro- and macro-structures, for the task of music genre classification. We explore nonlinear methods, such as the AM-FM model and ideas from fractal theory, so as to model the time-varying harmonic structure of musical signals and the geometrical complexity of the music waveform. The efficacy of the different feature representations is compared with regard to their recognition properties for this task. The proposed features are evaluated against and in combination with Mel frequency cepstral coefficients (MFCC), using both static and dynamic classifiers, accomplishing an error reduction of 28% and illustrating that they can capture important aspects of music.\n
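The AM-FM side of such feature sets rests on the Teager-Kaiser energy operator and energy separation; a compact DESA-2 sketch for instantaneous frequency and amplitude envelope follows. This is the standard textbook formulation (with the usual caveats where the Teager energy approaches zero), not necessarily the exact variant used in the paper.

```python
import numpy as np

def teager(x):
    # Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    return x[1:-1]**2 - x[:-2] * x[2:]

def desa2(x, fs):
    # DESA-2 energy separation: instantaneous frequency (Hz) and amplitude
    # envelope from the Teager energies of x and its symmetric difference.
    psi_x = teager(x)[1:-1]
    psi_y = teager(x[2:] - x[:-2])             # y[n] = x[n+1] - x[n-1]
    omega = 0.5 * np.arccos(np.clip(1.0 - psi_y / (2.0 * psi_x + 1e-12), -1.0, 1.0))
    amp = 2.0 * psi_x / (np.sqrt(np.abs(psi_y)) + 1e-12)
    return omega * fs / (2.0 * np.pi), amp
```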
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Emotion classification of speech using modulation features.\n \n \n \n \n\n\n \n Chaspari, T.; Dimitriadis, D.; and Maragos, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1552-1556, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"EmotionPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952550,\n  author = {T. Chaspari and D. Dimitriadis and P. Maragos},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Emotion classification of speech using modulation features},\n  year = {2014},\n  pages = {1552-1556},\n  abstract = {Automatic classification of a speaker's affective state is one of the major challenges in signal processing community, since it can improve Human-Computer interaction and give insights into the nature of emotions from psychology perspective. The amplitude and frequency control of sound production influences strongly the affective voice content. In this paper, we take advantage of the inherent speech modulations and propose the use of instant amplitude- and frequency-derived features for efficient emotion recognition. Our results indicate that these features can further increase the performance of the widely-used spectral-prosodic information, achieving improvements on two emotional databases, the Berlin Database of Emotional Speech and the recently collected Athens Emotional States Inventory.},\n  keywords = {emotion recognition;human computer interaction;speaker recognition;speech processing;speech emotion classification;modulation features;speaker classification;signal processing;human-computer interaction;amplitude control;frequency control;sound production;emotion recognition;Berlin database of emotional speech;Athens emotional states inventory;Speech;Speech recognition;Frequency modulation;Emotion recognition;Databases;Feature extraction;Emotion classification;AM-FM features;speech analysis;human-computer interaction},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924745.pdf},\n}\n\n
\n
\n\n\n
\n Automatic classification of a speaker's affective state is one of the major challenges in the signal processing community, since it can improve human-computer interaction and give insights into the nature of emotions from a psychology perspective. The amplitude and frequency control of sound production strongly influences the affective voice content. In this paper, we take advantage of the inherent speech modulations and propose the use of instantaneous amplitude- and frequency-derived features for efficient emotion recognition. Our results indicate that these features can further increase the performance of the widely used spectral-prosodic information, achieving improvements on two emotional databases, the Berlin Database of Emotional Speech and the recently collected Athens Emotional States Inventory.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust pitch estimation using an optimal filter on frequency estimates.\n \n \n \n \n\n\n \n Karimian-Azari, S.; Jensen, J. R.; and Christensen, M. G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1557-1561, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952551,\n  author = {S. Karimian-Azari and J. R. Jensen and M. G. Christensen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Robust pitch estimation using an optimal filter on frequency estimates},\n  year = {2014},\n  pages = {1557-1561},\n  abstract = {In many scenarios, a periodic signal of interest is often contaminated by different types of noise, that may render many existing pitch estimation methods suboptimal, e.g., due to an incorrect white Gaussian noise assumption. In this paper, a method is established to estimate the pitch of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs considering the constraint of integer harmonics. The MVDR filter is designed based on noise statistics making it robust against different noise situations. The simulation results confirm that the proposed MVDR method outperforms the state-of-the-art weighted least squares (WLS) pitch estimator in colored noise and has robust pitch estimates against missing harmonics in some time-frames.},\n  keywords = {filtering theory;frequency estimation;least squares approximations;robust pitch estimation;optimal filter;frequency estimates;suboptimal pitch estimation method;incorrect white Gaussian noise assumption;signal pitch estimation;unconstrained frequency estimates;minimum variance distortionless response method;MVDR method;UFE variance minimization;integer harmonics;MVDR filter;noise statistics;weighted least square pitch estimator;WLS pitch estimator;colored noise;Harmonic analysis;Frequency estimation;Maximum likelihood estimation;Gaussian noise;Colored noise;Audio signal;harmonic model;pitch estimation;minimum variance distortionless response (MVDR);maximum likelihood (ML)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925075.pdf},\n}\n\n
\n
\n\n\n
\n In many scenarios, a periodic signal of interest is contaminated by different types of noise that may render many existing pitch estimation methods suboptimal, e.g., due to an incorrect white Gaussian noise assumption. In this paper, a method is established to estimate the pitch of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution that minimizes the variance of the UFEs subject to the constraint of integer harmonics. The MVDR filter is designed from noise statistics, making it robust in different noise situations. The simulation results confirm that the proposed MVDR method outperforms the state-of-the-art weighted least squares (WLS) pitch estimator in colored noise and yields pitch estimates that are robust against missing harmonics in some time frames.\n
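The heart of the method is fusing the per-harmonic UFEs under the constraint f_l = l*f0. A minimal sketch is the covariance-weighted (BLUE-style) fusion below, which is in the spirit of the MVDR construction though not its exact filter design.

```python
import numpy as np

def pitch_from_ufes(ufe_hz, C):
    # ufe_hz: unconstrained frequency estimates for harmonics 1..L (Hz);
    # C: their covariance matrix, derived from noise statistics.
    L = len(ufe_hz)
    a = np.arange(1, L + 1, dtype=float)       # harmonic numbers
    Ci = np.linalg.inv(C)
    return (a @ Ci @ ufe_hz) / (a @ Ci @ a)    # weighted fit of f_l = l*f0
```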
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the modeling of natural vocal emotion expressions through binary key.\n \n \n \n \n\n\n \n Luque, J.; and Anguera, X.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1562-1566, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952552,\n  author = {J. Luque and X. Anguera},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On the modeling of natural vocal emotion expressions through binary key},\n  year = {2014},\n  pages = {1562-1566},\n  abstract = {This work presents a novel method to estimate natural expressed emotions in speech through binary acoustic modeling. Standard acoustic features are mapped to a binary value representation and a support vector regression model is used to correlate them with the three-continuous emotional dimensions. Three different sets of speech features, two based on spectral parameters and one on prosody are compared on the VAM corpus, a set of spontaneous dialogues from a German TV talk-show. The regression analysis, in terms of correlation coefficient and mean absolute error, show that the binary key modeling is able to successfully capture speaker emotion characteristics. The proposed algorithm obtains comparable results to those reported on the literature while it relies on a much smaller set of acoustic descriptors. Furthermore, we also report on preliminary results based on the combination of the binary models, which brings further performance improvements.},\n  keywords = {acoustic signal processing;emotion recognition;regression analysis;speech recognition;support vector machines;natural vocal emotion expression modelling;binary acoustic modeling;standard acoustic feature mapping;binary value representation;support vector regression model;three-continuous emotional dimensions;speech features;spectral parameters;VAM corpus;spontaneous dialogues;German TV talk-show;correlation coefficient;mean absolute error;binary key modeling;speaker emotion characteristics;acoustic descriptors;Speech;Acoustics;Feature extraction;Vectors;Emotion recognition;Speech recognition;Training;Emotion modeling;binary fingerprint;VAM corpus;dimensional emotions},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925531.pdf},\n}\n\n
\n
\n\n\n
\n This work presents a novel method to estimate naturally expressed emotions in speech through binary acoustic modeling. Standard acoustic features are mapped to a binary value representation, and a support vector regression model is used to correlate them with the three continuous emotional dimensions. Three different sets of speech features, two based on spectral parameters and one on prosody, are compared on the VAM corpus, a set of spontaneous dialogues from a German TV talk-show. The regression analysis, in terms of correlation coefficient and mean absolute error, shows that the binary key modeling is able to successfully capture speaker emotion characteristics. The proposed algorithm obtains results comparable to those reported in the literature while relying on a much smaller set of acoustic descriptors. Furthermore, we also report preliminary results based on the combination of the binary models, which brings further performance improvements.\n
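A toy version of the two ingredients described here, binarising acoustic features and regressing a continuous emotion dimension with support vector regression, might look like this (the features, the median-thresholding rule and the data are stand-ins; the paper's binary-key construction is more elaborate):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))            # stand-in acoustic features per utterance
y = X[:, :3].mean(axis=1) + 0.1 * rng.normal(size=200)  # one emotion dimension

# Binary value representation: threshold each feature at its training median
thr = np.median(X, axis=0)
B = (X > thr).astype(float)

svr = SVR(kernel="rbf").fit(B[:150], y[:150])
pred = svr.predict(B[150:])
print(np.corrcoef(pred, y[150:])[0, 1])   # correlation coefficient, as in the evaluation
```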
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast music information retrieval with indirect matching.\n \n \n \n \n\n\n \n Hayashi, T.; Ishii, N.; and Yamaguchi, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1567-1571, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"FastPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952573,\n  author = {T. Hayashi and N. Ishii and M. Yamaguchi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Fast music information retrieval with indirect matching},\n  year = {2014},\n  pages = {1567-1571},\n  abstract = {This paper presents a fast content-based music information retrieval method. The high computational cost of similarity evaluation based on musical features between a pair of music clips is a crucial problem especially for searching large music database. To reduce the computational time in similarity evaluation process, the proposed method adopts an approach called indirect matching. In the approach, a small number of music clips called representative queries, which are randomly selected from a database, are used for fast computation. As an offline process, the similarities of each music clip in the database to the representative queries are recorded as a similarity table. In the online phase, the similarity between the actual query (the music clip given by a user) and each music clip in the database is quickly estimated by referring the similarity table. Experimental results have shown that the execution time of retrieval can be greatly reduced by the indirect matching without much deterioration of retrieval accuracy.},\n  keywords = {content-based retrieval;music;pattern matching;query processing;indirect matching approach;fast content-based music information retrieval method;high computational cost;music clip pair;music database;similarity evaluation process;computational time reduction;representative queries;similarity table;Databases;Accuracy;Music information retrieval;Vectors;Feature extraction;Computational efficiency;Silicon carbide;Music information retrieval;content-based retrieval;similarity evaluation of music;indirectmatching},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925059.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a fast content-based music information retrieval method. The high computational cost of evaluating the similarity between a pair of music clips based on musical features is a crucial problem, especially when searching large music databases. To reduce the computational time of the similarity evaluation process, the proposed method adopts an approach called indirect matching. In this approach, a small number of music clips called representative queries, randomly selected from the database, are used for fast computation. As an offline process, the similarities of each music clip in the database to the representative queries are recorded in a similarity table. In the online phase, the similarity between the actual query (the music clip given by a user) and each music clip in the database is quickly estimated by referring to the similarity table. Experimental results show that the execution time of retrieval can be greatly reduced by indirect matching without much deterioration of retrieval accuracy.\n
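The indirect-matching scheme can be sketched directly: offline, each database clip is described by its similarity vector to a few representative queries; online, only the query's similarities to those representatives are computed, and the table approximates the rest (the cosine comparison of similarity profiles is an assumed estimate, not necessarily the paper's):

```python
import numpy as np

def build_table(clips, reps, sim):
    """Offline: similarity of every database clip to each representative query."""
    return np.array([[sim(c, r) for r in reps] for c in clips])

def indirect_search(query, reps, table, sim, k=5):
    """Online: compare the query's similarity profile against the stored table."""
    q = np.array([sim(query, r) for r in reps])          # only |reps| direct evaluations
    scores = table @ q / (np.linalg.norm(table, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-scores)[:k]                        # indices of best-matching clips

# Toy demo with feature vectors and a cosine similarity
rng = np.random.default_rng(1)
clips = rng.normal(size=(1000, 32))
reps = clips[rng.choice(1000, size=8, replace=False)]
sim = lambda a, b: float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
table = build_table(clips, reps, sim)
print(indirect_search(clips[42], reps, table, sim))       # clip 42 should rank first
```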
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hyper-spectral image analysis with partially-latent regression.\n \n \n \n \n\n\n \n Deleforge, A.; Forbes, F.; and Horaud, R.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1572-1576, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Hyper-spectralPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952574,\n  author = {A. Deleforge and F. Forbes and R. Horaud},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Hyper-spectral image analysis with partially-latent regression},\n  year = {2014},\n  pages = {1572-1576},\n  abstract = {The analysis of hyper-spectral images is often needed to recover physical properties of planets. To address this inverse problem, the use of learning methods have been considered with the advantage that, once a relationship between physical parameters and spectra has been established through training, the learnt relationship can be used to estimate parameters from new images underpinned by the same physical model. Within this framework, we propose a partially-latent regression method which maps high-dimensional inputs (spectral images) onto low-dimensional responses (physical parameters). We introduce a novel regression method that combines a Gaussian mixture of locally-linear mappings with a partially-latent variable model. While the former makes high-dimensional regression tractable, the latter enables to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. The method is illustrated on images collected from the Mars planet.},\n  keywords = {geophysical image processing;hyperspectral imaging;Mars;regression analysis;hyper-spectral image analysis;novel partially-latent regression method;learning methods;Gaussian mixture;Mars planet;Mars;Ice;Training;Maximum likelihood estimation;Kernel;Databases;Hyper-spectral images;Regression;Dimension reduction;Mixture models;Latent variable model},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923195.pdf},\n}\n\n
\n
\n\n\n
\n The analysis of hyper-spectral images is often needed to recover the physical properties of planets. To address this inverse problem, the use of learning methods has been considered, with the advantage that, once a relationship between physical parameters and spectra has been established through training, the learnt relationship can be used to estimate parameters from new images underpinned by the same physical model. Within this framework, we propose a partially-latent regression method which maps high-dimensional inputs (spectral images) onto low-dimensional responses (physical parameters). We introduce a novel regression method that combines a Gaussian mixture of locally-linear mappings with a partially-latent variable model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. The method is illustrated on images collected from the planet Mars.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fusion of multispectral and hyperspectral images based on sparse representation.\n \n \n \n \n\n\n \n Wei, Q.; Bioucas-Dias, J. M.; Dobigeon, N.; and Tourneret, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1577-1581, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"FusionPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952575,\n  author = {Q. Wei and J. M. Bioucas-Dias and N. Dobigeon and J. Tourneret},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Fusion of multispectral and hyperspectral images based on sparse representation},\n  year = {2014},\n  pages = {1577-1581},\n  abstract = {This paper presents an algorithm based on sparse representation for fusing hyperspectral and multispectral images. The observed images are assumed to be obtained by spectral or spatial degradations of the high resolution hyperspectral image to be recovered. Based on this forward model, the fusion process is formulated as an inverse problem whose solution is determined by optimizing an appropriate criterion. To incorporate additional spatial information within the objective criterion, a regularization term is carefully designed, relying on a sparse decomposition of the scene on a set of dictionaries. The dictionaries and the corresponding supports of active coding coefficients are learned from the observed images. Then, conditionally on these dictionaries and supports, the fusion problem is solved by iteratively optimizing with respect to the target image (using the alternating direction method of multipliers) and the coding coefficients. Simulation results demonstrate the efficiency of the proposed fusion method when compared with the state-of-the-art.},\n  keywords = {decomposition;dictionaries;geophysical image processing;hyperspectral imaging;image coding;image fusion;image representation;image resolution;iterative methods;hyperspectral image fusion;multispectral image fusion;sparse image representation;hyperspectral image resolution;inverse problem;regularization term;sparse decomposition;dictionary;image coding;iterative optimization;alternating multiplier direction method;Dictionaries;Optimization;Hyperspectral imaging;Image resolution;Bayes methods;Image fusion;hyperspectral image;multispectral image;sparse representation;alternating direction method of multipliers (ADMM)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922355.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents an algorithm based on sparse representation for fusing hyperspectral and multispectral images. The observed images are assumed to be obtained by spectral or spatial degradations of the high resolution hyperspectral image to be recovered. Based on this forward model, the fusion process is formulated as an inverse problem whose solution is determined by optimizing an appropriate criterion. To incorporate additional spatial information within the objective criterion, a regularization term is carefully designed, relying on a sparse decomposition of the scene over a set of dictionaries. The dictionaries and the corresponding supports of active coding coefficients are learned from the observed images. Then, conditionally on these dictionaries and supports, the fusion problem is solved by iteratively optimizing with respect to the target image (using the alternating direction method of multipliers) and the coding coefficients. Simulation results demonstrate the efficiency of the proposed fusion method when compared with the state-of-the-art.\n
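A heavily simplified sketch of this alternating scheme on stand-in linear operators (random degradation operators, a random dictionary, a full coefficient support, and an exact solve in place of the paper's ADMM iterations are all simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, mh, mm, k = 64, 16, 64, 8                 # target, HS, MS sizes; dictionary atoms
x_true = rng.normal(size=n)
S = rng.normal(size=(mh, n)) / np.sqrt(n)    # stand-in spectral/spatial degradation
R = rng.normal(size=(mm, n)) / np.sqrt(n)    # stand-in multispectral response
D = rng.normal(size=(n, k))                  # dictionary (the paper learns it)
y_h, y_m = S @ x_true, R @ x_true            # observed degraded images

lam, alpha = 0.5, np.zeros(k)
for _ in range(50):
    # Target-image step: quadratic criterion, solved exactly here (ADMM in the paper)
    A = S.T @ S + R.T @ R + lam * np.eye(n)
    x = np.linalg.solve(A, S.T @ y_h + R.T @ y_m + lam * D @ alpha)
    # Coding-coefficient step: least squares on the (here: full) support
    alpha = np.linalg.lstsq(D, x, rcond=None)[0]
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # small relative error
```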
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust minimum volume simplex analysis for hyperspectral unmixing.\n \n \n \n \n\n\n \n Agathos, A.; Li, J.; Bioucas-Dias, J. M.; and Plaza, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1582-1586, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952576,\n  author = {A. Agathos and J. Li and J. M. Bioucas-Dias and A. Plaza},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Robust minimum volume simplex analysis for hyperspectral unmixing},\n  year = {2014},\n  pages = {1582-1586},\n  abstract = {Most blind hyperspectral unmixing methods exploit convex geometry properties of hyperspectral data. The minimum volume simplex analysis (MVSA) is one of such methods which, as many others, estimates the minimum volume (MV) simplex where the measured vectors live. MVSA was conceived to circumvent the matrix factorization step often implemented by MV based algorithms and also to cope with outliers, which compromise the results produced by MV algorithms. Inspired by the recently proposed robust minimum volume estimation (RMVES) algorithm, we herein introduce the robust MVSA (RMVSA), which is a version of MVSA robust to noise. As in RMVES, the robustness is achieved by employing chance constraints, which control the volume of the resulting simplex. RMVSA differs, however, substantially from RMVES in the way optimization is carried out. The effectiveness of RVMSA is illustrated by comparing its performance in simulated data with the state-of-the-art.},\n  keywords = {convex programming;geophysical image processing;hyperspectral imaging;robust minimum volume simplex analysis;blind hyperspectral unmixing methods;convex geometry properties;hyperspectral data;MVSA;matrix factorization;robust minimum volume estimation;RMVES algorithm;Hyperspectral imaging;Robustness;Vectors;Signal processing algorithms;Noise;Hyperspectral imaging;spectral unmixing;endmember identification;minimum volume simplex analysis (MVSA);chance constraints},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925327.pdf},\n}\n\n
\n
\n\n\n
\n Most blind hyperspectral unmixing methods exploit the convex geometry properties of hyperspectral data. The minimum volume simplex analysis (MVSA) is one such method which, like many others, estimates the minimum volume (MV) simplex where the measured vectors live. MVSA was conceived to circumvent the matrix factorization step often implemented by MV based algorithms and also to cope with outliers, which compromise the results produced by MV algorithms. Inspired by the recently proposed robust minimum volume estimation (RMVES) algorithm, we herein introduce the robust MVSA (RMVSA), which is a version of MVSA robust to noise. As in RMVES, the robustness is achieved by employing chance constraints, which control the volume of the resulting simplex. RMVSA differs, however, substantially from RMVES in the way optimization is carried out. The effectiveness of RMVSA is illustrated by comparing its performance on simulated data with the state-of-the-art.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A stochastic 3MG algorithm with application to 2D filter identification.\n \n \n \n \n\n\n \n Chouzenoux, E.; Pesquet, J.; and Florescu, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1587-1591, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952577,\n  author = {E. Chouzenoux and J. Pesquet and A. Florescu},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A stochastic 3MG algorithm with application to 2D filter identification},\n  year = {2014},\n  pages = {1587-1591},\n  abstract = {Stochastic optimization plays an important role in solving many problems encountered in machine learning or adaptive processing. In this context, the second-order statistics of the data are often unknown a priori or their direct computation is too intensive, and they have to be estimated on-line from the related signals. In the context of batch optimization of an objective function being the sum of a data fidelity term and a penalization (e.g. a sparsity promoting function), Majorize-Minimize (MM) subspace methods have recently attracted much interest since they are fast, highly flexible and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case when the cost function is replaced by a sequence of stochastic approximations of it. Simulation results illustrate the good practical performance of the proposed MM Memory Gradient (3MG) algorithm when applied to 2D filter identification.},\n  keywords = {approximation theory;filtering theory;higher order statistics;optimisation;stochastic 3MG algorithm;2D filter identification;machine learning;adaptive processing;second-order statistics;batch optimization;majorize-minimize subspace methods;MM memory gradient algorithm;data fidelity term;stochastic approximations;Optimization;Signal processing algorithms;Kernel;Convergence;Context;Approximation methods;Algorithm design and analysis;stochastic approximation;optimization;subspace algorithms;memory gradient methods;descent methods;recursive algorithms;majorization-minimization;filter identification;Newton method;sparsity;machine learning;adaptive filtering},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925521.pdf},\n}\n\n
\n
\n\n\n
\n Stochastic optimization plays an important role in solving many problems encountered in machine learning or adaptive processing. In this context, the second-order statistics of the data are often unknown a priori, or their direct computation is too intensive, and they have to be estimated on-line from the related signals. In the context of batch optimization of an objective function that is the sum of a data fidelity term and a penalization (e.g. a sparsity promoting function), Majorize-Minimize (MM) subspace methods have recently attracted much interest since they are fast, highly flexible and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case where the cost function is replaced by a sequence of stochastic approximations of it. Simulation results illustrate the good practical performance of the proposed MM Memory Gradient (3MG) algorithm when applied to 2D filter identification.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Total Variation denoising using iterated conditional expectation.\n \n \n \n \n\n\n \n Louchet, C.; and Moisan, L.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1592-1596, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"TotalPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952578,\n  author = {C. Louchet and L. Moisan},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Total Variation denoising using iterated conditional expectation},\n  year = {2014},\n  pages = {1592-1596},\n  abstract = {We propose a new variant of the celebrated Total Variation image denoising model of Rudin, Osher and Fatemi, which provides results very similar to the Bayesian posterior mean variant (TV-LSE) while showing a much better computational efficiency. This variant is based on an iterative procedure which is proved to converge linearly to a fixed point satisfying a marginal conditional mean property. The implementation is simple, provided numerical precision issues are correctly handled. Experiments show that the proposed variant yields results that are very close to those obtained with TV-LSE and avoids as well the so-called staircasing artifact observed with classical Total Variation denoising.},\n  keywords = {image denoising;iterative methods;iterated conditional expectation;total variation image denoising model;Bayesian posterior mean variant;TV-LSE;iterative procedure;marginal conditional mean property;staircasing artifact;fixed point;Convergence;Computational modeling;Noise reduction;Mathematical model;TV;Ice;Noise;image denoising;total variation;posterior mean;marginal conditional mean;staircasing effect},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925593.pdf},\n}\n\n
\n
\n\n\n
\n We propose a new variant of the celebrated Total Variation image denoising model of Rudin, Osher and Fatemi, which provides results very similar to the Bayesian posterior mean variant (TV-LSE) while showing much better computational efficiency. This variant is based on an iterative procedure which is proved to converge linearly to a fixed point satisfying a marginal conditional mean property. The implementation is simple, provided numerical precision issues are correctly handled. Experiments show that the proposed variant yields results that are very close to those obtained with TV-LSE and also avoids the so-called staircasing artifact observed with classical Total Variation denoising.\n
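On a 1D signal, the iterated conditional expectation can be sketched per pixel: holding its neighbours fixed, each pixel is replaced by the mean of its one-dimensional conditional posterior under the TV prior, evaluated here by brute-force quadrature (the 1D setting, the parameter values and the quadrature grid are simplifying assumptions):

```python
import numpy as np

def ice_tv_denoise_1d(y, lam=10.0, sigma=0.1, iters=50):
    """Fixed-point iteration: x_i <- E[x_i | x_neighbours, y_i] under a TV prior."""
    grid = np.linspace(-2.0, 3.0, 501)              # quadrature grid for one pixel
    x = y.copy()
    for _ in range(iters):
        for i in range(len(x)):
            tv = np.zeros_like(grid)
            if i > 0:
                tv += np.abs(grid - x[i - 1])
            if i < len(x) - 1:
                tv += np.abs(grid - x[i + 1])
            logp = -(grid - y[i]) ** 2 / (2 * sigma ** 2) - lam * tv
            w = np.exp(logp - logp.max())           # stabilised posterior weights
            x[i] = (grid * w).sum() / w.sum()       # marginal conditional mean
    return x

y = np.r_[np.zeros(20), np.ones(20)] + 0.1 * np.random.default_rng(3).normal(size=40)
print(np.round(ice_tv_denoise_1d(y), 2))           # denoised step, no staircasing blow-up
```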
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Small-variance asymptotics of hidden Potts-MRFS: Application to fast Bayesian image segmentation.\n \n \n \n \n\n\n \n Pereyra, M.; and McLaughliny, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1597-1601, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n Paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952579,\n  author = {M. Pereyra and S. McLaughliny},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Small-variance asymptotics of hidden Potts-MRFS: Application to fast Bayesian image segmentation},\n  year = {2014},\n  pages = {1597-1601},\n  abstract = {This paper presents a new approximate Bayesian estimator for hidden Potts-Markov random fields, with application to fast K-class image segmentation. The estimator is derived by conducting a small-variance-asymptotic analysis of an augmented Bayesian model in which the spatial regularisation and the integer-constrained terms of the Potts model are decoupled. This leads to a new image segmentation methodology that can be efficiently implemented in large 2D and 3D scenarios by using modern convex optimisation techniques. Experimental results on synthetic and real images as well as comparisons with state-of-the-art algorithms confirm that the proposed methodology converges extremely fast and produces accurate segmentation results in only few iterations.},\n  keywords = {Bayes methods;convex programming;hidden Markov models;image segmentation;MRFS;hidden Potts-Markov random fields;small-variance-asymptotic analysis;augmented Bayesian model;spatial regularisation;integer-constrained terms;image segmentation methodology;convex optimisation techniques;2D scenarios;3D scenarios;synthetic images;real images;Image segmentation;Hidden Markov models;Bayes methods;Optimization;Educational institutions;Three-dimensional displays;Approximation methods;Image segmentation;Bayesian methods;spatial mixture models;Potts Markov random field;convex optimisation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922431.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a new approximate Bayesian estimator for hidden Potts-Markov random fields, with application to fast K-class image segmentation. The estimator is derived by conducting a small-variance-asymptotic analysis of an augmented Bayesian model in which the spatial regularisation and the integer-constrained terms of the Potts model are decoupled. This leads to a new image segmentation methodology that can be efficiently implemented in large 2D and 3D scenarios by using modern convex optimisation techniques. Experimental results on synthetic and real images, as well as comparisons with state-of-the-art algorithms, confirm that the proposed methodology converges extremely fast and produces accurate segmentation results in only a few iterations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n CG-TMO: A local tone mapping for computer graphics generated content.\n \n \n \n \n\n\n \n Banterle, F.; Artusi, A.; Banterle, P. F.; and Scopigno, R.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1602-1606, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"CG-TMO:Paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952580,\n  author = {F. Banterle and A. Artusi and P. F. Banterle and R. Scopigno},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {CG-TMO: A local tone mapping for computer graphics generated content},\n  year = {2014},\n  pages = {1602-1606},\n  abstract = {Physically based renderers produce high quality images with high dynamic range (HDR) values. Therefore, these images need to be tone mapped in order to be displayed on low dynamic range (LDR) displays. A typical approach is to blindly apply tone mapping operators without taking advantage of the extra information that comes for free from the modeling process for creating a 3D scene. In this paper, we propose a novel pipeline for tone mapping high dynamic range (HDR) images which are generated using physically based renderers. Our work exploits information of a 3D scene, such as geometry, materials, luminaries, etc. This allows to limit the assumptions that are typically made during the tone mapping step. As consequence of this, we will show improvements in term of quality while keeping the entire process straightforward.},\n  keywords = {natural scenes;rendering (computer graphics);CG-TMO;local tone mapping;computer graphics generated content;high-quality images;high-dynamic range values;low-dynamic range displays;LDR displays;HDR images;3D scene information;physically based renderers;Three-dimensional displays;Rendering (computer graphics);Real-time systems;Pipelines;Light sources;Dynamic range;High Dynamic Range Imaging;Tone Mapping;Tone Mapping Operator;Computer Graphics;Physically Based Rendering;Virtual Environments;Interactive;Real-Time applications},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923151.pdf},\n}\n\n
\n
\n\n\n
\n Physically based renderers produce high quality images with high dynamic range (HDR) values. These images therefore need to be tone mapped in order to be displayed on low dynamic range (LDR) displays. A typical approach is to blindly apply tone mapping operators without taking advantage of the extra information that comes for free from the modeling process for creating a 3D scene. In this paper, we propose a novel pipeline for tone mapping high dynamic range (HDR) images which are generated using physically based renderers. Our work exploits information about the 3D scene, such as geometry, materials, luminaires, etc. This makes it possible to limit the assumptions that are typically made during the tone mapping step. As a consequence, we show improvements in terms of quality while keeping the entire process straightforward.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improved tone mapping operator for HDR coding optimizing the distortion/spatial complexity trade-off.\n \n \n \n \n\n\n \n Lauga, P.; Valenzise, G.; Chierchia, G.; and Dufaux, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1607-1611, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952581,\n  author = {P. Lauga and G. Valenzise and G. Chierchia and F. Dufaux},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Improved tone mapping operator for HDR coding optimizing the distortion/spatial complexity trade-off},\n  year = {2014},\n  pages = {1607-1611},\n  abstract = {A common paradigm to code high dynamic range (HDR) image/video content is based on tone-mapping HDR pictures to low dynamic range (LDR), in order to obtain backward compatibility and use existing coding tools, and then use inverse tone mapping at the decoder to predict the original HDR signal. Clearly, the choice of a proper tone mapping is essential in order to achieve good coding performance. The state-of-the-art to design the optimal tone mapping operator (TMO) minimizes the mean-square-error distortion between the original and the predicted HDR image. In this paper, we argue that this is suboptimal in rate-distortion sense, and we propose a more effective TMO design strategy that takes into account also the spatial complexity (which is a proxy for the bitrate) of the coded LDR image. Our results show that the proposed optimization approach enables to obtain substantial coding gain with respect to the minimum-MSE TMO.},\n  keywords = {image coding;mean square error methods;optimisation;improved tone mapping operator for;HDR image coding;high dynamic range image coding;mean-square-error distortion;TMO design strategy;spatial complexity;optimization approach;substantial coding gain;Transform coding;Image coding;Image reconstruction;Encoding;Dynamic range;Complexity theory;Rate-distortion;High dynamic range;coding;convex optimization;spatial complexity},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926285.pdf},\n}\n\n
\n
\n\n\n
\n A common paradigm for coding high dynamic range (HDR) image/video content is based on tone-mapping HDR pictures to low dynamic range (LDR), in order to obtain backward compatibility and use existing coding tools, and then using inverse tone mapping at the decoder to predict the original HDR signal. Clearly, the choice of a proper tone mapping is essential in order to achieve good coding performance. The state-of-the-art approach to designing the optimal tone mapping operator (TMO) minimizes the mean-square-error distortion between the original and the predicted HDR image. In this paper, we argue that this is suboptimal in a rate-distortion sense, and we propose a more effective TMO design strategy that also takes into account the spatial complexity (which is a proxy for the bitrate) of the coded LDR image. Our results show that the proposed optimization approach yields substantial coding gains with respect to the minimum-MSE TMO.\n
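For orientation, the minimum-MSE baseline that this paper improves on admits a simple closed-form description often cited in this literature: a piecewise-linear tone curve whose slopes are proportional to the cube root of the log-luminance histogram bins. The sketch below is an assumed reading of that baseline, not the proposed rate-distortion-optimized TMO:

```python
import numpy as np

def min_mse_tone_curve(log_lum, n_bins=64, v_max=255.0):
    """Piecewise-linear tone curve with slopes s_k ∝ p_k^(1/3) (minimum-MSE baseline)."""
    p, edges = np.histogram(log_lum, bins=n_bins)
    p = p / p.sum()
    s = p ** (1.0 / 3.0)
    delta = edges[1] - edges[0]
    s = v_max * s / (s.sum() * delta)                     # slopes integrate to the LDR range
    tone = np.concatenate([[0.0], np.cumsum(s * delta)])  # curve values at bin edges
    return np.interp(log_lum, edges, tone)

hdr = np.random.default_rng(4).lognormal(mean=0.0, sigma=2.0, size=10000)
ldr = min_mse_tone_curve(np.log(hdr))
print(ldr.min(), ldr.max())                               # ≈ 0 ... 255
```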
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rate distortion optimized tone curve for high dynamic range compression.\n \n \n \n \n\n\n \n Le Pendu, M.; Guillemot, C.; and Thoreau, D.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1612-1616, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"RatePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952582,\n  author = {M. {Le Pendu} and C. Guillemot and D. Thoreau},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Rate distortion optimized tone curve for high dynamic range compression},\n  year = {2014},\n  pages = {1612-1616},\n  abstract = {In this paper, we define a reversible tone mapping-operator (TMO) for efficient compression of High Dynamic Range (HDR) images using a Low Dynamic Range (LDR) encoder. In our compression scheme, the HDR image is tone mapped and encoded. The inverse tone curve is also encoded, so that the decoder can reconstruct the HDR image from the LDR version. Based on a statistical model of the encoder error and assumptions on the rate of the encoded LDR image, we find a closed form solution for the optimal tone curve with respect to the rate and the mean square error (MSE) of the reconstructed HDR image. It is shown that the proposed method gives superior compression performance compared to existing tone mapping operators.},\n  keywords = {data compression;decoding;image coding;image reconstruction;mean square error methods;rate distortion theory;statistical analysis;MSE;mean square error;statistical model;HDR image reconstruction;inverse tone curve;LDR encoder;low dynamic range encoder;HDR image compression;reversible tone mapping-operator;TMO;high dynamic range image compression;rate distortion optimized tone curve;Image coding;Dynamic range;Rate-distortion;Bit rate;Image reconstruction;Mathematical model;Standards;High Dynamic Range (HDR);Tone Mapping;Companding;Gaussian Mixture Model (GMM);HEVC},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922857.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we define a reversible tone mapping operator (TMO) for efficient compression of High Dynamic Range (HDR) images using a Low Dynamic Range (LDR) encoder. In our compression scheme, the HDR image is tone mapped and encoded. The inverse tone curve is also encoded, so that the decoder can reconstruct the HDR image from the LDR version. Based on a statistical model of the encoder error and assumptions on the rate of the encoded LDR image, we find a closed-form solution for the optimal tone curve with respect to the rate and the mean square error (MSE) of the reconstructed HDR image. It is shown that the proposed method gives superior compression performance compared to existing tone mapping operators.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Evaluation of LDR, tone mapped and HDR stereo matching using cost-volume filtering approach.\n \n \n \n \n\n\n \n Akhavan, T.; Yoo, H.; and Gelautz, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1617-1621, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"EvaluationPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952583,\n  author = {T. Akhavan and H. Yoo and M. Gelautz},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Evaluation of LDR, tone mapped and HDR stereo matching using cost-volume filtering approach},\n  year = {2014},\n  pages = {1617-1621},\n  abstract = {We present stereo matching solutions based on a fast cost-volume filtering approach for High Dynamic Range (HDR) scenes. Multi-exposed stereo images are captured and used to generate HDR and Tone Mapped (TM) images of the left and right views. We perform stereo matching on conventional, Low Dynamic Range (LDR) images, original HDR, as well as TM images by customizing the matching algorithm for each of them. An evaluation on the disparity maps computed from the different approaches demonstrates that stereo matching on HDR images outperforms conventional LDR stereo matching and TM stereo matching, with the most discriminative disparity maps achieved by using HDR color information and log-luminance gradient values for matching cost calculation.},\n  keywords = {filtering theory;image colour analysis;image matching;stereo image processing;LDR evaluation;HDR stereo matching algorithm;tone mapped stereo matching algorithm;fast cost-volume filtering approach;high dynamic range scenes;TM images;multiexposed stereo images;low dynamic range images;disparity maps evaluation;LDR stereo matching;discriminative disparity maps;HDR color information;log-luminance gradient values;Stereo vision;Dynamic range;Lighting;Image color analysis;Image coding;Videos;Stereo Matching;Low Dynamic Range (LDR);High Dynamic Range (HDR);Tone Mapping (TM)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922909.pdf},\n}\n\n
\n
\n\n\n
\n We present stereo matching solutions based on a fast cost-volume filtering approach for High Dynamic Range (HDR) scenes. Multi-exposed stereo images are captured and used to generate HDR and Tone Mapped (TM) images of the left and right views. We perform stereo matching on conventional, Low Dynamic Range (LDR) images, original HDR, as well as TM images by customizing the matching algorithm for each of them. An evaluation on the disparity maps computed from the different approaches demonstrates that stereo matching on HDR images outperforms conventional LDR stereo matching and TM stereo matching, with the most discriminative disparity maps achieved by using HDR color information and log-luminance gradient values for matching cost calculation.\n
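A minimal version of the cost-volume filtering pipeline underlying this evaluation (absolute-difference costs and a box filter stand in for the actual matching costs, e.g. HDR color or log-luminance gradients, and for the edge-aware filter):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def stereo_cost_volume(left, right, max_disp=16, win=9):
    """Winner-take-all disparity after filtering each cost slice."""
    h, w = left.shape
    volume = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        cost = np.abs(left[:, d:] - right[:, : w - d])    # per-pixel matching cost
        volume[d, :, d:] = uniform_filter(cost, size=win) # cost aggregation (filtering)
    return volume.argmin(axis=0)                          # per-pixel winner-take-all

rng = np.random.default_rng(5)
right = rng.random((64, 96))
left = np.roll(right, 4, axis=1)                          # ground-truth disparity = 4
print(np.median(stereo_cost_volume(left, right)))         # ≈ 4
```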
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Analysis of the consequences of data quality and calibration on 3D HDR image generation.\n \n \n \n \n\n\n \n Bonnard, J.; Valette, G.; Nourrit, J.; and Loscos, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1622-1626, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AnalysisPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952584,\n  author = {J. Bonnard and G. Valette and J. Nourrit and C. Loscos},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Analysis of the consequences of data quality and calibration on 3D HDR image generation},\n  year = {2014},\n  pages = {1622-1626},\n  abstract = {We propose to analyze consequences of input data quality on 3D HDR image generation. Input data are images from different viewpoints and different exposures. The ease and precision of 3D HDR images merging depends on how input data are created or acquired. We study the benefits and drawbacks of using an inbuilt multiview camera against a single camera with a simulation on computer generated images. This work builds on a previously published 3D HDR method based on disparity to guide HDR matching. In this paper, we outline the errors that occur when too little precaution is taken, coming on the one hand from poor pixel quality and on the other hand from poor geometrical setup.},\n  keywords = {calibration;cameras;image processing;data quality consequences;data calibration;3D HDR image generation;multiview camera;computer generated images;3D HDR method;HDR matching;geometrical setup;Three-dimensional displays;Image color analysis;Cameras;PSNR;Colored noise;Calibration;3D HDR videos;camera noises},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922861.pdf},\n}\n\n
\n
\n\n\n
\n We propose to analyze the consequences of input data quality on 3D HDR image generation. The input data are images from different viewpoints with different exposures. The ease and precision of 3D HDR image merging depend on how the input data are created or acquired. We study the benefits and drawbacks of using an inbuilt multiview camera versus a single camera, with a simulation on computer generated images. This work builds on a previously published 3D HDR method based on disparity to guide HDR matching. In this paper, we outline the errors that occur when too little precaution is taken, coming on the one hand from poor pixel quality and on the other hand from a poor geometrical setup.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Real-time video based lighting using GPU raytracing.\n \n \n \n \n\n\n \n Kronander, J.; Dahlin, J.; Jönsson, D.; Kok, M.; Schön, T. B.; and Unger, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1627-1631, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Real-timePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952585,\n  author = {J. Kronander and J. Dahlin and D. Jönsson and M. Kok and T. B. Schön and J. Unger},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Real-time video based lighting using GPU raytracing},\n  year = {2014},\n  pages = {1627-1631},\n  abstract = {The recent introduction of high dynamic range (HDR) video cameras has enabled the development of image based lighting techniques for rendering virtual objects illuminated with temporally varying real world illumination. A key challenge in this context is that rendering realistic objects illuminated with video environment maps is computationally demanding. In this work, we present a GPU based rendering system based on the NVIDIA OptiX [1] framework, enabling real time raytracing of scenes illuminated with video environment maps. For this purpose, we explore and compare several Monte Carlo sampling approaches, including bidirectional importance sampling, multiple importance sampling and sequential Monte Carlo samplers. While previous work have focused on synthetic data and overly simple environment map sequences, we have collected a set of real world dynamic environment map sequences using a state-of-art HDR video camera for evaluation and comparisons. Based on the result we show that in contrast to CPU renderers, for a GPU implementation multiple importance sampling and bidirectional importance sampling provide better results than sequential Monte Carlo samplers in terms of flexibility, computational efficiency and robustness.},\n  keywords = {graphics processing units;lighting;Monte Carlo methods;ray tracing;video cameras;environment map sequences;synthetic data;sequential Monte Carlo samplers;multiple importance sampling;bidirectional importance sampling;NVIDIA OptiX framework;video environment maps;virtual object rendering;image based lighting;video cameras;HDR;high dynamic range;real-time video based lighting;GPU ray tracing;Lighting;Monte Carlo methods;Rendering (computer graphics);Cameras;Probes;Streaming media;Graphics processing units;Image Based Lighting;HDR Video;Video Based Lighting},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926583.pdf},\n}\n\n
\n
\n\n\n
\n The recent introduction of high dynamic range (HDR) video cameras has enabled the development of image based lighting techniques for rendering virtual objects illuminated with temporally varying real world illumination. A key challenge in this context is that rendering realistic objects illuminated with video environment maps is computationally demanding. In this work, we present a GPU based rendering system built on the NVIDIA OptiX [1] framework, enabling real time raytracing of scenes illuminated with video environment maps. For this purpose, we explore and compare several Monte Carlo sampling approaches, including bidirectional importance sampling, multiple importance sampling and sequential Monte Carlo samplers. While previous work has focused on synthetic data and overly simple environment map sequences, we have collected a set of real world dynamic environment map sequences using a state-of-the-art HDR video camera for evaluation and comparisons. Based on the results, we show that, in contrast to CPU renderers, for a GPU implementation multiple importance sampling and bidirectional importance sampling provide better results than sequential Monte Carlo samplers in terms of flexibility, computational efficiency and robustness.\n
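The multiple importance sampling combination favoured by this comparison can be sketched with the standard balance heuristic on a toy 1D integrand (the integrand and the two sampling densities are toy assumptions, not the rendering-specific strategies):

```python
import numpy as np
rng = np.random.default_rng(6)

f = lambda x: np.exp(-x ** 2) * (1 + np.sin(3 * x) ** 2)        # toy "radiance" integrand
x1 = rng.normal(size=5000)                                       # strategy 1: normal samples
p1 = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
x2 = rng.uniform(-3, 3, size=5000)                               # strategy 2: uniform samples
p2 = lambda x: np.where(np.abs(x) <= 3, 1 / 6.0, 0.0)

def mis(samples, densities):
    """Balance heuristic: weight w_i(x) = n_i p_i(x) / sum_k n_k p_k(x)."""
    n = [len(s) for s in samples]
    est = 0.0
    for s in samples:
        denom = sum(nk * pk(s) for nk, pk in zip(n, densities))  # sum_k n_k p_k(x)
        est += np.sum(f(s) / denom)                              # (1/n_i) sum_j w_i f / p_i
    return est

print(mis([x1, x2], [p1, p2]))   # ≈ ∫ f dx ≈ 2.66
```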
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Parameter estimation in Bayesian Blind Deconvolution with super Gaussian image priors.\n \n \n \n \n\n\n \n Vega, M.; Molina, R.; and Katsaggelos, A. K.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1632-1636, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ParameterPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952586,\n  author = {M. Vega and R. Molina and A. K. Katsaggelos},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Parameter estimation in Bayesian Blind Deconvolution with super Gaussian image priors},\n  year = {2014},\n  pages = {1632-1636},\n  abstract = {Super Gaussian (SG) distributions have proven to be very powerful prior models to induce sparsity in Bayesian Blind Deconvolution (BD) problems. Their conjugate based representations make them specially attractive when Variational Bayes (VB) inference is used since their variational parameters can be calculated in closed form with the sole knowledge of the energy function of the prior model. In this work we show how the introduction in the SG distribution of a global strength (not necessary scale) parameter can be used to improve the quality of the obtained restorations as well as to introduce additional information on the global weight of the prior. A model to estimate the new unknown parameter within the Bayesian framework is provided. Experimental results, on both synthetic and real images, demonstrate the effectiveness of the proposed approach.},\n  keywords = {Bayes methods;deconvolution;Gaussian distribution;image restoration;inference mechanisms;parameter estimation;parameter estimation;Bayesian blind deconvolution problems;super Gaussian image priors;conjugate based representations;variational Bayes inference;energy function sole knowledge;image restoration;Deconvolution;Bayes methods;Estimation;Kernel;Image restoration;Histograms;Electronic mail;Bayesian methods;image processing;image restoration;Super Gaussian;blind deconvolution},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922671.pdf},\n}\n\n
\n
\n\n\n
\n Super Gaussian (SG) distributions have proven to be very powerful prior models for inducing sparsity in Bayesian Blind Deconvolution (BD) problems. Their conjugate based representations make them especially attractive when Variational Bayes (VB) inference is used, since their variational parameters can be calculated in closed form with the sole knowledge of the energy function of the prior model. In this work we show how the introduction into the SG distribution of a global strength (not necessarily scale) parameter can be used to improve the quality of the obtained restorations as well as to introduce additional information on the global weight of the prior. A model to estimate the new unknown parameter within the Bayesian framework is provided. Experimental results, on both synthetic and real images, demonstrate the effectiveness of the proposed approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Restoration of images corrupted by mixed Gaussian-impulse noise by iterative soft-hard thresholding.\n \n \n \n \n\n\n \n Filipović, M.; and Jukić, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1637-1641, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"RestorationPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952587,\n  author = {M. Filipović and A. Jukić},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Restoration of images corrupted by mixed Gaussian-impulse noise by iterative soft-hard thresholding},\n  year = {2014},\n  pages = {1637-1641},\n  abstract = {We address the problem of restoration of images which have been affected by impulse or a combination of impulse and Gaussian noise. We propose a patch-based approach that exploits approximate sparse representations of image patches in learned dictionaries. For every patch, sparse representation in a dictionary is enforced by ℓ1-norm penalty, and sparsity of the residual is enforced by ℓ0-quasi-norm penalty. The obtained non-convex problem is solved iteratively by a combination of soft and hard thresholding, and a proof of convergence to a local minimum is given. Experimental evaluation suggests that the proposed approach can produce state-of-the-art results for some types of images, especially in terms of the structural similarity (SSIM) measure. In addition, the proposed iterative thresholding algorithm could possibly be applied to general inverse problems.},\n  keywords = {concave programming;Gaussian noise;image representation;image restoration;image segmentation;impulse noise;inverse problems;iterative methods;image restoration;mixed Gaussian-impulse noise;iterative soft-hard thresholding;sparse representations;image patches;ℓ1-norm penalty;ℓ0-quasinorm penalty;nonconvex problem;structural similarity measure;SSIM;inverse problems;Denoising;Impulse Noise;Sparse Representation;Dictionary;Thresholding},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924259.pdf},\n}\n\n
\n
\n\n\n
\n We address the problem of restoring images which have been affected by impulse noise or a combination of impulse and Gaussian noise. We propose a patch-based approach that exploits approximate sparse representations of image patches in learned dictionaries. For every patch, sparse representation in a dictionary is enforced by an ℓ1-norm penalty, and sparsity of the residual is enforced by an ℓ0-quasi-norm penalty. The resulting non-convex problem is solved iteratively by a combination of soft and hard thresholding, and a proof of convergence to a local minimum is given. Experimental evaluation suggests that the proposed approach can produce state-of-the-art results for some types of images, especially in terms of the structural similarity (SSIM) measure. In addition, the proposed iterative thresholding algorithm could possibly be applied to general inverse problems.\n
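On a single patch, the interplay of the two thresholding rules can be sketched as follows: soft thresholding is the proximal step for the ℓ1 penalty on the dictionary coefficients, and hard thresholding for the ℓ0 penalty on the impulse-like residual (the dictionary, step size and penalty weights are illustrative, not the paper's settings):

```python
import numpy as np

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)   # prox of the l1 penalty
hard = lambda v, t: v * (np.abs(v) > t)                           # prox of the l0 quasi-norm

def soft_hard_restore(y, D, lam1=0.1, lam0=1.0, iters=200):
    """Model y ≈ D a + r, with a sparse (l1) and r sparse (l0, the impulses)."""
    a, r = np.zeros(D.shape[1]), np.zeros_like(y)
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    for _ in range(iters):
        a = soft(a + step * D.T @ (y - D @ a - r), step * lam1)   # soft-threshold coefficients
        r = hard(y - D @ a, lam0)                                 # hard-threshold residual
    return D @ a, r

rng = np.random.default_rng(7)
D = rng.normal(size=(64, 128)); D /= np.linalg.norm(D, axis=0)
a0 = np.zeros(128); a0[:4] = rng.normal(size=4)
y = D @ a0
y[[5, 30]] += 5.0                                                 # impulse corruption
xhat, r = soft_hard_restore(y, D)
print(np.nonzero(r)[0])                                           # ≈ [5, 30]
```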
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparse reconstruction of facial expressions with localized gabor moments.\n \n \n \n \n\n\n \n Mourão, A.; Borges, P.; Correia, N.; and Magalhães, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1642-1646, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"SparsePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952588,\n  author = {A. Mourão and P. Borges and N. Correia and J. Magalhães},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse reconstruction of facial expressions with localized gabor moments},\n  year = {2014},\n  pages = {1642-1646},\n  abstract = {Facial expression recognition depends on the detection of a few subtle facial feature traces. EMFACS (Emotion Facial Action Coding System) is a taxonomy of face muscle movements and positions called Action Units (AU) [1]. AUs can be combined to describe complex facial expressions. We propose to (1) deconstruct facial expressions into face regions, grouping AUs by their proximity and contour direction; (2) recognize facial expressions by combining sparse reconstruction methods with face regions. We aim at finding a minimal set of AU to represent a given expression and apply l1 reconstruction to compute the deviation from the average face as an additive model of facial micro-expressions (the AUs). We compared our proposal to existing methods on the CK+ [2] and JAFFE datasets [3]. Our experiments indicate that sparse reconstruction with l1 penalty outperforms SVM and k-NN baselines. On the CK+ dataset, the best accuracy (89.8%) was obtained using sparse reconstruction.},\n  keywords = {emotion recognition;face recognition;Gabor filters;image reconstruction;facial expression recognition;localized Gabor moments;EMFACS;emotion facial action coding system;face muscle movements;action units;contour direction;sparse reconstruction methods;face regions;facial feature traces;Face;Face recognition;Gabor filters;Gold;Image reconstruction;Dictionaries;Robustness;facial expression recognition;sparse reconstruction;Gabor wavelets},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924805.pdf},\n}\n\n
\n
\n\n\n
\n Facial expression recognition depends on the detection of a few subtle facial feature traces. EMFACS (Emotion Facial Action Coding System) is a taxonomy of face muscle movements and positions called Action Units (AU) [1]. AUs can be combined to describe complex facial expressions. We propose to (1) deconstruct facial expressions into face regions, grouping AUs by their proximity and contour direction; and (2) recognize facial expressions by combining sparse reconstruction methods with face regions. We aim at finding a minimal set of AUs to represent a given expression and apply l1 reconstruction to compute the deviation from the average face as an additive model of facial micro-expressions (the AUs). We compared our proposal to existing methods on the CK+ [2] and JAFFE [3] datasets. Our experiments indicate that sparse reconstruction with an l1 penalty outperforms SVM and k-NN baselines. On the CK+ dataset, the best accuracy (89.8%) was obtained using sparse reconstruction.\n
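A schematic of the l1 reconstruction step described here, explaining a face's deviation from the average face with a small set of atoms from a hypothetical AU dictionary (sklearn's Lasso stands in for whatever solver the authors use; dictionary and data are random stand-ins):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
n_pix, n_aus = 256, 30
AU = rng.normal(size=(n_pix, n_aus))                     # hypothetical per-AU deviations
mean_face = rng.normal(size=n_pix)

w_true = np.zeros(n_aus); w_true[[2, 7]] = [1.5, -1.0]   # expression = a few active AUs
face = mean_face + AU @ w_true

lasso = Lasso(alpha=0.01).fit(AU, face - mean_face)      # l1-penalised reconstruction
print(np.nonzero(lasso.coef_)[0])                        # ≈ [2, 7]: a minimal AU set
```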
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A compressible template protection scheme for face recognition based on sparse representation.\n \n \n \n \n\n\n \n Muraki, Y.; Furukawa, M.; Fujiyoshi, M.; Tonomura, Y.; and Kiya, H.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1647-1651, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952589,\n  author = {Y. Muraki and M. Furukawa and M. Fujiyoshi and Y. Tonomura and H. Kiya},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A compressible template protection scheme for face recognition based on sparse representation},\n  year = {2014},\n  pages = {1647-1651},\n  abstract = {In applications using face recognition, facial images called templates should be securely managed for privacy protection and security. This paper studies a sparse representation-based face recognition system with a new template protection scheme. The proposed scheme uses two transformations for template protection; random pixel permutation and downsampling. Thanks to these transformations, protected templates can be efficiently compressed, whereas conventional schemes do not offer such functionality. Experimental results demonstrate that the system does not degrade face recognition performance even facial templates are protected. Thus, the proposed scheme can reduce the size of the template repository in practical face recognition systems.},\n  keywords = {face recognition;image representation;sparse matrices;compressible template protection scheme;face recognition;sparse representation;facial images;privacy protection;random pixel permutation;downsampling;Image coding;Abstracts;Noise;Cryptography;Barium;Robustness;Biomedical imaging;Cancelable Biometrics;Authentication;Random Projection;Noise Addition;Compressed Sensing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924869.pdf},\n}\n\n
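
The two protection transforms named in the abstract, random pixel permutation followed by downsampling, can be sketched directly. The key handling, permutation granularity and downsampling factor below are illustrative assumptions rather than the parameters used in the paper.

import numpy as np

def protect_template(face, key, factor=2):
    # Key-driven random pixel permutation followed by plain downsampling.
    rng = np.random.default_rng(key)          # the secret key seeds the permutation
    flat = face.reshape(-1)
    scrambled = flat[rng.permutation(flat.size)].reshape(face.shape)
    return scrambled[::factor, ::factor]      # keep every factor-th row and column

face = np.random.default_rng(0).integers(0, 256, (64, 64))   # stand-in "template"
print(protect_template(face, key=1234).shape)                # (32, 32)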

Near light source location estimation using illumination of a diffused planar background.
Chotikakamthorn, N.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1652-1656, Sep. 2014.

@InProceedings{6952590,
  author = {N. Chotikakamthorn},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Near light source location estimation using illumination of a diffused planar background},
  year = {2014},
  pages = {1652-1656},
  abstract = {The problem of light source location estimation is considered. It is shown that the location of a near light source can be estimated from an optical-depth image pair using information available from an illuminated Lambertian planar background. The method estimates the projected source location on the planar background from the surface illumination gradient. The distance of the light source from the planar background, which is equivalent to its elevation angle, is estimated by fitting the radiance of the background surface, as observed in an optical image, with that synthesized at different light distances. The fitting equation is formulated such that the possible existence of environment light can be taken into account. Experimental results with real images are provided.},
  keywords = {augmented reality;computer vision;curve fitting;gradient methods;lighting;near light source location estimation problem;diffused planar background illumination;optical-depth image pair;illuminated Lambertian planar background;projected source location estimation;surface illumination gradient;background surface radiance fitting;light distances;computer vision;augmented reality;Light sources;Estimation;Lighting;Optical imaging;Cameras;Optical reflection;Light source estimation;Computer vision;Augmented reality;Shape from X},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925409.pdf},
}
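
The height-fitting step lends itself to a small worked example: synthesize the Lambertian radiance of the plane for candidate source heights and keep the height whose least-squares fit (source intensity plus an ambient term) best matches the observation. Everything below, from the grid search to the ambient model, is an illustrative reconstruction, not the paper's exact formulation.

import numpy as np

def synth_radiance(px, py, h, X, Y):
    # Lambertian plane z=0 lit by a point source at (px, py, h):
    # radiance ~ cos(theta) / r^2 = h / r^3.
    r2 = (X - px) ** 2 + (Y - py) ** 2 + h ** 2
    return h / r2 ** 1.5

def estimate_height(obs, px, py, X, Y, heights):
    # For each candidate height, fit obs ~ a*model + b (source intensity a,
    # ambient level b) by least squares; keep the best-fitting height.
    best_h, best_err = None, np.inf
    for h in heights:
        m = synth_radiance(px, py, h, X, Y).ravel()
        A = np.column_stack([m, np.ones_like(m)])
        coef, *_ = np.linalg.lstsq(A, obs.ravel(), rcond=None)
        err = np.linalg.norm(A @ coef - obs.ravel())
        if err < best_err:
            best_h, best_err = h, err
    return best_h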

A study on clustering-based image denoising: From global clustering to local grouping.
Joneidi, M.; Sadeghi, M.; Sahraee-Ardakan, M.; Babaie-Zadeh, M.; and Jutten, C.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1657-1661, Sep. 2014.

@InProceedings{6952591,
  author = {M. Joneidi and M. Sadeghi and M. Sahraee-Ardakan and M. Babaie-Zadeh and C. Jutten},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {A study on clustering-based image denoising: From global clustering to local grouping},
  year = {2014},
  pages = {1657-1661},
  abstract = {This paper studies denoising of images contaminated with additive white Gaussian noise (AWGN). In recent years, clustering-based methods have shown promising performance. In this paper we show that low-rank subspace clustering provides a suitable clustering problem that minimizes the lower bound on the MSE of the denoising, which is optimum for Gaussian noise. Solving the corresponding clustering problem is not easy. We study some global and local sub-optimal solutions already presented in the literature and show that those that solve a better approximation of our problem result in better performance. A simple image denoising method based on dictionary learning using the idea of gain-shaped K-means is also proposed as another global suboptimal solution for clustering.},
  keywords = {AWGN;image denoising;mean square error methods;pattern clustering;clustering-based image denoising method;global clustering;local grouping;additive white Gaussian noise;AWGN;low-rank subspace clustering;lower bound;local sub-optimal solutions;dictionary learning;global suboptimal solution;gain-shaped K-means;mean square error;Dictionaries;Image denoising;Clustering algorithms;Noise reduction;AWGN;Noise measurement;Eigenvalues and eigenfunctions;Image denoising;data clustering;dictionary learning;sparse representation},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925433.pdf},
}
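
The gain-shaped K-means mentioned at the end of the abstract separates each patch into a gain (its norm) and a shape (its direction) and clusters only the shapes. A minimal spherical K-means sketch in numpy follows; the initialization, iteration count and correlation-based assignment are standard choices, not details taken from the paper.

import numpy as np

def gain_shape_kmeans(patches, K, n_iter=20, seed=0):
    # patches: (N, d). Gain = norm, shape = unit-norm direction.
    norms = np.linalg.norm(patches, axis=1, keepdims=True) + 1e-12
    shapes = patches / norms
    rng = np.random.default_rng(seed)
    atoms = shapes[rng.choice(len(shapes), K, replace=False)]
    for _ in range(n_iter):
        labels = np.argmax(shapes @ atoms.T, axis=1)     # assign by correlation
        for k in range(K):
            members = shapes[labels == k]
            if len(members):                              # re-normalize mean shape
                c = members.sum(axis=0)
                atoms[k] = c / (np.linalg.norm(c) + 1e-12)
    return atoms, labels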

Image warmness - A new perceptual feature for images and videos.
Dimopoulos, M.; and Winkler, T.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1662-1666, Sep. 2014.

@InProceedings{6952592,
  author = {M. Dimopoulos and T. Winkler},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Image warmness - A new perceptual feature for images and videos},
  year = {2014},
  pages = {1662-1666},
  abstract = {Many basic but very useful features for characterizing an image, or for calculating the similarity between two images, are based on color information. Psychological studies show that beyond the tone of color, different colors are also associated with different emotions. Thus, two colors that trigger the same impression are most likely considered to be more similar than two colors which trigger the opposite impression. We introduce a new feature called image warmness. It is based on the cold or warm impression that a single color triggers in the brain of the beholder. Image warmness provides a measure of how cold or warm an entire image is perceived by humans based on the colors it contains. In a survey and evaluation with 90 images and 101 participants we show that the values of image warmness calculated by the proposed formula are close to the average rating of the survey participants.},
  keywords = {image colour analysis;image warmness;perceptual feature;color information;cold impression;warm impression;Image color analysis;Tin;Quantization (signal);Psychology;Histograms;Videos;Semantics;image warmness;feature extraction;image similarity;human perception},
  issn = {2076-1465},
  month = {Sep.},
}
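
The paper calibrates its warmness formula against a user survey, which a short sketch cannot reproduce. Purely as an illustrative proxy for a per-image warm-versus-cold score (explicitly not the published formula), one can compare the warm channels against blue:

import numpy as np

def warmness_proxy(rgb):
    # Crude warm-vs-cold score in [-1, 1]: red (and green, as a stand-in for
    # yellow) against blue. Illustrative only; the paper's formula differs.
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    warm = r + 0.5 * g
    cold = 1.5 * b
    return float(np.mean((warm - cold) / (warm + cold + 1e-12)))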

An epipolar-constrained prior for efficient search in multi-view scenarios.
Bosch, I.; Salvador, J.; Pérez-Pellitero, E.; and Ruiz-Hidalgo, J.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1667-1671, Sep. 2014.

@InProceedings{6952613,
  author = {I. Bosch and J. Salvador and E. Pérez-Pellitero and J. Ruiz-Hidalgo},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {An epipolar-constrained prior for efficient search in multi-view scenarios},
  year = {2014},
  pages = {1667-1671},
  abstract = {In this paper we propose a novel framework for fast exploitation of multi-view cues with applicability in different image processing problems. In order to bring our proposed framework into practice, an epipolar-constrained prior is presented, onto which a random search algorithm is proposed to find good matches among the different views of the same scene. This algorithm includes a generalization of the local coherency in 2D images for multi-view wide-baseline cases. Experimental results show that the geometrical constraint allows a faster initial convergence when finding good matches. We present some applications of the proposed framework on classical image processing problems.},
  keywords = {image processing;2D images;super resolution;deblurring;epipolar line;approximate nearest neighbor;random search algorithm;image processing;epipolar-constrained prior;Image resolution;Cameras;Image reconstruction;PSNR;Proposals;Computer vision;Super resolution;deblurring;epipolar line;approximate nearest neighbor},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922805.pdf},
}
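
The prior restricts the random search to candidates close to the epipolar line of each pixel. Below is a numpy sketch of that sampling step, assuming a known fundamental matrix F; the jitter width, sampling range and coordinate conventions are illustrative, and the match-scoring loop of the full algorithm is omitted.

import numpy as np

def epipolar_samples(F, x, n, width, shape, rng):
    # Epipolar line of pixel x=(u, v) in the other view: l = F @ [u, v, 1].
    a, b, c = F @ np.array([x[0], x[1], 1.0])
    d = np.array([b, -a]) / np.hypot(a, b)         # unit direction along the line
    p0 = -c * np.array([a, b]) / (a * a + b * b)   # line point closest to origin
    t = rng.uniform(-max(shape), max(shape), n)    # random positions along the line
    pts = p0 + t[:, None] * d
    pts += rng.uniform(-width, width, (n, 2))      # small jitter off the line
    keep = (pts[:, 0] >= 0) & (pts[:, 0] < shape[1]) & \
           (pts[:, 1] >= 0) & (pts[:, 1] < shape[0])
    return pts[keep]                               # candidate match locations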

Optimized size-adaptive feature extraction based on content-matched rational wavelet filters.
Le, T.; Ziebarth, M.; Greiner, T.; and Heizmann, M.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1672-1676, Sep. 2014.

@InProceedings{6952614,
  author = {T. Le and M. Ziebarth and T. Greiner and M. Heizmann},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Optimized size-adaptive feature extraction based on content-matched rational wavelet filters},
  year = {2014},
  pages = {1672-1676},
  abstract = {One of the challenges of feature extraction in image processing is caused by the fact that objects originating from a feature class do not always appear at a single size, and the feature sizes are diverse. Hence, a multiresolution analysis using wavelets should be suitable. Because of their integer scaling factors, classical dyadic or M-channel wavelet filter banks often do not match the feature sizes occurring within the image very well. This paper presents a new method to optimally extract features of different sizes by designing a rational biorthogonal wavelet filter bank, which matches both the features' characteristics and the sizes of the most dominant features. This is achieved by matching the rational downsampling factor to the different feature sizes and matching the filter coefficients to the feature characteristics. The presented method is evaluated with the detection of defects on specular surfaces and of contaminations on manufactured metal surfaces.},
  keywords = {channel bank filters;feature extraction;image matching;image resolution;image sampling;matched filters;wavelet transforms;optimized size-adaptive feature extraction;content-matched rational wavelet filters;image processing;feature sizes;multiresolution analysis;M-channel wavelet filter banks;integer scaling factor classical dyadic filter banks;rational biorthogonal wavelet filter bank;rational downsampling factor matching;filter coefficients;manufactured metal surfaces;specular surfaces;defect detection;contaminations;Feature extraction;Surface contamination;Surface waves;Surface treatment;Metals;Standards;Wavelet transform;Feature matched filters;Classification;Machine Vision},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925077.pdf},
}
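
A rational rate change p/q is conventionally realized as upsampling by p, low-pass filtering, and keeping every q-th sample; the analysis channels in the paper build on this primitive with filters matched to the feature content. In the sketch below the filter is a generic placeholder low-pass, not the content-matched design.

import numpy as np

def rational_resample(x, p, q, h):
    # Resample x by the rational factor p/q: zero-stuff by p, filter, keep
    # every q-th output sample.
    up = np.zeros(len(x) * p)
    up[::p] = x
    y = np.convolve(up, p * h, mode="same")   # gain p compensates zero-stuffing
    return y[::q]

h = np.hanning(19)                            # placeholder low-pass
h /= h.sum()
x = np.sin(2 * np.pi * 0.02 * np.arange(200))
y = rational_resample(x, p=2, q=3, h=h)       # downsampling by the rational factor 3/2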

Multiscale keypoint analysis with triangular biorthogonal wavelets via redundant lifting.
Fujinoki, K.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1677-1680, Sep. 2014.

@InProceedings{6952615,
  author = {K. Fujinoki},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Multiscale keypoint analysis with triangular biorthogonal wavelets via redundant lifting},
  year = {2014},
  pages = {1677-1680},
  abstract = {This paper presents an efficient approach for multiscale keypoint detection based on triangular biorthogonal wavelets. The detection scheme is simple and thus fast, as only three isotropic directional components of an image, obtained by multiscale decomposition with the triangular biorthogonal wavelets, are used for keypoint localization at each scale. Redundant lifting is also considered and can be applied directly to calculate the cumulative local energy distribution that is derived from the correction of the three directional components at each scale. This gives efficient and accurate localization of keypoints, including scale information. Experimental results show that our method yields a more uniform distribution of keypoints than the conventional wavelet-based approach.},
  keywords = {discrete wavelet transforms;image processing;uniform distribution;cumulative local energy distribution;multiscale decomposition;isotropic directional components;redundant lifting;triangular biorthogonal wavelets;multiscale keypoint analysis;Lattices;Discrete wavelet transforms;Image edge detection;Feature extraction;Signal resolution;Keypoint;discrete wavelet transform;triangular lattice;lifting;redundant transform},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925189.pdf},
}
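
Redundant (undecimated) lifting keeps every sample through the predict and update steps instead of splitting into even and odd cosets, so every scale stays aligned with the original grid. The one-dimensional sketch below uses the common linear predictor; the paper's scheme operates on a triangular lattice in 2D, which this deliberately does not reproduce.

import numpy as np

def redundant_lifting_step(x):
    # Undecimated lifting: predict each sample from its two neighbors, keep
    # the detail, then update the approximation with the smoothed detail.
    pred = 0.5 * (np.roll(x, 1) + np.roll(x, -1))
    detail = x - pred
    approx = x + 0.25 * (np.roll(detail, 1) + np.roll(detail, -1))
    return approx, detail

x = np.sin(np.linspace(0, 6 * np.pi, 128))
approx, detail = redundant_lifting_step(x)   # both same length as x: redundant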

Pornography detection using BossaNova video descriptor.
Caetano, C.; Avila, S.; Guimarães, S.; and d. A. Araújo, A.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1681-1685, Sep. 2014.

@InProceedings{6952616,
  author = {C. Caetano and S. Avila and S. Guimarães and A. d. A. Araújo},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Pornography detection using BossaNova video descriptor},
  year = {2014},
  pages = {1681-1685},
  abstract = {In certain environments or for certain audiences, pornographic content may be considered inappropriate, generating the need to detect and filter it. Most works regarding pornography detection are based on the detection of human skin. However, a shortcoming of this kind of approach is its high false positive rate in contexts like beach shots or sports. Considering the development of low-level local features and the emergence of mid-level representations, we introduce a new video descriptor, which employs local binary descriptors in conjunction with BossaNova, a recent mid-level representation. Our proposed method outperforms the state-of-the-art on the Pornography dataset.},
  keywords = {image representation;object detection;video signal processing;pornography detection;BossaNova video descriptor;human skin detection;low-level local features;mid-level representations;local binary descriptors;Visualization;Feature extraction;Vectors;Skin;Multimedia communication;Histograms;Educational institutions;Pornography detection;binary descriptors;BossaNova representation;visual recognition},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925357.pdf},
}
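
BossaNova extends bag-of-words pooling: instead of a single count per codeword, it keeps a small histogram of descriptor-to-codeword distances. A minimal pooling sketch follows; the bin count, distance range and plain Euclidean distance are illustrative choices rather than the paper's settings.

import numpy as np

def bossa_nova_pool(desc, codebook, n_bins=4, r_max=2.0):
    # desc: (N, d) local descriptors, codebook: (M, d) codewords.
    # For each codeword, histogram the distances of descriptors within r_max.
    dists = np.linalg.norm(desc[:, None, :] - codebook[None, :, :], axis=2)
    pooled = np.stack([np.histogram(dists[:, m], bins=n_bins, range=(0.0, r_max))[0]
                       for m in range(len(codebook))])
    return pooled.astype(float).ravel()       # (M * n_bins,) image signature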

A texton for fast and flexible Gaussian texture synthesis.
Galerne, B.; Leclaire, A.; and Moisan, L.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1686-1690, Sep. 2014.

@InProceedings{6952617,
  author = {B. Galerne and A. Leclaire and L. Moisan},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {A texton for fast and flexible Gaussian texture synthesis},
  year = {2014},
  pages = {1686-1690},
  abstract = {Gaussian textures can be easily simulated by convolving an image sample with white noise. However, this procedure is not very flexible (in particular, it does not allow for non-uniform grids), and becomes computationally heavy for very large domains. Here we propose an algorithm that summarizes a texture sample into a synthesis-oriented texton, that is, a small image for which the discrete spot noise simulation (summed and normalized randomly-shifted copies of the texton) is more efficient than the classical convolution algorithm. Using this synthesis-oriented texture summary, Gaussian textures can be generated on-demand in a faster, simpler, and more flexible way.},
  keywords = {convolution;Gaussian processes;image texture;white noise;classical convolution algorithm;normalized randomly-shifted texton copy;discrete spot noise simulation;synthesis-oriented texton;white noise;flexible Gaussian texture synthesis;fast Gaussian texture synthesis;Computational modeling;Noise;Kernel;Approximation methods;Approximation algorithms;Convolution;Convergence;Spot noise;texton;Gaussian texture;texture synthesis;error reduction algorithm},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925559.pdf},
}
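
Discrete spot noise, as parenthesized in the abstract, sums randomly shifted copies of a zero-mean texton and normalizes by the square root of the number of spots, approaching the Gaussian texture whose covariance is the texton's autocorrelation. A minimal periodic-shift sketch:

import numpy as np

def spot_noise(texton, shape, n_spots=1000, seed=0):
    # Sum n randomly shifted (periodic) copies of the zero-mean texton;
    # the 1/sqrt(n) normalization keeps the variance of the Gaussian limit.
    rng = np.random.default_rng(seed)
    out = np.zeros(shape)
    t = texton - texton.mean()
    h, w = t.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    for _ in range(n_spots):
        i = rng.integers(shape[0])
        j = rng.integers(shape[1])
        out[(rows + i) % shape[0], (cols + j) % shape[1]] += t
    return out / np.sqrt(n_spots)

texton = np.random.default_rng(1).standard_normal((16, 16))  # stand-in texton
tex = spot_noise(texton, (128, 128))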

Determination of retinal network skeleton through mathematical morphology.
Morales, S.; Naranjo, V.; Angulo, J.; López-Mir, F.; and Alcañiz, M.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1691-1695, Sep. 2014.

@InProceedings{6952618,
  author = {S. Morales and V. Naranjo and J. Angulo and F. López-Mir and M. Alcañiz},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Determination of retinal network skeleton through mathematical morphology},
  year = {2014},
  pages = {1691-1695},
  abstract = {This paper describes a new approach to determine the vascular skeleton in retinal images. This approach is based on mathematical morphology along with curvature evaluation. In particular, a variant of the watershed transformation, the stochastic watershed, is applied to extract the vessel centerline. Its goal is to obtain the skeleton of the retinal tree directly, avoiding a prior vessel segmentation stage, in order to reduce the dependence between stages and the computational cost. Experimental results show qualitative improvements when the proposed method is compared with other state-of-the-art algorithms, especially on pathological images. Therefore, the result of this work is an efficient and effective vessel centerline extraction algorithm that can be useful for further applications and image-aided diagnosis systems.},
  keywords = {image segmentation;mathematical morphology;medical image processing;retinal recognition;retinal network skeleton determination;mathematical morphology;retinal vascular skeleton determination;retinal images;curvature evaluation;stochastic watershed transformation;vessel segmentation;pathological images;vessel centerline extraction algorithm;image-aided diagnosis systems;Skeleton;Image segmentation;Retina;Morphology;Databases;Biomedical imaging;Diseases;Retinal vascular skeleton;vessel centerline;mathematical morphology;curvature evaluation;stochastic watershed},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922109.pdf},
}

Nonlinear band-pass filtering using the TV transform.
Gilboa, G.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1696-1700, Sep. 2014.

@InProceedings{6952619,
  author = {G. Gilboa},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Nonlinear band-pass filtering using the TV transform},
  year = {2014},
  pages = {1696-1700},
  abstract = {A distinct family of nonlinear filters is presented. It is based on a new formalism, defining a nonlinear transform based on the TV functional. Scales in this sense are related to the size of the object and its contrast. Edges are very well preserved, and chosen scales of the object can be selected, removed or enhanced. We compare the behavior of the filter to other filters based on Fourier and wavelet transforms and present its unique qualities.},
  keywords = {band-pass filters;nonlinear filters;transforms;nonlinear band pass filtering;TV transform;nonlinear transform;TV functional;selected scales;wavelet transforms;Fourier transforms;Band-pass filters;TV;Feature extraction;Wavelet transforms;Total variation;TV transform;spectral TV;nonlinear filtering},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925477.pdf},
}
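
As a crude stand-in for the scale selection described above, one can difference two TV-denoised versions of a signal: structures that survive weak regularization but not strong regularization form a nonlinear band. The dual projected-gradient TV solver below is a standard 1-D routine, and the two-lambda band-pass is only an illustration, not the paper's spectral TV transform.

import numpy as np

def tv_denoise_1d(f, lam, n_iter=500, tau=0.25):
    # Solve min_u 0.5*||u - f||^2 + lam*TV(u) by projected gradient on the dual.
    p = np.zeros(len(f) - 1)
    for _ in range(n_iter):
        u = f.copy()
        u[:-1] += p                # u = f - D^T p, with (D u)_i = u[i+1] - u[i]
        u[1:] -= p
        p = np.clip(p + tau * np.diff(u), -lam, lam)
    return u

def tv_band_pass(f, lam_lo, lam_hi):
    # Keep structures removed by the strong smoothing but kept by the weak one.
    return tv_denoise_1d(f, lam_lo) - tv_denoise_1d(f, lam_hi)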

Moving target detection in airborne MIMO radar for fluctuating target RCS model.
Ghotbi, S.; Ahmadi, M.; and Sebt, M. A.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1701-1705, Sep. 2014.

@InProceedings{6952620,
  author = {S. Ghotbi and M. Ahmadi and M. A. Sebt},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Moving target detection in airborne MIMO radar for fluctuating target RCS model},
  year = {2014},
  pages = {1701-1705},
  abstract = {This paper considers the problem of target detection for multiple-input multiple-output (MIMO) radar with colocated antennas on a moving airborne platform. The target's radar cross section fluctuations degrade the detection performance of the radar. In this paper, first, we introduce a spatiotemporal signal model for airborne colocated MIMO radar which handles arbitrary transmit and receive antenna placement. Then, we employ the likelihood ratio test to derive the decision rules for fluctuating and nonfluctuating targets. In the case of full knowledge of target and interference statistical characteristics, we propose two detectors for fluctuating and nonfluctuating targets. The proposed detectors can be used to evaluate adaptive detectors such as the Kelly detector, where the interference covariance matrix is estimated using training data. Simulation results have been provided to evaluate the detection performance of the proposed detectors.},
  keywords = {airborne radar;decision theory;MIMO radar;object detection;radar antennas;radar cross-sections;radar detection;radar interference;receiving antennas;statistical testing;transmitting antennas;moving target detection;moving airborne MIMO radar platform;target RCS model fluctuation;multiple-input multiple-output radar;colocated antennas;target radar cross section fluctuations;radar detection performance degradation;spatiotemporal signal model;airborne colocated MIMO radar;receive antenna placement;arbitrary transmit antenna placement;likelihood ratio test;decision rules;nonfluctuating target detector;fluctuating target detector;interference statistic characteristics;Kelly detector;adaptive detector evaluation;interference covariance matrix;training data;Detectors;MIMO radar;Interference;Radar cross-sections;Radar antennas;Covariance matrices;Airborne MIMO radar;fluctuating target;likelihood ratio test (LRT);signal detection},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569918231.pdf},
}
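
For the simplest nonfluctuating case, a known signature in white Gaussian noise, the likelihood ratio test collapses to thresholding the matched-filter energy. The Monte Carlo sketch below illustrates that generic detector and its Pfa/Pd trade-off; it is not the paper's colocated-MIMO spatiotemporal model, and the signature and SNR are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n, trials, snr = 32, 20000, 2.0
s = np.exp(1j * np.pi * 0.3 * np.arange(n))      # assumed known target signature
s /= np.linalg.norm(s)

noise = (rng.standard_normal((trials, n)) +
         1j * rng.standard_normal((trials, n))) / np.sqrt(2)
stat_h0 = np.abs(noise @ s.conj()) ** 2          # LRT statistic under H0
stat_h1 = np.abs((np.sqrt(snr) * s + noise) @ s.conj()) ** 2

thr = np.quantile(stat_h0, 1 - 1e-2)             # threshold for Pfa = 1e-2
print("Pd at Pfa=1e-2:", np.mean(stat_h1 > thr))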

Automatic WH-based edge detector in Weibull clutter.
Chabbi, S.; Laroussi, T.; and Mezache, A.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1706-1710, Sep. 2014.

@InProceedings{6952621,
  author = {S. Chabbi and T. Laroussi and A. Mezache},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Automatic WH-based edge detector in Weibull clutter},
  year = {2014},
  pages = {1706-1710},
  abstract = {Assuming a non-stationary Weibull background with no prior knowledge about the presence or absence of a clutter edge, we propose and analyze the censoring and detection performance of the automatic censoring Weber-Haykin constant false censoring and alarm rates (ACWH-CFCAR) detector in homogeneous clutter and in the presence of a clutter edge within the reference window. The CFCAR property is assured by use of the Weber-Haykin (WH) adaptive thresholding, which bypasses the estimation of the distribution parameters. The censoring algorithm starts by considering the two lowest ranked cells and proceeds forward. The selected homogeneous set is used to estimate the unknown background level. Extensive Monte Carlo simulations show that the performance of the proposed detector is similar to that exhibited by the corresponding fixed-point censoring WH-CFAR detector.},
  keywords = {edge detection;image segmentation;Monte Carlo methods;object detection;parameter estimation;radar clutter;radar imaging;signal detection;Weibull distribution;automatic WH-based edge detector;Weibull clutter;nonstationary Weibull background;censoring performance analysis;detection performance analysis;automatic censoring Weber-Haykin detection;constant false censoring-and-alarm rates detector;ACWH-CFCAR detector;homogeneous clutter;reference window;distribution parameter estimation;unknown background level estimation;Monte Carlo simulations;fixed-point censoring WH-CFAR detector;radar signal detection systems;automatic target detection;Weber-Haykin adaptive thresholding;Clutter;Detectors;Image edge detection;Monte Carlo methods;Detection algorithms;Algorithm design and analysis;Thyristors;Weibull Clutter;Clutter Edge;Weber-Haykin Thresholding;Automatic Censoring;Automatic Detection},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569920577.pdf},
}
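
The forward censoring loop is the algorithmic core: rank the reference cells, accept the two smallest, and grow the homogeneous set while each next ranked cell stays below an adaptive threshold formed from the cells accepted so far. The multiplicative threshold rule below is a placeholder for illustration; the paper's Weber-Haykin thresholding uses a specific two-order-statistic form that is not reproduced here.

import numpy as np

def forward_censor(cells, grow=3.0):
    # Rank the cells, accept the two smallest, then extend the homogeneous set
    # while the next ranked cell stays below grow * mean(accepted).
    z = np.sort(np.asarray(cells))
    accepted = list(z[:2])
    for v in z[2:]:
        if v <= grow * np.mean(accepted):
            accepted.append(v)
        else:
            break                      # first rejection censors all larger cells
    return np.array(accepted)

# Homogeneous noise plus two strong interferers; the cells at 40 and 55 are censored.
rng = np.random.default_rng(1)
cells = np.r_[rng.exponential(1.0, 14), 40.0, 55.0]
print(len(forward_censor(cells)))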

Source number estimation in non-Gaussian noise.
Anand, G. V.; and Nagesha, P. V.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1711-1715, Sep. 2014.

@InProceedings{6952622,
  author = {G. V. Anand and P. V. Nagesha},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Source number estimation in non-Gaussian noise},
  year = {2014},
  pages = {1711-1715},
  abstract = {In this paper a new method of source number estimation in non-Gaussian noise is presented. The proposed signal subspace identification (SSI) method involves estimation of the array signal correlation matrix and determination of the number of positive eigenvalues of the estimated correlation matrix. The SSI method is applied to the problem of estimating the number of plane wave narrowband signals impinging on a uniform linear array. It is shown that the performance of the SSI method in non-Gaussian heavy-tailed noise is significantly better than that of the widely used minimum description length (MDL) method and the recently proposed entropy estimation of eigenvalues (EEE) method based on random matrix theory.},
  keywords = {array signal processing;correlation methods;eigenvalues and eigenfunctions;estimation theory;matrix algebra;source number estimation;nonGaussian noise;signal subspace identification method;SSI method;array signal correlation matrix estimation;positive eigenvalues;plane wave narrowband signals;uniform linear array;minimum description length method;MDL method;entropy estimation of eigenvalues method;EEE method;random matrix theory;Estimation;Eigenvalues and eigenfunctions;Arrays;Correlation;Signal to noise ratio;Vectors;Non-Gaussian noise;noise variance estimation;signal subspace identification;source number estimation},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921499.pdf},
}
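
The counting step is easy to sketch: form the sample correlation matrix and count the eigenvalues that clearly exceed the noise floor. In the sketch the noise variance is passed in and a safety margin absorbs the finite-sample spread of the noise eigenvalues; the paper instead estimates the noise variance from the data, which this does not reproduce.

import numpy as np

def ssi_source_count(X, sigma2, margin=2.0):
    # X: (sensors, snapshots). Count sample-correlation eigenvalues above the
    # noise floor sigma2, with a margin for finite-sample spread.
    R = X @ X.conj().T / X.shape[1]
    return int(np.sum(np.linalg.eigvalsh(R) > margin * sigma2))

# Three narrowband plane waves on a 10-sensor ULA in complex Gaussian noise.
rng = np.random.default_rng(0)
m, n = 10, 500
angles = np.deg2rad([-20.0, 5.0, 30.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(angles)))
S = rng.standard_normal((3, n)) + 1j * rng.standard_normal((3, n))
noise = 0.3 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
print(ssi_source_count(A @ S + noise, sigma2=0.18))   # noise var 2*0.3^2; prints 3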

Hybrid Bayesian variational scheme to handle parameter selection in total variation signal denoising.
Frecon, J.; Pustelnik, N.; Dobigeon, N.; Wendt, H.; and Abry, P.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1716-1720, Sep. 2014.

@InProceedings{6952623,
  author = {J. Frecon and N. Pustelnik and N. Dobigeon and H. Wendt and P. Abry},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Hybrid Bayesian variational scheme to handle parameter selection in total variation signal denoising},
  year = {2014},
  pages = {1716-1720},
  abstract = {Change-point detection problems can be solved either by variational approaches based on total variation or by Bayesian procedures. The former class leads to low computation time but requires the choice of a regularization parameter that significantly impacts the achieved solution and whose automated selection remains a challenging problem. Bayesian strategies avoid this regularization parameter selection, at the price of high computational costs. In this contribution, we propose a hybrid Bayesian variational procedure that relies on the use of a hierarchical Bayesian model while preserving the computational efficiency of total variation optimization procedures. Behavior and performance of the proposed method compare favorably against those of a fully Bayesian approach, both in terms of accuracy and of computational time. Additionally, estimation performance is compared to the Stein unbiased risk estimate, for which knowledge of the noise variance is needed.},
  keywords = {belief networks;optimisation;signal denoising;variational techniques;hybrid Bayesian variational scheme;total variation signal denoising;change-point detection problems;automated parameter selection;total variation optimization procedures;Bayes methods;Signal to noise ratio;Estimation;Solids;Computational modeling;Computational efficiency;Parameter selection;total variation;convex optimization;hierarchical Bayesian model},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922049.pdf},
}

Compressed spectrum sensing in the presence of interference: Comparison of sparse recovery strategies.
Lagunas, E.; and Nájar, M.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1721-1725, Sep. 2014.

@InProceedings{6952624,
  author = {E. Lagunas and M. Nájar},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Compressed spectrum sensing in the presence of interference: Comparison of sparse recovery strategies},
  year = {2014},
  pages = {1721-1725},
  abstract = {Existing approaches to Compressive Sensing (CS) of sparse spectra have thus far assumed models contaminated with noise (either bounded noise or Gaussian with known power). In practical Cognitive Radio (CR) networks, primary users must be detected even in the presence of low-regulated transmissions from unlicensed systems, which cannot be taken into account in the CS model because of their non-regulated nature. In [1], the authors proposed an overcomplete dictionary that contains tuned spectral shapes of the primary user to sparsely represent the primary users' spectral support, thus allowing all frequency location hypotheses to be jointly evaluated in a global unified optimization framework. Extraction of the primary user frequency locations is then performed based on sparse signal recovery algorithms. Here, we compare different sparse reconstruction strategies and we show through simulation results the link between the interference rejection capabilities and the positive semidefinite character of the residual autocorrelation matrix.},
  keywords = {cognitive radio;compressed sensing;interference suppression;radio spectrum management;signal detection;residual autocorrelation matrix;positive semidefinite character;interference rejection capabilities;sparse reconstruction;sparse signal recovery algorithms;global unified optimization framework;frequency location hypothesis;low regulated transmissions;Cognitive Radio networks;bounded noise;sparse recovery strategies;compressed spectrum sensing;Interference;Sensors;Correlation;Minimization;Signal to noise ratio;Spectral shape;Spectrum Sensing;Compressive Sensing;Interference Mitigation;Cognitive Radio},
  issn = {2076-1465},
  month = {Sep.},
}
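
The dictionary referenced in [1] collects frequency-shifted replicas of a known primary-user spectral shape, against which the observed spectrum is fitted sparsely. The toy below uses a Gaussian shape and a nonnegative projected-gradient fit; both are stand-ins for the paper's spectral shapes and for the recovery strategies it compares.

import numpy as np

def shape_dictionary(n_freq, n_shift, bw=4.0):
    # Columns: a unit-norm Gaussian spectral shape centered at each candidate bin.
    f = np.arange(n_freq)[:, None]
    centers = np.linspace(0, n_freq - 1, n_shift)[None, :]
    D = np.exp(-0.5 * ((f - centers) / bw) ** 2)
    return D / np.linalg.norm(D, axis=0)

def nn_sparse_fit(D, y, lam=0.05, n_iter=400):
    # min_x 0.5*||D x - y||^2 + lam*sum(x) subject to x >= 0,
    # solved by projected gradient descent.
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = np.maximum(x - (D.T @ (D @ x - y) + lam) / L, 0.0)
    return x

rng = np.random.default_rng(0)
D = shape_dictionary(128, 64)
y = D[:, 20] + 0.05 * rng.standard_normal(128)   # one primary user plus noise
print(int(np.argmax(nn_sparse_fit(D, y))))       # at or near column 20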

Compressed sensing K-best detection for sparse multi-user communications.
Knoop, B.; Monsees, F.; Bockelmann, C.; Peters-Drolshagen, D.; Paul, S.; and Dekorsy, A.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1726-1730, Sep. 2014.

@InProceedings{6952625,
  author = {B. Knoop and F. Monsees and C. Bockelmann and D. Peters-Drolshagen and S. Paul and A. Dekorsy},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Compressed sensing K-best detection for sparse multi-user communications},
  year = {2014},
  pages = {1726-1730},
  abstract = {Machine-type communications are quite often of very low data rate and of sporadic nature, and therefore not well-suited for today's high data rate cellular communication systems. Since signaling overhead must be reasonable in relation to message size, research towards joint activity and data estimation was initiated. When the detection of sporadic multi-user signals is modeled as a sparse vector recovery problem, signaling concerning node activity can be avoided, as was demonstrated in previous works. In this paper we show how the well-known K-Best detection can be modified to approximately solve this finite alphabet Compressed Sensing problem. We also demonstrate that this approach is robust against parameter variations and even works in cases where fewer measurements than unknown sources are available.},
  keywords = {cellular radio;compressed sensing;multiuser detection;sporadic multiuser signal detection;sparse multiuser communications;cellular communication systems;finite alphabet;sparse vector recovery problem;machine-type communications;K-best detection;compressed sensing;Detectors;Complexity theory;Measurement;Vectors;Signal to noise ratio;Compressed sensing;Robustness;K-Best algorithm;multi-user detection;sparse signal processing;Compressed Sensing},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923053.pdf},
}
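
K-best detection over the activity-augmented alphabet (the usual symbols plus zero for inactivity) can be sketched as a breadth-first beam search that keeps the K best partial symbol vectors per layer. Practical K-best detectors score partial paths through a QR-triangularized metric; the full-residual scoring below is a simplification for illustration, and the toy system is arbitrary.

import numpy as np

def k_best_detect(H, y, alphabet=(0.0, -1.0, 1.0), K=8):
    # Extend every kept partial vector by one symbol per layer; keep the K best.
    n = H.shape[1]
    beam = [((), 0.0)]
    for depth in range(1, n + 1):
        cand = []
        for seq, _ in beam:
            for a in alphabet:
                s = seq + (a,)
                r = y - H[:, :depth] @ np.array(s)   # residual of the partial choice
                cand.append((s, float(r @ r)))
        beam = sorted(cand, key=lambda t: t[1])[:K]
    return np.array(beam[0][0])

# Fewer measurements (8) than unknowns (12); only three users are active.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 12)) / np.sqrt(8)
x_true = np.zeros(12)
x_true[[1, 5, 9]] = (1.0, -1.0, 1.0)
y = H @ x_true + 0.01 * rng.standard_normal(8)
print(k_best_detect(H, y))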

CFAR detection of spatially distributed targets in K-distributed clutter with unknown parameters.
Nouar, N.; and Farrouki, A.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1731-1735, Sep. 2014.

@InProceedings{6952626,
  author = {N. Nouar and A. Farrouki},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {CFAR detection of spatially distributed targets in K-distributed clutter with unknown parameters},
  year = {2014},
  pages = {1731-1735},
  abstract = {The paper deals with Constant False Alarm Rate (CFAR) detection of spatially distributed targets embedded in K-distributed clutter with correlated texture and unknown parameters. The proposed Cell Averaging-based detector automatically selects the suitable pre-computed threshold factor in order to maintain a prescribed Probability of False Alarm (Pfa). The threshold factors are computed offline through Monte Carlo simulations for different clutter parameters and correlation degrees. The online estimation procedure for the clutter parameters has been implemented using a Maximum Likelihood Moments approach. Performance analysis of the proposed detector assumes unknown shape and scale parameters and a Multiple Dominant Scattering (MDS) centers model for spatially distributed targets.},
  keywords = {maximum likelihood detection;method of moments;Monte Carlo methods;object detection;parameter estimation;radar clutter;radar detection;radar resolution;radar target recognition;spatially distributed target CFAR detection;K-distributed clutter;texture correlation;cell averaging-based detector;false alarm pfa probability;threshold factor;Monte Carlo simulation;correlation degree;online estimation procedure;maximum likelihood moment approach;multiple dominant scattering center model;MDS;high resolution radar detection;constant false alarm rate detection;Clutter;Detectors;Maximum likelihood estimation;Shape;Method of moments;Table lookup;Distributed targets;CFAR detection;K-distribution;MDS},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924223.pdf},
}
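
The offline tabulation step generalizes a simple recipe: simulate the detector statistic under H0 and read the threshold factor off the (1 - Pfa) quantile. The sketch below does this for the textbook case of exponentially distributed (square-law Gaussian) clutter; the paper's lookup tables instead span K-distributed clutter over grids of shape, scale and correlation values.

import numpy as np

def calibrate_alpha(pfa, n_ref, n_trials=200_000, seed=0):
    # H0 Monte Carlo for the test "cell under test > alpha * mean(reference)":
    # alpha is the (1 - pfa) quantile of the H0 statistic.
    rng = np.random.default_rng(seed)
    cut = rng.exponential(1.0, n_trials)
    ref = rng.exponential(1.0, (n_trials, n_ref)).mean(axis=1)
    return float(np.quantile(cut / ref, 1.0 - pfa))

print(calibrate_alpha(pfa=1e-3, n_ref=16))   # threshold factor for 16 reference cells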

Distribution mixtures, a reduced-bias estimation algorithm.
Paul, N.; Girard, A.; and Terré, M.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1736-1740, Sep. 2014.

@InProceedings{6952627,
  author = {N. Paul and A. Girard and M. Terré},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Distribution mixtures, a reduced-bias estimation algorithm},
  year = {2014},
  pages = {1736-1740},
  abstract = {We focus on the definition of a new optimization criterion for the estimation of distribution mixtures, based on an evolution of the K-Product criterion [5]. For the case of monovariate observations we show that the newly proposed criterion does not have any local non-global minimizer. This property is also observed for multivariate observations. The relevance of the new K-Product criterion is theoretically studied and analyzed through simulations (in some monovariate cases). We show that for a mixture of three separate uniform components, the distance between the criterion's unique minimizer and the true component expectations is less than half the components' standard deviation.},
  keywords = {estimation theory;optimisation;distribution mixtures;reduced bias estimation algorithm;optimization criteria;distributions estimation;k-product criterion;monovariate observations;multivariate observations;Standards;Vectors;Estimation;Equations;Probability density function;Classification algorithms;Mathematical model;K-means;K-products;Distribution mixtures},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924645.pdf},
}
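
For orientation, the original K-Product criterion in the monovariate case is J(a_1, ..., a_K) = sum_n prod_k (x_n - a_k)^2, and zeroing its derivative in one a_k at a time yields a weighted-mean fixed point. The alternating update below illustrates that baseline criterion, not the evolved criterion proposed in this paper; the quantile initialization is an arbitrary choice.

import numpy as np

def k_product_fit(x, K=3, n_iter=50):
    # dJ/da_k = 0 gives a_k = sum_n w_nk x_n / sum_n w_nk,
    # with weights w_nk = prod_{j != k} (x_n - a_j)^2.
    a = np.quantile(x, np.linspace(0.1, 0.9, K))   # spread-out initialization
    for _ in range(n_iter):
        for k in range(K):
            d2 = (x[:, None] - a[None, :]) ** 2
            w = np.delete(d2, k, axis=1).prod(axis=1)
            a[k] = np.sum(w * x) / (np.sum(w) + 1e-12)
    return np.sort(a)

# Mixture of three well-separated uniform components.
rng = np.random.default_rng(1)
x = np.r_[rng.uniform(0, 1, 300), rng.uniform(4, 5, 300), rng.uniform(8, 9, 300)]
print(k_product_fit(x))   # typically close to the component means 0.5, 4.5, 8.5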

Support agnostic Bayesian recovery of jointly sparse signals.
Masood, M.; and Al-Naffouri, T. Y.
In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1741-1745, Sep. 2014.

@InProceedings{6952628,
  author = {M. Masood and T. Y. Al-Naffouri},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Support agnostic Bayesian recovery of jointly sparse signals},
  year = {2014},
  pages = {1741-1745},
  abstract = {A matching pursuit method using a Bayesian approach is introduced for recovering a set of sparse signals with common support from a set of their measurements. This method performs Bayesian estimates of joint-sparse signals even when the distribution of active elements is not known. It utilizes only the a priori statistics of noise and the sparsity rate of the signal, which are estimated without user intervention. The method utilizes a greedy approach to determine the approximate MMSE estimate of the joint-sparse signals. Simulation results demonstrate the superiority of the proposed estimator.},
  keywords = {Bayes methods;iterative methods;least mean squares methods;signal processing;time-frequency analysis;support agnostic Bayesian recovery;joint sparse signals;matching pursuit;Bayesian estimates;active elements;greedy approach;MMSE estimate;Vectors;Bayes methods;Matching pursuit algorithms;Greedy algorithms;Signal to noise ratio;Sparse matrices},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924941.pdf},
}
@InProceedings{6952629,\n  author = {G. Glentis and J. Karlsson and A. Jakobsson and J. Li},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient spectral analysis in the missing data case using sparse ML methods},\n  year = {2014},\n  pages = {1746-1750},\n  abstract = {Given their wide applicability, several sparse high-resolution spectral estimation techniques and their implementation have been examined in the recent literature. In this work, we further the topic by examining a computationally efficient implementation of the recent SMLA algorithms in the missing data case. The work is an extension of our implementation for the uniformly sampled case, and offers a notable computational gain as compared to the alternative implementations in the missing data case.},\n  keywords = {maximum likelihood estimation;spectral analysis;missing data case;computational gain;sparse high-resolution spectral estimation;SMLA algorithms;sparse maximum likelihood methods;spectral analysis;Vectors;Zinc;Estimation;Covariance matrices;Tin;Next generation networking;Educational institutions;Spectral estimation theory and methods;Sparse Maximum Likelihood methods;fast algorithms},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924999.pdf},\n}\n\n
@InProceedings{6952630,\n  author = {I. Erer and K. Sarikaya and H. Bozkurt},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Enhanced radar imaging via sparsity regularized 2D linear prediction},\n  year = {2014},\n  pages = {1751-1755},\n  abstract = {ISAR imaging based on the 2D linear prediction uses the l2-norm minimization of the prediction error to obtain 2D autoregressive (AR) model coefficients. However, this approach causes many spurious peaks in the resulting image. In this study, a new ISAR imaging method based on the 2D sparse AR modeling of backscattered data is proposed. The 2D model coefficients are obtained by the l2-norm minimization of the prediction error penalized by the l1-norm of the prediction coefficient vector. The resulting 2D prediction coefficient vector is sparse, and its use yields radar images with reduced side lobes compared to the classical l2-norm minimization.},\n  keywords = {minimisation;radar imaging;synthetic aperture radar;enhanced radar imaging;sparsity regularized 2D linear prediction;2D autoregressive;AR model coefficients;ISAR imaging method;backscattered data modeling;prediction coefficient vector;side lobes;Radar imaging;Abstracts;Minimization;Indexes;Scattering;Navigation;radar imaging;autoregressive modeling;linear prediction;sparsity;regularization},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925171.pdf},\n}\n\n
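The penalized criterion described in the abstract is, in the coefficient vector, a LASSO problem. A minimal sketch, reduced from 2-D to 1-D linear prediction for brevity; the synthetic taps and the `alpha` value are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: sparse AR (linear-prediction) coefficients via l1-penalized
# l2 prediction-error minimization, in 1-D instead of the paper's 2-D model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 512, 32
true = np.zeros(p); true[[2, 7, 19]] = [0.5, -0.3, 0.2]   # sparse AR taps
y = rng.standard_normal(n)                                # innovations
for t in range(p, n):
    y[t] += y[t-p:t][::-1] @ true                         # AR recursion

# Linear-prediction regression matrix: row t holds y[t-1], ..., y[t-p].
X = np.stack([y[t-p:t][::-1] for t in range(p, n)])
coef = Lasso(alpha=0.01, fit_intercept=False).fit(X, y[p:]).coef_
print(np.nonzero(np.abs(coef) > 1e-3)[0])  # recovered support (small spurious taps possible)
```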
@InProceedings{6952631,\n  author = {Y. Sugiura},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A fast and accurate adaptive notch filter using a monotonically increasing gradient},\n  year = {2014},\n  pages = {1756-1760},\n  abstract = {In this paper, we propose a new adaptive notch filter algorithm to achieve fast and accurate narrow-band noise reduction. In the proposed algorithm, we introduce a monotonically increasing function into the gradient, which provides fast convergence far away from the optimal frequency. We additionally introduce an enhancement function into the gradient to shape the steepness of the gradient curve. The proposed gradient can adjust the trade-off between convergence speed and estimation accuracy more flexibly. Several computational simulations show that the proposed algorithm can simultaneously provide fast convergence and highly accurate estimation compared with the conventional NLMS algorithm.},\n  keywords = {adaptive filters;gradient methods;notch filters;adaptive notch filter;monotonically increasing gradient;narrowband noise reduction;enhancement function;Convergence;Estimation;Noise;Signal processing algorithms;Accuracy;Filtering algorithms;IIR filters;Adaptive Notch Filter;Gradient Based Algorithm;Monotonically Increasing Gradient;Fast and Accurate Estimation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925299.pdf},\n}\n\n
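The specific monotonically increasing and enhancement functions are not given in the abstract, so the toy sketch below only illustrates the mechanism: far from the notch optimum the raw gradient of such cost surfaces is nearly flat, and a monotone shaping of the gradient keeps the steps usefully large there. Both functions here are assumed stand-ins, not the paper's.

```python
# Hedged sketch of gradient shaping only; raw_gradient is a toy stand-in for
# a notch-filter cost gradient (flat far from the optimum), and the |g|^p
# shaping is an assumed monotonically increasing function.
import numpy as np

def raw_gradient(w, w_opt):
    d = w - w_opt
    return np.sin(d) * np.exp(-4 * d**2)      # vanishes far from w_opt

def shaped_gradient(w, w_opt, p=0.25):
    g = raw_gradient(w, w_opt)
    return np.sign(g) * np.abs(g)**p          # boosts small gradients

w_opt, mu = 1.0, 0.5
for grad in (raw_gradient, shaped_gradient):
    w = 3.0                                   # start far from the optimum
    for _ in range(200):
        w -= mu * grad(w, w_opt)
    print(grad.__name__, round(w, 3))
# The raw update stalls near 3.0; the shaped one reaches the neighbourhood
# of 1.0, at the cost of some steady-state wander -- the speed/accuracy
# trade-off the paper tunes via its enhancement function.
```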
@InProceedings{6952632,\n  author = {A. Lavrenko and F. Römer and G. {Del Galdo} and R. Thomä and O. Arikan},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {An empirical eigenvalue-threshold test for sparsity level estimation from compressed measurements},\n  year = {2014},\n  pages = {1761-1765},\n  abstract = {Compressed sensing allows for a significant reduction of the number of measurements when the signal of interest is of a sparse nature. Most computationally efficient algorithms for signal recovery rely on some knowledge of the sparsity level, i.e., the number of non-zero elements. However, the sparsity level is often not known a priori and can even vary with time. In this contribution we show that it is possible to estimate the sparsity level directly in the compressed domain, provided that multiple independent observations are available. In fact, one can use classical model order selection algorithms for this purpose. Nevertheless, due to the influence of the measurement process they may not perform satisfactorily in the compressed sensing setup. To overcome this drawback, we propose an approach which exploits the empirical distributions of the noise eigenvalues. We demonstrate its superior performance compared to state-of-the-art model order estimation algorithms numerically.},\n  keywords = {compressed sensing;eigenvalues and eigenfunctions;empirical eigenvalue-threshold test;sparsity level estimation;compressed measurements;compressed sensing;signal recovery;multiple independent observations;model order selection algorithms;noise eigenvalues;model order estimation algorithms;Eigenvalues and eigenfunctions;Covariance matrices;Signal to noise ratio;Estimation;Noise measurement;Compressed sensing;sparsity level;detection;model order selection},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925343.pdf},\n}\n\n
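A minimal numerical sketch of the eigenvalue-count idea: with T independent compressed observations, the sample covariance has K signal eigenvalues standing above the noise floor. The fixed threshold below is a crude placeholder for the paper's empirical-distribution test of the noise eigenvalues.

```python
# Hedged sketch: estimate the sparsity level K from eigenvalues of the
# sample covariance of multiple compressed observations. The 3*sigma^2
# threshold is an illustrative placeholder, not the paper's test.
import numpy as np

rng = np.random.default_rng(2)
n, m, K, T, sigma = 128, 48, 5, 200, 0.05
Phi = rng.standard_normal((m, n)) / np.sqrt(m)       # measurement matrix

support = rng.choice(n, K, replace=False)            # common sparse support
X = np.zeros((n, T)); X[support] = rng.standard_normal((K, T))
Y = Phi @ X + sigma * rng.standard_normal((m, T))    # T compressed snapshots

eig = np.linalg.eigvalsh(Y @ Y.T / T)[::-1]          # descending eigenvalues
k_hat = int(np.sum(eig > 3 * sigma**2))              # count above noise floor
print(k_hat)                                         # expected: 5
```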
@InProceedings{6952653,\n  author = {G. Frigo and C. Narduzzi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Compressive sensing with an overcomplete dictionary for high-resolution DFT analysis},\n  year = {2014},\n  pages = {1766-1770},\n  abstract = {The problem of resolving frequency components close to the Rayleigh threshold, while using time-domain sample sequences of length not greater than N, is relevant to several waveform monitoring applications where acquisition time is upper-bounded. The paper presents a compressive sensing (CS) algorithm that enhances frequency resolution by introducing a dictionary that explicitly accounts for spectral leakage on a fine frequency grid. The proposed algorithm achieves good estimation accuracy without significantly extending total measurement time.},\n  keywords = {compressed sensing;discrete Fourier transforms;signal resolution;time-domain analysis;compressive sensing algorithm;overcomplete dictionary;high-resolution DFT analysis;frequency components;Rayleigh threshold;time-domain sample sequences;waveform monitoring;acquisition time;CS algorithm;frequency resolution;spectral leakage;fine frequency grid;estimation accuracy;total measurement time;Signal to noise ratio;Discrete Fourier transforms;Vectors;Signal resolution;Equations;Indexes;Compressed sensing;discrete Fourier transform;spectral analysis;compressive sensing;super-resolution},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925415.pdf},\n}\n\n
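A sketch of the dictionary idea: atoms on a frequency grid much finer than the 1/N Rayleigh spacing, with a generic OMP routine in place of the paper's leakage-aware dictionary and solver. The grid size, tone placement and solver are all illustrative assumptions.

```python
# Hedged sketch: sub-Rayleigh frequency recovery with an overcomplete
# Fourier dictionary on a fine grid; plain OMP replaces the paper's
# leakage-modeling dictionary and recovery algorithm.
import numpy as np

N, L = 64, 1024                       # samples, fine-grid size
t = np.arange(N)
f_grid = np.arange(L) / L
D = np.exp(2j * np.pi * t[:, None] * f_grid[None, :]) / np.sqrt(N)

f1, f2 = 205/1024, 217/1024           # spacing 12/1024 < 1/64 (Rayleigh)
y = np.exp(2j*np.pi*f1*t) + 0.8j * np.exp(2j*np.pi*f2*t)

def omp(D, y, k):
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.conj().T @ r))))
        A = D[:, idx]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef
    return idx

print(sorted(f_grid[i] for i in omp(D, y, 2)))   # ideally ~0.2002 and ~0.2119
```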
@InProceedings{6952654,\n  author = {M. Amin and B. Jokanović and T. Dogaru},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Reconstruction of locally frequency sparse nonstationary signals from random samples},\n  year = {2014},\n  pages = {1771-1775},\n  abstract = {The local sparsity property of frequency modulated (FM) signals stems from their instantaneous narrowband characteristics. This enables their reconstruction from few random signal observations over a short-time window. It is shown that for linear FM signals, the sparsity of the local frequencies is equal to the window length, thus adding another specification to the window selection requirements, besides the conventional temporal and spectral resolutions. Stable signal reconstruction within a sliding window depends on the underlying probability distribution function guiding the random sampling intervals. Both simulations and computational EM modeling data are used to demonstrate the effectiveness of local reconstructions. We consider both mono-component FM signals and multi-component signals, corresponding to maneuvering targets and human gait Doppler signatures, respectively.},\n  keywords = {frequency modulation;signal reconstruction;signal sampling;signal reconstruction;locally frequency sparse nonstationary signals;random samples;local sparsity property;frequency modulated signals;instantaneous narrowband characteristics;random signal observations;short-time window;linear FM signals;window length;window selection;temporal resolutions;spectral resolutions;sliding window;probability distribution function;random sampling intervals;monocomponent FM signals;multicomponent signals;maneuvering targets;human gait Doppler signatures;Time-frequency analysis;Frequency modulation;Image reconstruction;Chirp;Radar imaging;Compressed sensing;Local sparsity;nonstationary signals;random under-sampling;time-frequency representation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925537.pdf},\n}\n\n
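The local-sparsity premise is easy to check numerically: over a short window a linear-FM chirp occupies only a few DFT bins, and the count grows with the window length. The sketch below checks the sparsity/window-length relation only, not the random-sampling reconstruction itself; the chirp parameters and the 10% significance cut-off are illustrative.

```python
# Hedged sketch: within a short window a linear-FM chirp is locally sparse
# in frequency, and the number of significant DFT bins grows with the
# window length W.
import numpy as np

N = 1024
t = np.arange(N)
x = np.cos(2*np.pi*(0.05*t + 0.15*t**2/N))   # linear-FM chirp

for W in (16, 32, 64, 128):
    seg = x[N//2 : N//2 + W] * np.hanning(W)
    S = np.abs(np.fft.rfft(seg))
    k = int(np.sum(S > 0.1 * S.max()))       # crude count of active bins
    print(W, k)                              # k increases with W
```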
@InProceedings{6952655,\n  author = {X. Zeng and M. A. T. Figueiredo},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Robust sparsity and clustering regularization for regression},\n  year = {2014},\n  pages = {1776-1780},\n  abstract = {Based on our previously proposed SPARsity and Clustering (SPARC) regularization, we propose a robust variant of SPARC (RSPARC), which is able to detect observations corrupted by sparse outliers. The proposed RSPARC inherits the ability of SPARC to promote group-sparsity, and combines that ability with robustness to outliers. We propose algorithms of the alternating direction method of multipliers (ADMM) family to solve several regularization formulations involving SPARC regularization. Experiments show that RSPARC is a competitive robust group-sparsity-inducing regularization for regression.},\n  keywords = {regression analysis;signal processing;robust sparsity;clustering regularization;regression;sparsity-clustering regularization;SPARC regularization;RSPARC;observation detection;sparse outliers;alternating direction method-of-multipliers;ADMM family;regularization formulation;group-sparsity-inducing regularization;Robustness;Signal processing algorithms;Vectors;Signal processing;Input variables;Inverse problems;Conferences;Sparsity and clustering;group sparsity;Lasso;elastic net},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926549.pdf},\n}\n\n
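A sketch of the robustification structure: model the sparse outliers explicitly and alternate between an outlier step and a regularized regression step. Plain l1 (sklearn's Lasso) stands in for the SPARC sparsity-and-clustering regularizer, which is not reproduced here, and the thresholds are illustrative; the paper solves the joint problem with ADMM rather than this naive alternation.

```python
# Hedged sketch: robust sparse regression y = A x + s + n with sparse
# outliers s, alternating a soft-threshold outlier step with a Lasso step
# (Lasso is a stand-in for the SPARC regularizer).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
m, n = 100, 40
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[:4] = [2., 2., -1.5, 1.]
y = A @ x_true + 0.05 * rng.standard_normal(m)
y[rng.choice(m, 5, replace=False)] += 10.0          # sparse outliers

soft = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - tau, 0)
x = np.zeros(n)
for _ in range(20):
    s = soft(y - A @ x, 3.0)                        # outlier estimate
    x = Lasso(alpha=0.05, fit_intercept=False).fit(A, y - s).coef_
print(np.round(x[:6], 2))                           # close to [2, 2, -1.5, 1, 0, 0]
```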
@InProceedings{6952656,\n  author = {T. Fraga-Silva and J. Gauvain and L. Lamel},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Speech recognition of multiple accented English data using acoustic model interpolation},\n  year = {2014},\n  pages = {1781-1785},\n  abstract = {In a previous work [1], we have shown that model interpolation can be applied to acoustic model adaptation for a specific show. Compared to other approaches, this method has the advantage of being highly flexible, allowing rapid adaptation by simply reassigning the interpolation coefficients. In this work, this approach is used for multi-accented English broadcast news recognition, an arduous task due to the impact of accent variability on recognition performance. The work described in [1] is extended in two ways. First, in order to reduce the number of parameters of the interpolated model, a theoretically motivated EM-like mixture reduction algorithm is proposed. Second, beyond supervised adaptation, model interpolation is used as an unsupervised adaptation framework, where the interpolation coefficients are estimated on-the-fly for each test segment.},\n  keywords = {expectation-maximisation algorithm;interpolation;speech recognition;unsupervised learning;unsupervised adaptation framework;EM-like mixture reduction algorithm;English broadcast news data recognition;acoustic model interpolation;multiple accented English data;speech recognition;Adaptation models;Interpolation;Training;Speech recognition;Hidden Markov models;Speech;Acoustics;Model interpolation;supervised and unsupervised adaptation;multi-accented data},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925951.pdf},\n}\n\n
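The on-the-fly weight estimation admits a compact EM sketch: with the component models p_k fixed, each update of the interpolation weights of p(o) = Σₖ wₖ pₖ(o) averages the per-frame posteriors. Scalar Gaussians stand in for full acoustic models below; this is an illustration of the EM update, not the paper's system.

```python
# Hedged sketch: unsupervised estimation of interpolation coefficients on a
# test segment by EM, with scalar Gaussians standing in for accent-specific
# acoustic models.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
models = [norm(0.0, 1.0), norm(3.0, 1.0), norm(6.0, 1.0)]  # 3 "accent" models
obs = rng.normal(3.0, 1.0, 400)                            # segment from accent 2

w = np.full(len(models), 1/3)
for _ in range(30):
    lik = np.stack([m.pdf(obs) for m in models])           # (K, T) likelihoods
    post = w[:, None] * lik
    post /= post.sum(axis=0, keepdims=True)                # responsibilities
    w = post.mean(axis=1)                                  # EM weight update
print(np.round(w, 3))   # weight mass concentrates on the matching model
```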
@InProceedings{6952657,\n  author = {M. Najafian and S. Safavi and A. Hanani and M. Russell},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Acoustic model selection using limited data for accent robust speech recognition},\n  year = {2014},\n  pages = {1786-1790},\n  abstract = {This paper investigates techniques to compensate for the effects of regional accents of British English on automatic speech recognition (ASR) performance. Given a small amount of speech from a new speaker, is it better to apply speaker adaptation, or to use accent identification (AID) to identify the speaker's accent followed by accent-dependent ASR? Three approaches to accent-dependent modelling are investigated: using the `correct' accent model, choosing a model using supervised (ACCDIST-based) accent identification (AID), and building a model using data from neighbouring speakers in `AID space'. All of the methods outperform the accent-independent model, with relative reductions in ASR error rate of up to 44%. Using on average 43s of speech to identify an appropriate accent-dependent model outperforms using it for supervised speaker-adaptation, by 7%.},\n  keywords = {acoustic signal processing;learning (artificial intelligence);speaker recognition;acoustic model selection;limited data;accent robust speech recognition;regional accent effect;British English;automatic speech recognition performance;ASR performance;accent-dependent ASR;accent-dependent modelling;correct accent model;supervised ACCDIST-based accent identification model;neighbouring speakers;AID space;supervised speaker-adaptation;Speech;Adaptation models;Speech recognition;Hidden Markov models;Data models;Acoustics;Error analysis;speech recognition;acoustic data selection;accent identification},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926747.pdf},\n}\n\n
@InProceedings{6952658,\n  author = {M. J. Alam and P. Kenny and P. Dumouchel and D. O'Shaughnessy},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Robust speech recognition using warped DFT-based cepstral features in clean and multistyle training},\n  year = {2014},\n  pages = {1791-1795},\n  abstract = {This paper investigates the robustness of the warped discrete Fourier transform (WDFT)-based cepstral features for continuous speech recognition under clean and multistyle training conditions. In the MFCC and PLP front-ends, in order to approximate the nonlinear characteristics of the human auditory system in frequency, the speech spectrum is warped using the Mel-scale filterbank, which typically consists of overlapping triangular filters. It is well known that such nonlinear frequency transformation-based features provide better speech recognition accuracy than linear frequency scale features. It has been found that warping the DFT spectrum directly, rather than using filterbank averaging, provides a more precise approximation to the perceptual scales. WDFT provides non-uniform resolution filter-banks whereas DFT provides uniform resolution filter-banks. Here, we provide a performance evaluation of the following variants of the warped cepstral features: WDFT, and WDFT-linear prediction-based MFCC features. Experiments were carried out on the AURORA-4 task. Experimental results demonstrate that the WDFT-based cepstral features outperform the conventional MFCC and PLP both in clean and multistyle training conditions in terms of recognition error rates.},\n  keywords = {channel bank filters;discrete Fourier transforms;speech recognition;robust speech recognition;warped DFT based cepstral features;clean training;multistyle training;warped discrete Fourier transform;MFCC front end;PLP front end;human auditory system nonlinear characteristics;Mel-scale filter bank;perceptual scale;AURORA-4 task;Mel frequency cepstral coefficient;Speech;Speech recognition;Feature extraction;Discrete Fourier transforms;Training;Warped DFT;speech recognition;multi-style training;spectrum enhancement;linear prediction},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926727.pdf},\n}\n\n
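A sketch of the warped-evaluation idea: compute the spectrum directly on a non-uniform frequency grid obtained from a first-order all-pass warping (a common mel-scale approximation), then take log and DCT. The warping coefficient `rho`, the grid sizes, and the omission of the paper's pre-processing and LP-based variants are all assumptions of this sketch.

```python
# Hedged sketch: cepstral features from a spectrum evaluated directly on an
# all-pass-warped frequency grid (non-uniform DFT), instead of Mel
# filterbank averaging. Not the paper's full WDFT pipeline.
import numpy as np
from scipy.fft import dct

def warped_cepstra(frame, n_bins=40, n_ceps=13, rho=0.42):
    N = len(frame)
    w = np.linspace(1e-3, np.pi - 1e-3, n_bins)     # uniform warped-domain grid
    # inverse all-pass map: dense evaluation at low original frequencies
    wu = w - 2*np.arctan(rho*np.sin(w) / (1 + rho*np.cos(w)))
    E = np.exp(-1j * np.outer(wu, np.arange(N)))    # non-uniform DFT matrix
    spec = np.abs(E @ frame)**2
    return dct(np.log(spec + 1e-10), norm="ortho")[:n_ceps]

x = np.cos(0.3*np.arange(256)) + 0.1*np.random.default_rng(5).standard_normal(256)
print(warped_cepstra(x)[:5])
```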
@InProceedings{6952659,\n  author = {P. Viszlay and M. Lojka and J. Juhár},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Class-dependent two-dimensional linear discriminant analysis using two-pass recognition strategy},\n  year = {2014},\n  pages = {1796-1800},\n  abstract = {In this paper, we introduce a novel class-dependent extension of two-dimensional linear discriminant analysis (2DLDA), named CD-2DLDA, applied to automatic speech recognition using a two-pass recognition strategy. In the first pass, the class labels of the test sample are obtained using baseline recognition. The labels are then used in the CD transformation of the test features. In the second pass, recognition of the previously transformed test samples is performed using the CD-2DLDA acoustic model. The novelty of the paper lies in improving the existing 2DLDA algorithm so that more precise, class-dependent estimations are computed separately for each class. The proposed approach is evaluated in several scenarios using the TIMIT corpus in a phoneme-based continuous speech recognition task. CD-2DLDA features are compared to state-of-the-art MFCCs, conventional LDA and 2DLDA features. The experimental results show that our method performs better than MFCCs and LDA. Furthermore, the results confirm that CD-2DLDA markedly outperforms the 2DLDA method.},\n  keywords = {speech recognition;statistical analysis;phoneme-based continuous speech recognition task;TIMIT corpus;CD-2DLDA acoustic model;CD transformation;baseline recognition;automatic speech recognition;two-pass recognition strategy;novel class-dependent two-dimensional linear discriminant analysis;Speech recognition;Training;Hidden Markov models;Acoustics;Vectors;Speech;Transforms;class-dependent transformation;discriminant analysis;scatter matrix;time alignment},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926695.pdf},\n}\n\n
@InProceedings{6952660,\n  author = {C. Leitner and J. A. Morales-Cordovilla and F. Pernkopf},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Evaluation of speech enhancement based on pre-image iterations using automatic speech recognition},\n  year = {2014},\n  pages = {1801-1805},\n  abstract = {Recently, we developed pre-image iteration methods for single-channel speech enhancement. We used objective quality measures for evaluation. In this paper, we evaluate the de-noising capabilities of pre-image iterations using an automatic speech recognizer trained on clean speech data. In particular, we provide the word recognition accuracy of the de-noised utterances using white and car noise at 0, 5, 10, and 15 dB signal-to-noise ratio (SNR). Empirical results show that the utterances processed by pre-image iterations achieve a consistently better word recognition accuracy for both noise types and all SNR levels compared to the noisy data and the utterances processed by the generalized subspace speech enhancement method.},\n  keywords = {image denoising;image recognition;iterative methods;speech enhancement;speech recognition;white noise;speech enhancement evaluation;automatic speech recognition;pre-image iteration methods;single-channel speech enhancement;objective quality measures;pre-image iteration de-noising capability;clean speech data;car noise;white noise;signal-to-noise ratio;SNR;word recognition accuracy;generalized subspace speech enhancement method;Speech;Speech recognition;Speech enhancement;Kernel;Noise measurement;Signal to noise ratio;Speech enhancement;speech de-noising;pre-image iterations;automatic speech recognition},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569911953.pdf},\n}\n\n
@InProceedings{6952661,\n  author = {L. Condat},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Semi-local total variation for regularization of inverse problems},\n  year = {2014},\n  pages = {1806-1810},\n  abstract = {We propose the discrete semi-local total variation (SLTV) as a new regularization functional for inverse problems in imaging. The SLTV favors piecewise linear images, so the main drawback of the total variation (TV), its clustering effect, is avoided. Recently proposed primal-dual methods allow the corresponding optimization problems to be solved as easily and efficiently as with the classical TV.},\n  keywords = {image reconstruction;inverse problems;minimisation;pattern clustering;inverse problem regularization;discrete semilocal total variation;SLTV;piecewise linear images;TV;clustering effect;primal-dual methods;optimization problems;TV;Inverse problems;Convex functions;Image reconstruction;Imaging;Signal processing algorithms;Minimization;total variation;non-local regularization;inverse problem;convex optimization;proximal method},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922245.pdf},\n}\n\n
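A 1-D analogue makes the "piecewise linear" claim concrete: on a ramp, the classical TV pays for the constant slope, while a second-difference surrogate (an assumed simplification used here for illustration, not the paper's SLTV functional) costs nothing.

```python
# Hedged 1-D sketch: TV penalizes first differences (favoring piecewise-
# constant signals); a second-difference surrogate of the SLTV vanishes on
# linear ramps, hence favors piecewise-linear signals.
import numpy as np

x = np.linspace(0, 1, 100)            # a linear ramp
tv  = np.sum(np.abs(np.diff(x)))      # TV "pays" for the constant slope
tv2 = np.sum(np.abs(np.diff(x, 2)))   # second differences vanish on ramps
print(tv, tv2)                        # ~1.0 vs ~0.0
```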
@InProceedings{6952662,\n  author = {F. Abboud and E. Chouzenoux and J.-C. Pesquet and J.-H. Chenot and L. Laborelli},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A hybrid alternating proximal method for blind video restoration},\n  year = {2014},\n  pages = {1811-1815},\n  abstract = {Old analog television sequences suffer from a number of degradations. Some of them can be modeled through convolution with a kernel and an additive noise term. In this work, we propose a new blind deconvolution algorithm for the restoration of such sequences based on a variational formulation of the problem. Our method accounts for motion between frames, while enforcing some level of temporal continuity through the use of a novel penalty function involving optical flow operators, in addition to an edge-preserving regularization. The optimization process is performed by a proximal alternating minimization scheme benefiting from theoretical convergence guarantees. Simulation results on synthetic and real video sequences confirm the effectiveness of our method.},\n  keywords = {deconvolution;image restoration;image sequences;hybrid alternating proximal method;blind video restoration;blind deconvolution algorithm;temporal continuity;penalty function;optical flow operators;edge-preserving regularization;video sequences;Kernel;Image restoration;Minimization;Deconvolution;Video sequences;Convergence;Optimization;Blind deconvolution;video processing;optical flow;proximal algorithms;convex optimization;regularization},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925381.pdf},\n}\n\n
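A sketch of the alternating proximal structure only, in 1-D and for a single frame, without the paper's optical-flow and edge-preserving terms; the step sizes, the nonnegativity prox and the simplex-style kernel normalization are illustrative choices.

```python
# Hedged sketch: alternating proximal-gradient steps on signal x and blur
# kernel h for the data term 0.5*||h * x - y||^2 (1-D, single frame).
import numpy as np

rng = np.random.default_rng(6)
n, k = 128, 9
x_true = (rng.random(n) < 0.05).astype(float)          # spiky signal
h_true = np.hanning(k); h_true /= h_true.sum()
y = np.convolve(x_true, h_true, "same") + 0.01*rng.standard_normal(n)

conv = lambda a, b: np.convolve(a, b, "same")
x, h = y.copy(), np.full(k, 1/k)
mu_x, mu_h = 0.5, 0.05
for _ in range(300):
    r = conv(x, h) - y
    x = np.maximum(x - mu_x * conv(r, h[::-1]), 0)     # grad step + prox (x >= 0)
    # gradient w.r.t. h: inner products of the residual with shifted copies of x
    g = np.array([np.dot(r, conv(x, np.eye(k)[i])) for i in range(k)])
    h = np.maximum(h - mu_h * g, 0); h /= h.sum()      # simplex-style prox
print(np.round(h, 3))                                  # roughly bump-shaped
```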
@InProceedings{6952663,\n  author = {P. Prentašić and S. Lončarić},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Voting based automatic exudate detection in color fundus photographs},\n  year = {2014},\n  pages = {1816-1820},\n  abstract = {Diabetic retinopathy is one of the leading causes of preventable blindness. Screening programs using color fundus photographs enable early diagnosis of diabetic retinopathy, which enables timely treatment of the disease. Exudate detection algorithms are important for the development of automatic screening systems, and in this paper we present a method for detecting exudate regions in color fundus photographs. The method combines different preprocessing and candidate extraction algorithms to increase the exudate detection accuracy. First, we form an ensemble of different candidate extraction algorithms. After extracting the potential exudate regions, we apply machine-learning-based classification to detect exudate regions. For experimental validation we use the DRiDB color fundus image set, on which the presented method achieves higher accuracy than other state-of-the-art methods.},\n  keywords = {colour photography;diseases;feature extraction;image classification;image colour analysis;learning (artificial intelligence);medical image processing;voting based automatic exudate detection;color fundus photographs;diabetic retinopathy diagnosis;preventable blindness;screening programs;exudate detection algorithms;disease treatment;automatic screening systems;candidate extraction algorithms;machine learning based classification;DRiDB color fundus image set;Diabetes;Image color analysis;Retinopathy;Standards;Feature extraction;Biomedical imaging;Retina;diabetic retinopathy;exudate detection;machine learning;ensemble;image processing and analysis},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924997.pdf},\n}\n\n
@InProceedings{6952664,\n  author = {S. S. Kar and S. P. Maity and C. Delpha},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Retinal blood vessel extraction using curvelet transform and conditional fuzzy entropy},\n  year = {2014},\n  pages = {1821-1825},\n  abstract = {This work employs multiple thresholds on the matched filter response for automatic extraction of blood vessels, especially from a low-contrast and non-uniformly illuminated retinal background. The curvelet transform is first used to enhance the finest details along the vessels, followed by matched filtering to intensify the blood vessels' response. The conditional fuzzy entropy is then maximized to find the set of optimal thresholds to extract different types of vessel silhouettes from the background. A Differential Evolution algorithm is used to specify the optimal combination of the fuzzy parameters. The thresholds thus obtained extract the thin, the medium and the thick vessels from the enhanced image, which are then logically OR-ed to obtain the entire vascular tree. Performance evaluated on the publicly available DRIVE database is compared with existing blood vessel extraction methods. Experimental runs demonstrate that the proposed method outperforms the existing methods in detecting various types of vessels.},\n  keywords = {blood vessels;curvelet transforms;entropy;evolutionary computation;feature extraction;fuzzy set theory;image enhancement;matched filters;medical image processing;retinal recognition;DRIVE database;vascular tree;logically OR-ed;image enhancement;fuzzy parameters;differential evolution algorithm;vessel silhouettes;nonuniform illuminated background;low contrast illuminated background;matched filter response;multiple thresholds;conditional fuzzy entropy;curvelet transform;automatic retinal blood vessel extraction method;Biomedical imaging;Retina;Blood vessels;Entropy;Image edge detection;Transforms;Image segmentation;Retinal vessel segmentation;Curvelet;Matched filter;Conditional Fuzzy Entropy;Differential Evolution},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925099.pdf},\n}\n\n
@InProceedings{6952665,\n  author = {L. Wang and B. Sixou and F. Peyrin},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Bone microstructure reconstructions from few projections with stochastic nonlinear diffusion},\n  year = {2014},\n  pages = {1826-1830},\n  abstract = {In this work, we use a stochastic diffusion equation for the reconstruction of binary tomography cross-sections obtained from a small number of projections. The aim of this new method is to escape from local minima by changing the shape of the boundaries of the image. First, an initial binary image is reconstructed with a deterministic Total Variation regularization method, and then this binary reconstructed image is refined by a stochastic partial differential equation with singular diffusivity and a gradient dependent noise. This method is tested on a 256 × 256 experimental micro-CT trabecular bone image with different levels of additive Gaussian noise. The reconstructed images are clearly improved.},\n  keywords = {bone;computerised tomography;Gaussian noise;image reconstruction;medical image processing;partial differential equations;bone microstructure reconstructions;few projections;stochastic nonlinear diffusion;binary tomography cross-sections;binary image reconstruction;deterministic total variation regularization;stochastic partial differential equation;singular diffusivity;gradient dependent noise;microCT trabecular bone image;additive Gaussian noises;TV;Image reconstruction;Noise;Tomography;Equations;Optimization;Bones;X-ray imaging;TV regularization;binary tomography;bone microstructure},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925519.pdf},\n}\n\n
@InProceedings{6952666,\n  author = {P. Stoica and G. Tang and Z. Yang and D. Zachariah},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Gridless compressive-sensing methods for frequency estimation: Points of tangency and links to basics},\n  year = {2014},\n  pages = {1831-1835},\n  abstract = {The gridless compressive-sensing methods form the most recent class of approaches that have been proposed for estimating the frequencies of sinusoidal signals from noisy measurements. In this paper we review these methods with the main goal of providing new insights into the relationships between them and their links to the basic approach of nonlinear least squares (NLS). We show that a convex relaxation of penalized NLS leads to the atomic-norm minimization method. This method in turn can be approximated by a gridless version of the SPICE method, for which the dual problem is shown to be equivalent to the global matched filter method.},\n  keywords = {compressed sensing;frequency estimation;least squares approximations;matched filters;gridless compressive-sensing method;sinusoidal signal frequency estimation;noisy measurements;nonlinear least squares;NLS;convex relaxation;atomic-norm minimization method;SPICE method;matched filter method;Frequency estimation;SPICE;Estimation;Vectors;Minimization;Covariance matrices;Educational institutions;frequency estimation;sparse signal processing;covariance estimation},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
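As a reference point for the reviewed methods, the plain NLS estimator can be sketched with variable projection (amplitudes profiled out by least squares) and an FFT-peak initialization; the convex gridless programs discussed in the paper replace this non-convex search. The signal model and initialization details below are illustrative.

```python
# Hedged sketch: NLS frequency estimation via variable projection, with the
# frequencies initialized from FFT peaks and refined by Nelder-Mead.
import numpy as np
from scipy.optimize import minimize
from scipy.signal import find_peaks

rng = np.random.default_rng(7)
N = 64; t = np.arange(N)
f_true = np.array([0.21, 0.27])
y = (np.exp(2j*np.pi*f_true[0]*t) + 0.7*np.exp(2j*np.pi*f_true[1]*t)
     + 0.05*(rng.standard_normal(N) + 1j*rng.standard_normal(N)))

def nls_cost(f):
    A = np.exp(2j*np.pi*t[:, None]*f[None, :])
    amp, *_ = np.linalg.lstsq(A, y, rcond=None)   # profile out amplitudes
    return np.linalg.norm(y - A @ amp)**2

F = np.abs(np.fft.fft(y, 4*N))[:2*N]              # zero-padded spectrum
peaks, props = find_peaks(F, height=0)
top2 = peaks[np.argsort(props["peak_heights"])[-2:]]
f0 = np.sort(top2) / (4*N)                        # coarse grid initialization
res = minimize(nls_cost, f0, method="Nelder-Mead")
print(np.sort(res.x))                             # ~[0.21, 0.27]
```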
@InProceedings{6952667,\n  author = {K. Li and C. R. Rojas and S. Chatterjee and H. Hjalmarsson},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Piecewise Toeplitz matrices-based sensing for rank minimization},\n  year = {2014},\n  pages = {1836-1840},\n  abstract = {This paper proposes a set of piecewise Toeplitz matrices as the linear mapping/sensing operator A : R^{n1×n2} → R^{M} for recovering low-rank matrices from few measurements. We prove that such operators efficiently encode the information, so there exists a unique reconstruction matrix under mild assumptions. This work provides a significant extension of the compressed sensing and rank minimization theory; it reduces the memory required for storing the sampling operator from O(n1 n2 M) to O(max(n1, n2) M), at the expense of increasing the number of measurements by r. Simulation results show that the proposed operator can recover low-rank matrices efficiently, with reconstruction performance close to that of random unstructured operators.},\n  keywords = {minimisation;Toeplitz matrices;rank minimization theory;compressed sensing;matrix reconstruction;linear mapping-sensing operator;rank minimization;piecewise Toeplitz matrices;Sparse matrices;Matrix decomposition;Sensors;Compressed sensing;Minimization;Vectors;Indexes;Rank minimization;Toeplitz matrix;compressed sensing;coherence},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923131.pdf},\n}\n\n
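A sketch of the storage/apply pattern behind the memory claim: each measurement is an inner product with a Toeplitz block that is stored only through its generating vectors, i.e. O(max(n1, n2)) numbers per measurement instead of O(n1 n2). The exact piecewise construction and the recovery analysis are in the paper; this only illustrates the pattern.

```python
# Hedged sketch: Toeplitz-structured sensing of a low-rank matrix, with each
# sensing block stored via its first column/row only. Note that
# scipy.linalg.toeplitz ignores r[0] in favor of c[0].
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(8)
n1, n2, M = 20, 15, 10
# one Toeplitz block per measurement, defined by (first column, first row)
gens = [(rng.standard_normal(n1), rng.standard_normal(n2)) for _ in range(M)]

def sense(X):
    # y_i = <A_i, X> with A_i Toeplitz, materialized only block by block
    return np.array([np.sum(toeplitz(c, r) * X) for c, r in gens])

L = rng.standard_normal((n1, 2)) @ rng.standard_normal((2, n2))  # rank-2 matrix
print(sense(L))
```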
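A minimal numpy sketch of the storage saving quantified in this abstract: a Toeplitz block is fully determined by its first column and first row, so a Toeplitz-structured operator stores O(max(n1, n2) M) numbers instead of O(n1 n2 M) for a dense one. The block construction and measurement model below are generic illustrations, not the paper's exact piecewise construction.

import numpy as np

def toeplitz(c, r):
    # Dense len(c) x len(r) Toeplitz matrix from first column c and first
    # row r (here r[0] supplies the main diagonal).
    c, r = np.asarray(c), np.asarray(r)
    vals = np.concatenate((r[::-1], c[1:]))          # diagonal values by i - j
    idx = np.arange(len(c))[:, None] - np.arange(len(r))[None, :]
    return vals[idx + len(r) - 1]

rng = np.random.default_rng(0)
n1, n2, M = 32, 24, 10
# Each sensing matrix is stored by only n1 + n2 - 1 Toeplitz parameters.
params = [(rng.standard_normal(n1), rng.standard_normal(n2)) for _ in range(M)]
X = rng.standard_normal((n1, 3)) @ rng.standard_normal((3, n2))  # rank-3 target
y = np.array([np.sum(toeplitz(c, r) * X) for c, r in params])    # y_i = <A_i, X>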
@InProceedings{6952668,
  author = {M. Sundin and S. Chatterjee and M. Jansson},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Combined modeling of sparse and dense noise improves Bayesian RVM},
  year = {2014},
  pages = {1841-1845},
  abstract = {Using a Bayesian approach, we consider the problem of recovering sparse signals under additive sparse and dense noise. Typically, sparse noise models outliers, impulse bursts or data loss. To handle sparse noise, existing methods simultaneously estimate the sparse noise and the sparse signal of interest. For estimating the sparse signal, without estimating the sparse noise, we construct a Relevance Vector Machine (RVM). In the RVM, sparse noise and ever-present dense noise are treated through a combined noise model. Through simulations, we show the efficiency of the new RVM for three applications: kernel regression, housing price prediction and compressed sensing.},
  keywords = {belief networks;regression analysis;signal processing;dense noise;Bayesian RVM approach;sparse noise models outliers;relevance vector machine;combined noise model;kernel regression;housing price prediction;compressed sensing;Noise;Bayes methods;Kernel;Vectors;Standards;Compressed sensing;Equations;Robust regression;Bayesian learning;Relevance vector machine;Compressed sensing},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922443.pdf},
}
@InProceedings{6952669,
  author = {Y. Kopsinis and S. Chouvardas and S. Theodoridis},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Sparsity-aware learning in the context of echo cancelation: A set theoretic estimation approach},
  year = {2014},
  pages = {1846-1850},
  abstract = {In this paper, the set-theoretic adaptive filtering task is studied for the case where the input signal is nonstationary and may assume relatively small values. Such a scenario is often faced in practice, a notable application being echo cancellation. It turns out that very small input values can trigger undesirable behaviour of the algorithm, leading to severe performance fluctuations. The source of this malfunction is geometrically investigated and a solution complying with the set-theoretic philosophy is proposed. The new algorithm is evaluated in realistic echo-cancellation scenarios and compared with state-of-the-art methods for echo cancellation such as the IPNLMS and IPAPA algorithms.},
  keywords = {adaptive filters;echo suppression;set theory;sparsity-aware learning;echo cancellation;set theoretic estimation approach;set-theoretic based adaptive filtering task;IPAPA algorithm;IPNLMS algorithm;Vectors;Echo cancellers;Signal processing algorithms;Measurement;Projection algorithms;Noise;Adaptive filtering;APSM;Improved proportionate NLMS;echo cancellation},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925809.pdf},
}
@InProceedings{6952670,
  author = {L. Belmerhnia and E. Djermoune and D. Brie},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Greedy methods for simultaneous sparse approximation},
  year = {2014},
  pages = {1851-1855},
  abstract = {This paper extends greedy methods to simultaneous sparse approximation. This problem consists of finding good estimates of several input signals at once, using different linear combinations of a few elementary signals drawn from a fixed collection. Simultaneous versions are proposed for the CoSaMP, OLS and SBR sparse algorithms. These approaches are compared to Tropp's S-OMP algorithm using simulated signals. We show that in the case of signals exhibiting correlated components, the simultaneous versions of SBR and CoSaMP perform better than S-OMP and S-OLS.},
  keywords = {approximation theory;greedy algorithms;signal representation;sparse matrices;OLS algorithm;SBR algorithms;CoSaMP;elementary signals;linear combinations;simultaneous sparse approximation;greedy methods;Approximation methods;Signal to noise ratio;Sparse matrices;Approximation algorithms;Vectors;Dictionaries;Standards;Simultaneous sparse approximation;Greedy algorithms;Orthogonal Matching Pursuit},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924687.pdf},
}
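For orientation, a compact sketch of the S-OMP baseline this abstract compares against (following Tropp's simultaneous OMP, with an l1 aggregation of correlations across signals; an illustrative sketch assuming a column-normalized dictionary, not the authors' code):

import numpy as np

def somp(Y, D, k):
    # Simultaneous OMP: one shared support for all columns of Y.
    # D: (n, p) dictionary with unit-norm columns, Y: (n, L) signals.
    resid, support = Y.copy(), []
    for _ in range(k):
        corr = np.abs(D.T @ resid).sum(axis=1)     # total correlation per atom
        corr[support] = 0.0                        # do not reselect atoms
        support.append(int(np.argmax(corr)))
        Ds = D[:, support]
        X = np.linalg.lstsq(Ds, Y, rcond=None)[0]  # joint least-squares fit
        resid = Y - Ds @ X
    return support, X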
@InProceedings{6952671,
  author = {A. Modenini and F. Rusek and G. Colavolpe},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Faster-than-Nyquist signaling for next generation communication architectures},
  year = {2014},
  pages = {1856-1860},
  abstract = {We discuss a few promising applications of the faster-than-Nyquist (FTN) signaling technique. Although proposed in the mid-1970s, this technique is enjoying a new lease of life thanks to recent extensions. In particular, we discuss its applications to satellite systems for broadcasting transmissions, optical long-haul transmissions, and next-generation cellular systems, possibly equipped with a large scale antenna system (LSAS) at the base stations (BSs). Moreover, based on measurements with a 128-element antenna array, we analyze the spectral efficiency that can be achieved with simple receiver solutions in single carrier LSAS systems.},
  keywords = {antenna arrays;cellular radio;next generation networks;signal processing;faster-than-Nyquist signaling;next generation communication architectures;FTN signaling technique;broadcasting transmissions;optical long-haul transmissions;next-generation cellular systems;large scale antenna system;base stations;BS;antenna array;single carrier LSAS systems;Receivers;Time-frequency analysis;Bandwidth;Antenna measurements;Satellites;Satellite broadcasting;Arrays},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926467.pdf},
}
@InProceedings{6952672,
  author = {S. Tomasin and N. Benvenuto},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Fractionally spaced non-linear equalization of faster than Nyquist signals},
  year = {2014},
  pages = {1861-1865},
  abstract = {Faster than Nyquist transmissions provide the opportunity of increasing the data rate at the expense of additional inter-symbol interference. The optimum receiver requires a maximum likelihood sequence detector, whose complexity grows exponentially with the number of filter taps and with the number of bits per symbol. In this paper we consider two suboptimal approaches based on non-linear equalization of the received signal. In order to further reduce the receiver complexity we consider an implementation of the equalization filters in the frequency domain. The contributions of the paper are a) a receiver architecture for fractionally spaced non-linear equalizers, and b) efficient design methods for the equalization filters in the frequency domain. In particular, the derived optimal (in the mean square error sense) filters outperform approaches proposed in the literature.},
  keywords = {frequency-domain analysis;intersymbol interference;maximum likelihood detection;fractionally spaced nonlinear equalization;faster than Nyquist signals;inter-symbol interference;optimum receiver;maximum likelihood sequence detector;filter taps;receiver complexity;frequency domain;optimal filters;Discrete Fourier transforms;Receivers;Niobium;Interference;Complexity theory;Bit error rate;Equalizers},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925457.pdf},
}
@InProceedings{6952693,
  author = {A. João and J. Assunção and A. Silva and R. Dinis and A. Gameiro},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {IB-DFE SIC based receiver structure for IA-precoded MC-CDMA systems},
  year = {2014},
  pages = {1866-1870},
  abstract = {Interference alignment (IA) is a promising technique that allows high capacity gains in interfering channels. On the other hand, iterative frequency-domain detection receivers based on the IB-DFE concept (Iterative Block Decision Feedback Equalization) can efficiently exploit the inherent space-frequency diversity of the MIMO MC-CDMA systems. In this paper we design a joint iterative IA precoding at the transmitter with IB-DFE successive interference cancellation (SIC) based receiver structure for MC-CDMA systems. The receiver is designed in two steps: first a linear filter is used to mitigate the inter-user aligned interference, and then an iterative frequency-domain receiver is designed to efficiently separate the spatial streams in the presence of residual inter-user aligned interference at the output of the filter. Our scheme achieves the maximum degrees of freedom provided by the IA precoding, while allowing an almost optimum space-diversity gain, with performance close to the matched filter bound (MFB).},
  keywords = {code division multiple access;decision feedback equalisers;diversity reception;filtering theory;interference suppression;iterative decoding;matched filters;MIMO communication;radio receivers;radiofrequency interference;matched filter bound;optimum space-diversity gain;linear filter;interuser aligned interference mitigation;IB-DFE successive interference cancellation based receiver structure;joint iterative IA precoding;MIMO MC-CDMA systems;inherent space-frequency diversity;iterative block decision feedback equalization;iterative frequency-domain detection receivers;interfering channels;high capacity gains;interference alignment;IA-precoded MC-CDMA systems;IB-DFE SIC based receiver structure;Receivers;Interference;Equalizers;Multicarrier code division multiple access;Silicon carbide;MIMO;Transmitters;interference alignment;interference channels;iterative block equalization;MC-CDMA systems},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926233.pdf},
}
@InProceedings{6952694,
  author = {J. J. {García Fernández} and M. {Morales Céspedes} and M. {Sánchez Fernández} and A. G. Armada},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Improved interference aware precoding for cellular network-MIMO systems},
  year = {2014},
  pages = {1871-1874},
  abstract = {An interference aware precoding scheme based on limited channel state information at the transmitter (CSIT) is considered for its use in the downlink of a cellular system. The transmitter precoder used is based on an MMSE-ZF criterion in order to maximize the user rate while the interference to other users is reduced. The proposed scheme also exploits the network topology, so that each BS can categorize the users into two groups, according to the level of interference that the BS is introducing in those users. On the receiver end, each user makes use of the whole channel state information at the receiver (CSIR) by employing an MMSE filter. This approach enables a reduction in the complexity of the system, while improving the performance of the whole network.},
  keywords = {cellular radio;interference suppression;least mean squares methods;MIMO communication;precoding;radio receivers;radio transmitters;telecommunication network topology;wireless channels;interference aware precoding scheme;cellular network-MIMO system;limited channel state information at the transmitter precoder;CSIT;MMSE-ZF criterion;interference reduction;network topology;BS;channel state information at the receiver;CSIR;MMSE filter;system complexity reduction;multiple-input multiple-output system;minimum mean squared error;base station;zero forcing;Interference;Receivers;MIMO;Transmitters;Noise;Channel estimation;Channel state information;Multiple-Input Multiple-Output;Interference Aware;Channel State Information;MMSE;ZF},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926663.pdf},
}
@InProceedings{6952695,
  author = {L. G. Baltar and T. Laas and M. Newinger and A. Mezghani and J. A. Nossek},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Enhancing spectral efficiency in advanced multicarrier techniques: A challenge},
  year = {2014},
  pages = {1875-1879},
  abstract = {Advanced multicarrier systems, like the Offset-QAM filter bank based (OQAM-FBMC) ones, are gaining importance as candidates for the physical layer of the 5th generation of wireless communications. One of the main advantages of FBMC, when compared to traditional cyclic prefix based OFDM, is its higher spectral efficiency. However, this gain can be lost again if the problem of training based channel estimation is not tackled correctly. This is due to the memory introduced by the longer pulse shaping and the loss of orthogonality of overlapping subcarriers. In this paper we approach the problem of training based channel estimation for FBMC systems. We propose an iterative algorithm based on expectation maximization (EM) maximum likelihood (ML) estimation that reduces the overhead and consequently improves the spectral efficiency.},
  keywords = {4G mobile communication;channel bank filters;channel estimation;expectation-maximisation algorithm;OFDM modulation;quadrature amplitude modulation;expectation maximization maximum likelihood;iterative algorithm;training based channel estimation;cyclic prefix based OFDM;5G wireless communication;OQAM-FBMC systems;offset-QAM filter bank multicarrier;spectral efficiency;Training;Channel estimation;Broadband communication;Maximum likelihood estimation;Vectors;Convolution;OQAM;Filter Bank Multicarrier;Channel Estimation;ML estimation;Expectation Maximization},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921363.pdf},
}
@InProceedings{6952696,
  author = {J. M. D. Mendinueta and R. S. Luís and B. J. Puttnam and J. Sakaguchi and W. Klaus and Y. Awaji and N. Wada and A. Kanno and T. Kawanishi},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Digital signal processing techniques for multi-core fiber transmission using self-homodyne detection schemes},
  year = {2014},
  pages = {1880-1884},
  abstract = {We discuss digital signal processing (DSP) techniques for self-homodyne detection (SHD), multi-core fiber (MCF) transmission links, and related technologies. We focus on exploiting the reduced phase noise of self-homodyne multi-core fiber (SH-MCF) systems to enable DSP resource savings and describe digital receiver architectures that mix the signal and local oscillator in the digital domain.},
  keywords = {homodyne detection;interference suppression;optical fibre communication;optical links;optical receivers;oscillators;phase noise;digital signal processing technique;self-homodyne detection scheme;SHD scheme;multicore fiber transmission link;phase noise reduction;SH-MCF system transmission link;DSP resource saving;digital receiver architecture;local signal oscillator;digital domain;Signal to noise ratio;Optical noise;Joints;Digital signal processing;Abstracts;Optical fibers;Coherent optical communications;optical communications DSP;self-homodyne optical systems;multi-core fiber},
  issn = {2076-1465},
  month = {Sep.},
}
@InProceedings{6952697,
  author = {R. Noé and M. F. Panhwar and C. Wördehoff and D. Sandel},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Realtime digital signal processing in coherent optical PDM-QPSK and PDM-16-QAM transmission},
  year = {2014},
  pages = {1885-1889},
  abstract = {Coherent fiberoptic transmission with synchronous detection of 4, 8 or more bit/symbol enhances spectral efficiency. It relies on polarization-division multiplexing (PDM) and quadrature phase shift keying (QPSK) or higher-order quadrature amplitude modulation (QAM). A coherent polarization diversity, in-phase and quadrature receiver detects the optical field information. Its most important tasks are to recover the carrier in a laser phase noise tolerant manner and to control optical polarization electronically. We present suitable digital signal processing designs and their usage in realtime coherent transmission. We likewise discuss chromatic dispersion (CD) and polarization mode dispersion (PMD) equalization, needed for longer fibers.},
  keywords = {optical fibre communication;optical fibre polarisation;optical information processing;optical receivers;phase noise;quadrature amplitude modulation;quadrature phase shift keying;signal processing;wavelength division multiplexing;real-time digital signal processing;coherent optical PDM-QPSK transmission;PDM-16-QAM transmission;coherent fiber-optic transmission;synchronous detection;spectral efficiency;polarization-division multiplexing;quadrature phase shift keying;higher-order quadrature amplitude modulation;QAM;coherent polarization diversity;quadrature receiver;in-phase receiver;optical field information detection;laser phase noise;optical polarization control;digital signal processing designs;chromatic dispersion;CD;polarization mode dispersion;PMD equalization;Optical fiber polarization;Receivers;Quadrature amplitude modulation;Digital signal processing;Phase shift keying;Optical fiber dispersion;Digital signal processing;coherent fiberoptic transmission;polarization-division multiplexing;polarization control;equalization},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925583.pdf},
}
@InProceedings{6952698,
  author = {D. Zibar and O. Winther and R. Borkowski and I. T. Monroy and L. Carvalho and J. Oliveira},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Applications of expectation maximization algorithm for coherent optical communication},
  year = {2014},
  pages = {1890-1894},
  abstract = {In this invited paper, we present powerful statistical signal processing methods used by the machine learning community and link them to current problems in optical communication. In particular, we look into iterative maximum likelihood parameter estimation based on the expectation maximization algorithm and its application in coherent optical communication systems for linear and nonlinear impairment mitigation. Furthermore, the estimated parameters are used to build a probabilistic model of the system for synthetic impairment generation. It is shown numerically and experimentally that iterative parameter estimation based on the expectation maximization algorithm is a powerful tool in combating system impairments such as non-linear phase noise, in-phase and quadrature (I/Q) modulator imperfections and laser linewidth. We show experimentally that for a dispersion-managed polarization multiplexed 16-quadrature amplitude modulation (QAM) system at 14 Gbaud, a gain in the nonlinear system tolerance of up to 3 dB can be obtained. For a dispersion-unmanaged system, this gain reduces to 0.5 dB. Moreover, we show that joint estimation of carrier frequency, phase, signal means and noise covariance can be performed iteratively by employing expectation maximization. Using experimental data, we show that joint carrier synchronization and detection offers an improvement of 0.5 dB in terms of input power compared to hard-decision digital phase-locked loop (PLL) based carrier synchronization and demodulation.},
  keywords = {expectation-maximisation algorithm;maximum likelihood estimation;modulators;optical communication;phase locked loops;quadrature amplitude modulation;signal processing;synchronisation;expectation maximization algorithm;coherent optical communication;statistical signal processing method;machine learning;iterative maximum likelihood parameter estimation;linear impairment mitigation;nonlinear impairment mitigation;probabilistic model;synthetic impairment generation;nonlinear phase noise;inphase and quadrature modulator imperfections;laser linewidth;I-Q modulator;dispersion managed polarization multiplexed QAM system;16-quadrature amplitude modulation system;carrier frequency estimation;phase signal means estimation;noise covariance;joint carrier synchronization;joint carrier detection;digital phase-locked loop;PLL based carrier synchronization;PLL based demodulation;Optical modulation;Optical polarization;Optical sensors;Abstracts;Communities;optical communication;machine learning;expectation maximization;nonlinear impairments},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569914463.pdf},
}
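The E/M iteration this abstract builds on can be illustrated on a toy problem: fitting constellation-cluster means and a shared circular-Gaussian noise variance from received complex samples. This is a generic EM sketch under those assumptions, not the authors' full joint carrier/noise estimator.

import numpy as np

def em_constellation(y, mu0, n_iter=20):
    # y: received complex samples; mu0: initial constellation means.
    mu = np.asarray(mu0, dtype=complex).copy()
    var = np.mean(np.abs(y - y.mean())**2)         # crude noise initialisation
    for _ in range(n_iter):
        d2 = np.abs(y[:, None] - mu[None, :])**2
        # E-step: responsibilities under CN(mu_k, var), equal priors;
        # subtracting the row minimum keeps the exponentials stable.
        w = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / var)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted means and ML variance for circular Gaussian noise
        mu = (w * y[:, None]).sum(axis=0) / w.sum(axis=0)
        var = float((w * np.abs(y[:, None] - mu[None, :])**2).sum() / len(y))
    return mu, var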
@InProceedings{6952699,
  author = {R. Llorente and M. Morant},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Audio and video service provision in deep-access integrated optical-wireless networks},
  year = {2014},
  pages = {1895-1899},
  abstract = {Audio and video streaming can be provided in deep-access optical-wireless networks in a cost-effective way integrating the optical access, the optical in-building network and the wireless link at customer premises. Orthogonal frequency division multiplexing (OFDM) modulation is an interesting candidate for the integrated optical-wireless provision of this service and has been selected by most wireless communication standards due to its high spectral efficiency and bit rate capabilities combined with its robustness to transmission channel impairments and inter-symbol interference (ISI). In this paper, the successful transmission of commercially available OFDM-based signals following different wireless standards is demonstrated and the performance of the different digital signal processing algorithms implemented in their communication stacks is analyzed. Different optical transmission media and different OFDM transmission frequency bands are evaluated experimentally, including the 60 GHz band. The wireless range coverage after the integrated optical-wireless transmission is also reported from the experimental work.},
  keywords = {audio streaming;OFDM modulation;optical fibre networks;radio networks;video streaming;deep-access integrated optical-wireless networks;video streaming;audio streaming;orthogonal frequency division multiplexing modulation;wireless communication;intersymbol interference;ISI;OFDM-based signals;optical transmission media;integrated optical-wireless transmission;frequency 60 GHz;Optical fibers;OFDM;WiMAX;Integrated optics;Optical fiber networks;Audio and video;integrated optical-wireless;orthogonal frequency division multiplexing (OFDM)},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917233.pdf},
}
@InProceedings{6952700,
  author = {C. Aupetit-Berthelemot and T. Anfray and M. E. Chaibi and D. Erasme and G. Aubin and C. Kazmierski and P. Chanclou},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {OSSB-OFDM transmission performance using a dual electro absorption modulated laser in NG-PON context},
  year = {2014},
  pages = {1900-1904},
  abstract = {We report system simulation and experimental results on enhanced transmission distance over standard single mode fiber thanks to a novel dual modulation technique that generates a wideband optical single side band orthogonal frequency division multiplexing (OSSB-OFDM) signal using a low-cost, integrated, dual RF access electro-absorption modulated laser. We obtained, both experimentally and by simulation, a bit error rate (BER) lower than $10^{-3}$ at 11 Gb/s over up to 200 km in an amplified point-to-point configuration for an optical single side band discrete multi-tone (OSSB-DMT) signal. We also simulate conventional OFDM at 25 Gb/s in a point-to-multipoint architecture and show that the transmission reach can be extended to 55 km at a BER of $10^{-3}$ thanks to the new technique we have developed and implemented.},
  keywords = {electroabsorption;electro-optical modulation;error statistics;next generation networks;OFDM modulation;passive optical networks;radio links;next generation passive optical networks;NG-PON context;OSSB-OFDM transmission;standard single mode fiber;dual modulation;wideband optical single side band;orthogonal frequency division multiplexing;dual RF access electro-absorption modulated laser;bit error rate;BER;amplified point-to-point configuration;point-to-multipoint architecture;bit rate 25 Gbit/s;Amplitude modulation;OFDM;Optical modulation;Passive optical networks;Frequency modulation;Nonlinear optics;Optical Communications;Optical Sources},
  issn = {2076-1465},
  month = {Sep.},
}
@InProceedings{6952701,
  author = {S. O. Al-Jazzar and H. J. Strangeways and D. C. McLernon},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {2-D angle of arrival estimation using a one-dimensional antenna array},
  year = {2014},
  pages = {1905-1909},
  abstract = {In this paper, a two-dimensional (2-D) angle of arrival (AOA) estimator is presented for vertically polarised waves in which a one-dimensional (1-D) antenna array is used. Many 2-D AOA estimators were previously developed to estimate elevation and azimuth angles. These estimators require a 2-D antenna array setup such as L-shaped or parallel 1-D antenna arrays. In this paper a 2-D AOA estimator is presented which requires only a 1-D antenna array. The presented method is named Estimation of 2-D Angle of arrival using Reduced antenna array dimension (EAR). The EAR estimator utilises the antenna radiation pattern factor to reduce the required antenna array dimensionality. Thus, 2-D AOA estimation is possible using antenna arrays of reduced size with a minimum of only two elements, which is very beneficial in applications with size and space limitations. Simulation results are presented to show the performance of the presented method.},
  keywords = {antenna arrays;antenna radiation patterns;direction-of-arrival estimation;2-d angle of arrival estimation;one-dimensional antenna array;two-dimensional angle of arrival estimator;2-D AOA estimators;azimuth angles;2-D antenna array setup;1-D antenna array;reduced antenna array dimension;EAR estimator;antenna radiation pattern factor;antenna array dimensionality;2-D AOA estimation;Estimation;Arrays;Antenna arrays;Multiple signal classification;Azimuth;Ear;Antenna radiation patterns;2-D AOA Estimation;Statistical Signal Processing;Subspace Methods},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569908963.pdf},
}
@InProceedings{6952702,
  author = {N. H. Nguyen and K. Doğançay and L. M. Davis},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Adaptive waveform selection and target tracking by wideband multistatic radar/sonar systems},
  year = {2014},
  pages = {1910-1914},
  abstract = {An adaptive waveform selection algorithm for target tracking by multistatic radar/sonar systems in wideband environments is presented to minimize the tracking mean squared error. The proposed selection algorithm is developed based on the minimization of the trace of the error covariance matrix for the target state estimates (i.e. the target position and target velocity). This covariance matrix can be computed using the Cramér-Rao lower bounds of the wideband radar/sonar measurements. The performance advantage of the proposed adaptive waveform selection algorithm over conventional fixed waveforms with minimum and maximum time-bandwidth products is demonstrated by simulation examples using various FM waveform classes.},
  keywords = {adaptive signal processing;covariance matrices;mean square error methods;radar signal processing;sonar signal processing;target tracking;adaptive waveform selection algorithm;target tracking;wideband multistatic radar systems;wideband multistatic sonar systems;tracking mean squared error minimization;error covariance matrix trace minimization;target state estimates;target position;target velocity;Cramér-Rao lower bounds;minimum time-bandwidth products;maximum time-bandwidth products;Target tracking;Radar tracking;Wideband;Sonar;Multistatic radar;Receivers;adaptive waveform selection;wideband;multistatic;radar/sonar;target tracking},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921899.pdf},
}
@InProceedings{6952703,
  author = {Ö. T. Demir and T. E. Tuncer},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Alternating maximization algorithm for the broadcast beamforming},
  year = {2014},
  pages = {1915-1919},
  abstract = {Semidefinite relaxation (SDR) is a powerful approach to solve nonconvex optimization problems involving a rank condition. However, its performance becomes unacceptable in certain cases. In this paper, a nonconvex equivalent formulation without the rank condition is presented for the broadcast beamforming problem. This new formulation is exploited to obtain an alternating optimization method which is shown to converge to a locally optimal rank-one solution. The proposed method opens up new possibilities in different applications. Simulations show that the new method is very effective and can attain the global optimum, especially when the number of users is low.},
  keywords = {array signal processing;broadcast communication;concave programming;radiocommunication;alternating optimization method;nonconvex equivalent formulation;rank condition;nonconvex optimization problems;semidefinite relaxation;broadcast beamforming;alternating maximization algorithm;Array signal processing;Signal to noise ratio;Optimization;Vectors;Convergence;Arrays;Symmetric matrices;Transmit beamforming;multicast beamforming;semidefinite relaxation;convex optimization},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923169.pdf},
}
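For context, the SDR baseline this abstract refers to is, in its standard single-group multicast form (following Sidiropoulos, Davidson and Luo; the paper's exact formulation may differ), obtained by lifting $w$ to $W = w w^{H}$ and dropping the rank constraint:

\[
\min_{W \succeq 0}\ \operatorname{tr}(W)
\quad \text{s.t.} \quad
h_k^{H} W h_k \ge 1,\ \ k = 1,\dots,K,
\qquad \operatorname{rank}(W) = 1 \ \text{dropped}.
\]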
@InProceedings{6952704,
  author = {M. Chafii and J. Palicot and R. Gribonval},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Closed-form approximations of the PAPR distribution for Multi-Carrier Modulation systems},
  year = {2014},
  pages = {1920-1924},
  abstract = {The theoretical analysis of the Peak-to-Average Power Ratio (PAPR) distribution for an Orthogonal Frequency Division Multiplexing (OFDM) system depends on the particular waveform considered in the modulation system. In this paper, we generalize this analysis by considering the Generalized Waveforms for Multi-Carrier (GWMC) modulation system based on any family of modulation functions, and we derive a general approximate expression for the Cumulative Distribution Function (CDF) of its continuous- and discrete-time PAPR. These equations allow us to directly find the expressions of the PAPR distribution for any particular family of modulation functions, and they can be applied to control the PAPR performance by choosing the appropriate functions.},
  keywords = {OFDM modulation;statistical distributions;closed form approximation;PAPR distribution;multicarrier modulation systems;peak-to-average power ratio distribution;orthogonal frequency division multiplexing;OFDM system;generalized waveforms for multicarrier modulation;GWMC modulation;cumulative distribution function;continuous time papr;discrete time papr;Peak to average power ratio;Modulation;Approximation methods;Wavelet transforms;Frequency division multiplexing;Prototypes;Distribution;Peak-to-Average Power Ratio (PAPR);Orthogonal Frequency Division Multiplexing (OFDM);Generalized Waveforms for Multi-Carrier (GWMC);Multi-Carrier Modulation (MCM)},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923199.pdf},
}
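The kind of distribution this abstract derives can be sanity-checked numerically. The numpy sketch below estimates the empirical CCDF of the discrete-time PAPR of Nyquist-sampled QPSK-OFDM and compares it with the classical approximation $\Pr(\mathrm{PAPR} > \gamma) \approx 1 - (1 - e^{-\gamma})^{N}$; it does not reproduce the paper's generalized GWMC expressions.

import numpy as np

rng = np.random.default_rng(0)
n_sym, N = 20_000, 64                        # OFDM symbols, subcarriers
X = np.exp(1j * (np.pi/2 * rng.integers(0, 4, (n_sym, N)) + np.pi/4))  # QPSK
x = np.fft.ifft(X, axis=1)                   # Nyquist-rate time-domain samples
papr = np.abs(x).max(axis=1)**2 / np.mean(np.abs(x)**2, axis=1)
for g_db in (6.0, 8.0, 10.0):
    g = 10**(g_db / 10)                      # threshold in linear scale
    emp = (papr > g).mean()                  # empirical CCDF
    approx = 1 - (1 - np.exp(-g))**N         # classical i.i.d. approximation
    print(f'{g_db:4.1f} dB  empirical {emp:.4g}  approx {approx:.4g}')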
@InProceedings{6952705,
  author = {R. Stanton and M. Brookes},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Path uncertainty robust beamforming},
  year = {2014},
  pages = {1925-1929},
  abstract = {Conventional beamformer design assumes that the phase differences between the received sensor signals are a deterministic function of the array and source geometry. In fact, however, these phase differences are subject to random variations arising both from source and sensor position uncertainties and from fluctuations in sound velocity. We present a framework for modelling these uncertainties and show that improved beamformers are obtained when they are taken into account.},
  keywords = {acoustic signal processing;array signal processing;sound velocity;sensor position uncertainty;source geometry;deterministic function;sensor signals;phase differences;path uncertainty robust beamforming;Robustness;Arrays;Uncertainty;Signal to noise ratio;Correlation;Array signal processing;Vectors;robust beamforming;distributed array;SNR beamformer;steering vector mismatch},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923701.pdf},
}
\n
\n\n\n
\n Conventional beamformer design assumes that the phase differences between the received sensor signals are a deterministic function of the array and source geometry. In fact, however, these phase differences are subject to random variations arising both from source and sensor position uncertainties and from fluctuations in sound velocity. We present a framework for modelling these uncertainties and show that improved beamformers are obtained when they are taken into account.\n
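A minimal Monte-Carlo sketch of the phenomenon being modelled (array size and jitter levels are invented; this is not the paper's framework): per-sensor Gaussian phase errors of standard deviation σ shrink the mean power gain of a delay-and-sum beamformer toward e^(−σ²) + (1 − e^(−σ²))/M.

```python
# Minimal sketch: sensitivity of delay-and-sum beamforming to random
# per-sensor phase errors (e.g. from position or sound-speed uncertainty).
import numpy as np

rng = np.random.default_rng(1)
M = 16                                    # number of sensors (illustrative)
for sigma in (0.0, 0.3, 0.6, 1.0):        # phase-error std dev in radians
    phi = rng.normal(0.0, sigma, (10000, M))
    gain = np.abs(np.exp(1j * phi).mean(axis=1)) ** 2   # realised power gain
    theory = np.exp(-sigma ** 2) + (1 - np.exp(-sigma ** 2)) / M
    print(f"sigma={sigma:.1f}: mean gain {gain.mean():.3f} (theory {theory:.3f})")
```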
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust DOA estimation of harmonic signals using constrained filters on phase estimates.\n \n \n \n \n\n\n \n Karimian-Azari, S.; Jensen, J. R.; and Christensen, M. G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1930-1934, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952706,\n  author = {S. Karimian-Azari and J. R. Jensen and M. G. Christensen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Robust DOA estimation of harmonic signals using constrained filters on phase estimates},\n  year = {2014},\n  pages = {1930-1934},\n  abstract = {In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from the time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using a linear array and harmonic constraints, we design optimal filters based on estimated noise statistics. Therefore, the proposed method is robust against different noise scenarios. In colored noise, simulation results confirm that the proposed method outperforms an optimal state-of-the-art weighted least-squares (WLS) DOA estimator.},\n  keywords = {array signal processing;direction-of-arrival estimation;harmonic analysis;least squares approximations;phase estimation;constrained filters;array signal processing;direction of arrival estimation;time-difference of arrival;harmonic signal source;multichannel phase estimation;narrowband TDOA estimation;linear array;harmonic constraints;optimal filter design;noise statistics;colored noise;weighted least-squares DOA estimator;WLS DOA estimator;Direction-of-arrival estimation;Harmonic analysis;Estimation;Microphones;Arrays;Signal to noise ratio;Audio signal;harmonic model;direction of arrival (DOA);time-difference of arrival (TDOA)},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925155.pdf},\n}\n\n
\n
\n\n\n
\n In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from the time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using a linear array and harmonic constraints, we design optimal filters based on estimated noise statistics. Therefore, the proposed method is robust against different noise scenarios. In colored noise, simulation results confirm that the proposed method outperforms an optimal state-of-the-art weighted least-squares (WLS) DOA estimator.\n
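A minimal two-microphone sketch of the underlying idea (a generic least-squares fit over harmonic phase differences, not the paper's minimum-variance constrained filters; geometry, pitch and sampling rate are invented):

```python
# Minimal sketch: DOA of a harmonic source from per-harmonic phase
# differences between two microphones, via least squares over harmonics.
import numpy as np

c, d, f0, fs, n = 343.0, 0.04, 200.0, 16000, 1600   # speed, spacing, pitch, rate, samples
theta = np.deg2rad(35.0)
tau = d * np.sin(theta) / c                         # true inter-microphone delay

t = np.arange(n) / fs
freqs = f0 * np.arange(1, 6)                        # first five harmonics
x1 = sum(np.cos(2 * np.pi * f * t) for f in freqs)
x2 = sum(np.cos(2 * np.pi * f * (t - tau)) for f in freqs)

X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
bins = np.round(freqs * n / fs).astype(int)         # exact DFT bins of the harmonics
dphi = np.angle(X1[bins] * np.conj(X2[bins]))       # per-harmonic phase difference = 2*pi*f*tau

w = 2 * np.pi * freqs
tau_hat = np.dot(w, dphi) / np.dot(w, w)            # least-squares fit of dphi = w * tau
print(np.rad2deg(np.arcsin(np.clip(c * tau_hat / d, -1.0, 1.0))))   # ~35 degrees
```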
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A unifying approach to minimal problems in collinear and planar TDOA sensor network self-calibration.\n \n \n \n \n\n\n \n Ask, E.; Kuang, Y.; and Åström, K.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1935-1939, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952707,\n  author = {E. Ask and Y. Kuang and K. Åström},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A unifying approach to minimal problems in collinear and planar TDOA sensor network self-calibration},\n  year = {2014},\n  pages = {1935-1939},\n  abstract = {This work presents a study of sensor network calibration from time-difference-of-arrival (TDOA) measurements for cases when the dimensions spanned by the receivers and the transmitters differ. This could for example be if receivers are restricted to a line or plane or if the transmitting objects are moving linearly in space. Such calibration arises in several applications such as calibration of (acoustic or ultra-sound) microphone arrays, and radio antenna networks. We propose a non-iterative algorithm based on recent stratified approaches: (i) rank constraints on modified measurement matrix, (ii) factorization techniques that determine transmitters and receivers up to unknown affine transformation and (iii) determining the affine stratification using remaining non-linear constraints. This results in a unified approach to solve almost all minimal problems. Such algorithms are important components for systems for self-localization. Experiments are shown both for simulated and real data with promising results.},\n  keywords = {antennas;calibration;microphone arrays;sensors;self-localization;minimal problems;affine stratification;factorization techniques;rank constraints;noniterative algorithm;radio antenna networks;microphone arrays;transmitters;receivers;time-difference-of-arrival measurements;self-calibration;planar TDOA sensor network;collinear TDOA sensor network;unifying approach;Receivers;Microphones;Equations;Transmitters;Calibration;Synchronization;Transmission line matrix methods;Time-difference-of-arrival;anchor-free calibration;sensor networks},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925215.pdf},\n}\n\n
\n
\n\n\n
\n This work presents a study of sensor network calibration from time-difference-of-arrival (TDOA) measurements for cases when the dimensions spanned by the receivers and the transmitters differ. This is the case, for example, when the receivers are restricted to a line or plane, or when the transmitting objects move linearly in space. Such calibration arises in several applications, such as the calibration of (acoustic or ultrasound) microphone arrays and radio antenna networks. We propose a non-iterative algorithm based on recent stratified approaches: (i) rank constraints on a modified measurement matrix, (ii) factorization techniques that determine transmitters and receivers up to an unknown affine transformation, and (iii) determination of the affine stratification using the remaining non-linear constraints. This results in a unified approach to solving almost all minimal problems. Such algorithms are important components of systems for self-localization. Experiments are shown both for simulated and real data, with promising results.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Space-time signal subspace estimation for wide-band acoustic arrays.\n \n \n \n \n\n\n \n Di Claudio, E. D.; and Jacovitti, G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1940-1944, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Space-timePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952708,\n  author = {E. D. {Di Claudio} and G. Jacovitti},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Space-time signal subspace estimation for wide-band acoustic arrays},\n  year = {2014},\n  pages = {1940-1944},\n  abstract = {Acoustic array applications are generally characterized by very large signal bandwidth. Most existing wide-band direction of arrival (DOA) estimators are based on binning in the frequency domain, so that within each bin the signal model is considered approximately narrow-band. In this work the basic inconsistency of the commonly used binning is first shown. It is shown that the recent Space Time MUSIC (ST-MUSIC) method, which estimates a set of narrow-band signal subspaces directly from the space-time array covariance and combines them within a Weighted Subspace Fitting paradigm, can restore wide-band DOA estimation consistency in most scenarios, obtaining a large variance improvement at high signal to noise ratio (SNR). In addition, a refined ST-MUSIC subspace weighting is proposed to improve accuracy, especially at low SNR.},\n  keywords = {acoustic signal processing;direction-of-arrival estimation;space time signal subspace estimation;wide band acoustic arrays;acoustic array applications;signal bandwidth;direction of arrival;DOA estimators;frequency domain;signal model;space time MUSIC;ST-MUSIC method;narrow-band signal subspaces;weighted subspace fitting paradigm;signal to noise ratio;SNR;Direction-of-arrival estimation;Signal to noise ratio;Estimation;Vectors;Acoustics;Acoustic array;Wide-band direction finding;Weighted Subspace Fitting;ST-MUSIC;UWB communications},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925315.pdf},\n}\n\n
\n
\n\n\n
\n Acoustic array applications are generally characterized by very large signal bandwidth. Most existing wide-band direction of arrival (DOA) estimators are based on binning in the frequency domain, so that within each bin the signal model is considered approximately narrow-band. In this work the basic inconsistency of the commonly used binning is first shown. It is shown that the recent Space Time MUSIC (ST-MUSIC) method, which estimates a set of narrow-band signal subspaces directly from the space-time array covariance and combines them within a Weighted Subspace Fitting paradigm, can restore wide-band DOA estimation consistency in most scenarios, obtaining a large variance improvement at high signal to noise ratio (SNR). In addition, a refined ST-MUSIC subspace weighting is proposed to improve accuracy, especially at low SNR.\n
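For context, a minimal sketch of classical narrowband MUSIC on a uniform linear array is given below (ST-MUSIC additionally extracts the narrowband subspaces from the space-time covariance and fuses them via Weighted Subspace Fitting; all parameters here are illustrative):

```python
# Minimal sketch: classical narrowband MUSIC on a ULA with two sources.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
M, N, d = 8, 400, 0.5                       # sensors, snapshots, spacing/wavelength
doas = np.deg2rad([-20.0, 25.0])

A = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(doas)))
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

R = X @ X.conj().T / N
vals, vecs = np.linalg.eigh(R)              # ascending eigenvalues
En = vecs[:, : M - 2]                       # noise subspace (M minus #sources)

grid = np.deg2rad(np.arange(-90, 90.5, 0.5))
Ag = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(grid)))
p = 1.0 / np.sum(np.abs(En.conj().T @ Ag) ** 2, axis=0)   # MUSIC pseudo-spectrum

pk, _ = find_peaks(p)
print(np.rad2deg(grid[pk[np.argsort(p[pk])[-2:]]]))       # two strongest peaks
```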
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Informed separation of dependent sources using joint matrix decomposition.\n \n \n \n \n\n\n \n Boudjellal, A.; Abed-Meraim, K.; Belouchrani, A.; and Ravier, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1945-1949, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"InformedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952709,\n  author = {A. Boudjellal and K. Abed-Meraim and A. Belouchrani and P. Ravier},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Informed separation of dependent sources using joint matrix decomposition},\n  year = {2014},\n  pages = {1945-1949},\n  abstract = {This paper deals with the separation problem of dependent sources. The separation is made possible thanks to side information on the dependence nature of the considered sources. In this work, we first show how this side information can be used to achieve desired source separation using joint matrix decomposition techniques. Indeed, in the case of statistically independent sources, many BSS methods are based on joint matrix diagonalization. In our case, we replace the target diagonal structure by an appropriate non-diagonal one which reflects the dependence nature of the sources. This new concept is illustrated with two simple 2×2 source separation examples where second-order and high-order statistics are used, respectively.},\n  keywords = {blind source separation;matrix decomposition;statistical analysis;dependent source informed separation;joint matrix decomposition techniques;statistically independent sources;BSS methods;joint matrix diagonalization;target diagonal structure;second-order-statistics;high-order-statistics;blind source separation method;Matrix decomposition;Joints;Covariance matrices;Source separation;Data models;Technological innovation;Signal processing algorithms;Informed Source Separation;Dependent Source Separation;Matrix Joint Decomposition;Alternating Least Squares;Second-order and High-order-Statistics},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925353.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the separation problem of dependent sources. The separation is made possible thanks to side information on the dependence nature of the considered sources. In this work, we first show how this side information can be used to achieve desired source separation using joint matrix decomposition techniques. Indeed, in the case of statistically independent sources, many BSS methods are based on joint matrix diagonalization. In our case, we replace the target diagonal structure by an appropriate non-diagonal one which reflects the dependence nature of the sources. This new concept is illustrated with two simple 2×2 source separation examples where second-order and high-order statistics are used, respectively.\n
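A minimal sketch of the independent-source baseline that the paper modifies (an AMUSE-style joint decomposition of the zero-lag and one lagged covariance; the paper replaces the diagonal target with a structured non-diagonal one for dependent sources):

```python
# Minimal sketch: AMUSE-style separation via whitening plus diagonalisation
# of one lagged covariance, for statistically independent sources.
import numpy as np

rng = np.random.default_rng(3)
n, tau = 20000, 25
s = np.vstack([np.sin(0.05 * np.arange(n)),                  # two sources with
               np.sign(np.sin(0.013 * np.arange(n)))])       # distinct lagged stats
x = rng.normal(size=(2, 2)) @ s                              # unknown mixing

x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(x @ x.T / n)
z = (E @ np.diag(d ** -0.5) @ E.T) @ x                       # whitened mixtures

R = z[:, tau:] @ z[:, :-tau].T / (n - tau)                   # lagged covariance
_, U = np.linalg.eigh((R + R.T) / 2)                         # rotation diagonalising R
y = U.T @ z                                                  # recovered sources
print(np.abs(np.corrcoef(np.vstack([y, s]))[:2, 2:]).round(2))  # ~ permutation matrix
```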
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Range-doppler radar target detection using denoising within the compressive sensing framework.\n \n \n \n \n\n\n \n Akin Sevimli, R.; Tofighi, M.; and Cetin, A. E.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1950-1954, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Range-dopplerPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952710,\n  author = {R. {Akin Sevimli} and M. Tofighi and A. E. Cetin},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Range-doppler radar target detection using denoising within the compressive sensing framework},\n  year = {2014},\n  pages = {1950-1954},\n  abstract = {The compressive sensing (CS) idea enables the reconstruction of a sparse signal from a small set of measurements. The CS approach has applications in many practical areas. One of these areas is radar systems. In this article, the radar ambiguity function is denoised within the CS framework. A new denoising method based on projection onto the epigraph set of a convex function is also developed for this purpose. This approach is compared to other CS reconstruction algorithms. Experimental results are presented.},\n  keywords = {Doppler radar;graph theory;object detection;radar signal processing;set theory;signal denoising;signal reconstruction;range-Doppler radar target detection;radar ambiguity function;epigraph set;convex function;CS algorithms;sparse signal reconstruction;denoising method;compressive sensing framework;Vectors;Noise reduction;Compressed sensing;Signal processing algorithms;Matching pursuit algorithms;Radar imaging;Compressive Sensing;Ambiguity Function;Radar Signal Processing;Denoising},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925867.pdf},\n}\n\n
\n
\n\n\n
\n The compressive sensing (CS) idea enables the reconstruction of a sparse signal from a small set of measurements. The CS approach has applications in many practical areas. One of these areas is radar systems. In this article, the radar ambiguity function is denoised within the CS framework. A new denoising method based on projection onto the epigraph set of a convex function is also developed for this purpose. This approach is compared to other CS reconstruction algorithms. Experimental results are presented.\n
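A minimal sketch of generic CS-style sparse recovery by iterative soft thresholding (ISTA), for orientation only; it is not the paper's epigraph-projection denoiser, and all sizes are invented:

```python
# Minimal sketch: recover a k-sparse vector from m < n noisy random
# measurements with ISTA (proximal gradient for the LASSO objective).
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 256, 80, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)        # measurement matrix
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=m)

lam = 0.02
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    x = x + A.T @ (y - A @ x) / L               # gradient step on 0.5*||y - Ax||^2
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
print(np.linalg.norm(x - x0) / np.linalg.norm(x0))          # small relative error
```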
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n 3-D array configuration using multiple regular tetrahedra for high-resolution 2-D DOA estimation.\n \n \n \n \n\n\n \n Doi, Y.; Ichige, K.; Arai, H.; Matsuno, H.; and Nakano, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1955-1959, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"3-DPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952711,\n  author = {Y. Doi and K. Ichige and H. Arai and H. Matsuno and M. Nakano},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {3-D array configuration using multiple regular tetrahedra for high-resolution 2-D DOA estimation},\n  year = {2014},\n  pages = {1955-1959},\n  abstract = {This paper presents a novel 3-D array configuration using multiple regular tetrahedra which enables high-resolution 2-D DOA estimation. The proposed array configuration has better DOA estimation performance than the conventional 3-D array configuration for uncorrelated waves, and can be rearranged into a cuboid array configuration which can estimate the DOAs of correlated waves. Performance of the proposed 3-D array configuration is evaluated through a computer simulation.},\n  keywords = {array signal processing;direction-of-arrival estimation;3d array configuration;multiple regular tetrahedra;high-resolution 2-d DOA estimation;DOA estimation performance;uncorrelated waves;cuboid array configuration;computer simulation;Signal to noise ratio;Estimation;Laboratories;Abstracts;Correlation;Signal resolution;Three-dimensional displays;direction of arrival estimation;array antenna;array signal processing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926907.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a novel 3-D array configuration using multiple regular tetrahedra which enables high-resolution 2-D DOA estimation. The proposed array configuration has better DOA estimation performance than the conventional 3-D array configuration for uncorrelated waves, and can be rearranged into a cuboid array configuration which can estimate the DOAs of correlated waves. Performance of the proposed 3-D array configuration is evaluated through a computer simulation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimal adaptive transmit beamforming for cognitive MIMO sonar in a shallow water waveguide.\n \n \n \n \n\n\n \n Sharaga, N.; and Tabrikian, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1960-1964, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OptimalPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952712,\n  author = {N. Sharaga and J. Tabrikian},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Optimal adaptive transmit beamforming for cognitive MIMO sonar in a shallow water waveguide},\n  year = {2014},\n  pages = {1960-1964},\n  abstract = {This paper addresses the problem of adaptive beamforming for target localization by active cognitive multiple-input multiple-output (MIMO) sonar in a shallow water waveguide. Recently, a sequential waveform design approach for estimation of parameters of a linear system was proposed. In this approach, at each step, the transmit beampattern is determined based on previous observations. The criterion used for waveform design is the Bayesian Cramér-Rao bound (BCRB) for estimation of the unknown system parameters. In this paper, this method is used for target localization in a shallow water waveguide, and it is extended to account for environmental uncertainties which are typical to underwater acoustic environments. The simulations show the sensitivity of the localization performance of the method at different environmental prior uncertainties.},\n  keywords = {array signal processing;cognitive radio;MIMO communication;parameter estimation;sonar;optimal adaptive transmit beamforming;cognitive MIMO sonar;shallow water waveguide;adaptive beamforming;target localization;cognitive multiple-input multiple-output sonar;sequential waveform design;parameters estimation;linear system;Bayesian Cramér-Rao bound;underwater acoustic environments;Uncertainty;Array signal processing;Vectors;MIMO;Transmission line matrix methods;Adaptation models;Optimization;MIMO sonar;cognitive sonar;sequential waveform design;adaptive beamforming;underwater acoustics},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569927075.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of adaptive beamforming for target localization by active cognitive multiple-input multiple-output (MIMO) sonar in a shallow water waveguide. Recently, a sequential waveform design approach for estimation of the parameters of a linear system was proposed. In this approach, at each step, the transmit beampattern is determined based on previous observations. The criterion used for waveform design is the Bayesian Cramér-Rao bound (BCRB) for estimation of the unknown system parameters. In this paper, this method is used for target localization in a shallow water waveguide, and it is extended to account for environmental uncertainties which are typical of underwater acoustic environments. The simulations show the sensitivity of the localization performance of the method under different environmental prior uncertainties.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A lower bound for passive sensor-network auto-localization.\n \n \n \n\n\n \n Vincent, R.; Carmona, M.; Michel, O.; and Lacoume, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1965-1969, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952733,\n  author = {R. Vincent and M. Carmona and O. Michel and J. Lacoume},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A lower bound for passive sensor-network auto-localization},\n  year = {2014},\n  pages = {1965-1969},\n  abstract = {In this paper, a lower bound for the estimation of inter-sensor propagation delays using sources of opportunity is presented. This approach is referred to as passive identification. It relies on the Ward identity, which is extended to the case of non-white sources. Performance is studied in the case of a homogeneous, non-dispersive, linear, time-invariant wave propagation medium, under the assumption that many independent sources impinge on the sensor array.},\n  keywords = {geometry;radiowave propagation;wireless sensor networks;passive sensor-network autolocalization;intersensor propagation delay estimation;passive identification;ward identity;homogeneous nondispersive linear wave propagation medium;homogeneous time invariant wave propagation medium;variance lower bound;geometry estimation;Green's function methods;Estimation;Correlation;Noise;Bandwidth;Propagation delay;Mathematical model;Passive sensor network autolocalization;variance lower bound;Ward identity},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n In this paper, a lower bound for the estimation of inter-sensor propagation delays using sources of opportunity is presented. This approach is referred to as passive identification. It relies on the Ward identity, which is extended to the case of non-white sources. Performance is studied in the case of a homogeneous, non-dispersive, linear, time-invariant wave propagation medium, under the assumption that many independent sources impinge on the sensor array.\n
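A minimal numerical illustration of the underlying principle (a single broadband source of opportunity stands in for the many-source assumption; this is not the paper's Ward-identity estimator): the cross-correlation of two sensor signals peaks at the inter-sensor propagation delay.

```python
# Minimal sketch: passive inter-sensor delay estimation from a source of
# opportunity via FFT-based (circular) cross-correlation.
import numpy as np

rng = np.random.default_rng(5)
n, delay = 2 ** 16, 37                                 # delay in samples (invented)
src = rng.normal(size=n)                               # broadband opportunistic source
s1 = src + 0.2 * rng.normal(size=n)
s2 = np.roll(src, delay) + 0.2 * rng.normal(size=n)    # delayed propagation path

xc = np.fft.ifft(np.fft.fft(s2) * np.conj(np.fft.fft(s1))).real
lag = int(np.argmax(xc))
print(lag if lag < n // 2 else lag - n)                # -> 37
```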
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interval-based localization using sensors mobility and fingerprints in decentralized sensor networks.\n \n \n \n \n\n\n \n Lv, X.; Mourad-Chehade, F.; and Snoussi, H.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1970-1974, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Interval-basedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952734,\n  author = {X. Lv and F. Mourad-Chehade and H. Snoussi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Interval-based localization using sensors mobility and fingerprints in decentralized sensor networks},\n  year = {2014},\n  pages = {1970-1974},\n  abstract = {This paper is focused on the decentralized localization problem of mobile sensors in wireless sensor networks. Based on a combined localization technique, it uses accelerometer, gyroscope and fingerprinting information to solve the positioning issue. Using the sensors mobility, the proposed method computes first estimates of sensors positions. It then proceeds to a decentralized localization scheme, where the network is divided to different zones. RSSIs fingerprints are jointly used with mobility information in order to compute position estimates. Final position estimates are obtained by means of interval analysis where all uncertainties are considered throughout the estimation process.},\n  keywords = {accelerometers;Global Positioning System;gyroscopes;wireless sensor networks;mobile sensors;wireless sensor networks;accelerometer;gyroscope;fingerprinting information;sensors positions;decentralized localization scheme;interval analysis;Fingerprint recognition;Abstracts;Three-dimensional displays;Robots;Heuristic algorithms;Algorithm design and analysis;Fingerprints;interval analysis;localization;mobility;wireless sensor networks},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922343.pdf},\n}\n\n
\n
\n\n\n
\n This paper focuses on the decentralized localization problem of mobile sensors in wireless sensor networks. Based on a combined localization technique, it uses accelerometer, gyroscope and fingerprinting information to solve the positioning issue. Using the sensors' mobility, the proposed method computes initial estimates of the sensors' positions. It then proceeds to a decentralized localization scheme, where the network is divided into different zones. RSSI fingerprints are jointly used with mobility information in order to compute position estimates. Final position estimates are obtained by means of interval analysis, where all uncertainties are considered throughout the estimation process.\n
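A minimal sketch of the interval-analysis step (hypothetical boxes; the paper's mobility and fingerprint models are omitted): each measurement constrains the position to an axis-aligned box, and the estimate is the intersection of the boxes.

```python
# Minimal sketch: position estimation as the intersection of 2-D interval
# boxes contributed by different measurements.
import numpy as np

def intersect(boxes):
    """boxes: iterable of [xmin, xmax, ymin, ymax]; returns the intersection."""
    b = np.asarray(boxes, dtype=float)
    out = [b[:, 0].max(), b[:, 1].min(), b[:, 2].max(), b[:, 3].min()]
    if out[0] > out[1] or out[2] > out[3]:
        raise ValueError("empty intersection: inconsistent measurements")
    return out

# e.g. one box from mobility dead-reckoning, two from RSSI fingerprint zones
boxes = [[0.0, 4.0, 1.0, 6.0],
         [2.5, 7.0, 0.0, 5.0],
         [1.0, 5.0, 2.0, 8.0]]
box = intersect(boxes)
center = [(box[0] + box[1]) / 2, (box[2] + box[3]) / 2]
print(box, center)        # [2.5, 4.0, 2.0, 5.0] and its midpoint
```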
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reconstruction technique of fluorescent x-ray computed tomography using sheet beam.\n \n \n \n \n\n\n \n Nakamura, S.; Huo, Q.; and Yuasa, T.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1975-1979, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ReconstructionPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952735,\n  author = {S. Nakamura and Q. Huo and T. Yuasa},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Reconstruction technique of fluorescent x-ray computed tomography using sheet beam},\n  year = {2014},\n  pages = {1975-1979},\n  abstract = {We clarify the measurement process of fluorescent x-ray computed tomography (FXCT) using sheet-beam as incident beam, and show that the process leads to the attenuated Radon transform. In order to improve quantitativeness, we apply Natterer's scheme to the FXCT reconstruction. We show its efficacy by computer simulation.},\n  keywords = {computerised tomography;Radon transforms;X-ray microscopy;fluorescent X-ray computed tomography;sheet beam;incident beam;Radon transform;Natterer's scheme;computer simulation;Fluorescence;X-ray imaging;Computed tomography;Detectors;Image reconstruction;Attenuation;Fluorescent x-ray;computed tomography;quantitativeness;reconstruction;attenuated Radon transform},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923915.pdf},\n}\n\n
\n
\n\n\n
\n We clarify the measurement process of fluorescent x-ray computed tomography (FXCT) using a sheet beam as the incident beam, and show that the process leads to the attenuated Radon transform. In order to improve quantitativeness, we apply Natterer's scheme to the FXCT reconstruction. We show its efficacy by computer simulation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Electromyogram signal enhancement in FMRI noise using spectral subtraction.\n \n \n \n \n\n\n \n Ben Jebara, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1980-1984, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ElectromyogramPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952736,\n  author = {S. {Ben Jebara}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Electromyogram signal enhancement in FMRI noise using spectral subtraction},\n  year = {2014},\n  pages = {1980-1984},\n  abstract = {This paper deals with noise removal in ElectroMyoGram (EMG) signals acquired in the hostile noisy environment of functional Magnetic Resonance Imaging (fMRI). The noise due to magnetic fields and radio frequencies significantly corrupts the EMG signal, which renders its extraction very difficult. The proposed approach operates in the frequency domain, estimating the noise spectrum in order to subtract it from the noisy observation spectrum. The noise estimation is based on spectral minima tracking in each frequency bin, without any distinction between muscle activity and muscle rest. It does, however, look for connected time-frequency regions of muscle activity to estimate a bias compensation factor. The method is tested with a simulated noisy observation in order to evaluate its performance using objective criteria. It is also validated on real noisy observations where no clean signal is available.},\n  keywords = {biomedical MRI;electromyography;magnetic fields;medical signal processing;time-frequency analysis;electromyogram signal enhancement;FMRI noise;spectral subtraction;noise removal;EMG signals;noisy environment;functional magnetic resonance imaging;magnetic fields;radio frequencies;frequency domain analysis;noise estimation;spectral minima tracking;muscle activity;time-frequency regions;Noise;Electromyography;Muscles;Noise measurement;Noise reduction;Estimation;Magnetic resonance imaging;fMRI noise;EMG signal;denoising;spectral subtraction;noise spectrum estimation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923957.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with noise removal in ElectroMyoGram (EMG) signals acquired in the hostile noisy environment of functional Magnetic Resonance Imaging (fMRI). The noise due to magnetic fields and radio frequencies significantly corrupts the EMG signal, which renders its extraction very difficult. The proposed approach operates in the frequency domain, estimating the noise spectrum in order to subtract it from the noisy observation spectrum. The noise estimation is based on spectral minima tracking in each frequency bin, without any distinction between muscle activity and muscle rest. It does, however, look for connected time-frequency regions of muscle activity to estimate a bias compensation factor. The method is tested with a simulated noisy observation in order to evaluate its performance using objective criteria. It is also validated on real noisy observations where no clean signal is available.\n
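A minimal sketch of spectral subtraction driven by per-bin spectral-minima noise tracking (generic; the window length, tracking horizon and oversubtraction factor are invented, and the paper's bias-compensation step over connected time-frequency regions is omitted):

```python
# Minimal sketch: STFT-domain spectral subtraction with a running-minimum
# noise tracker per frequency bin.
import numpy as np
from scipy.signal import stft, istft

def denoise(x, fs, win=256, min_track=50, alpha=2.0, floor=0.05):
    f, t, X = stft(x, fs, nperseg=win)
    P = np.abs(X) ** 2
    # noise PSD estimate: running minimum over `min_track` frames, per bin
    noise = np.array([P[:, max(0, i - min_track):i + 1].min(axis=1)
                      for i in range(P.shape[1])]).T
    gain = np.maximum(1.0 - alpha * noise / np.maximum(P, 1e-12), floor)
    _, y = istft(np.sqrt(gain) * X, fs, nperseg=win)
    return y

fs = 2000
t = np.arange(4 * fs) / fs
emg = np.random.default_rng(6).normal(size=t.size) * (np.sin(2 * np.pi * 0.5 * t) > 0)
noisy = emg + 0.5 * np.sin(2 * np.pi * 50 * t)     # stand-in periodic noise
print(denoise(noisy, fs).shape)
```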
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Gunshot signal enhancement for DOA estimation and weapon recognition.\n \n \n \n\n\n \n Borzino, Â. M. C. R.; Apolinário, J. A.; de Campos , M. L. R.; and Pagliari, C. L.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1985-1989, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952737,\n  author = {Â. M. C. R. Borzino and J. A. Apolinário and M. L. R. {de Campos} and C. L. Pagliari},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Gunshot signal enhancement for DOA estimation and weapon recognition},\n  year = {2014},\n  pages = {1985-1989},\n  abstract = {This paper proposes a deconvolution technique for gunshot signals aiming at improving direction of arrival estimation and weapon recognition. When dealing with field recorded signals, reflections degrade the performance of these tasks and a signal enhancement technique is required. Our scheme improves a gunshot signal by delaying and summing its reflections. Conventional blind deconvolution schemes are not reliable when applied to impulsive signals. While other techniques impose restrictions on the signal in order to ensure stability, the one presented herein can be used without such limitations. The proposed technique was tested with real gunshot signals, and both applications performed well.},\n  keywords = {acoustic signal processing;deconvolution;direction-of-arrival estimation;object recognition;weapons;blind deconvolution schemes;field recorded signals;direction of arrival estimation;deconvolution technique;weapon recognition;DOA estimation;gunshot signal enhancement;Deconvolution;Correlation;Weapons;Direction-of-arrival estimation;Microphones;Estimation;Arrays;Signal deconvolution;gunshot signal;direction of arrival estimation;weapon recognition},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n This paper proposes a deconvolution technique for gunshot signals aiming at improving direction of arrival estimation and weapon recognition. When dealing with field recorded signals, reflections degrade the performance of these tasks and a signal enhancement technique is required. Our scheme improves a gunshot signal by delaying and summing its reflections. Conventional blind deconvolution schemes are not reliable when applied to impulsive signals. While other techniques impose restrictions on the signal in order to ensure stability, the one presented herein can be used without such limitations. The proposed technique was tested with real gunshot signals, and both applications performed well.\n
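A minimal sketch of the delay-and-sum idea on a simulated impulsive signal (illustrative values throughout; this is not the paper's deconvolution scheme): detect the direct path and its echoes, then align and add them to reinforce the waveform.

```python
# Minimal sketch: locate echoes of an impulsive wavelet by envelope peaks,
# then delay-and-sum the detected arrivals.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(7)
pulse = np.exp(-np.arange(200) / 20.0) * rng.normal(size=200)   # impulsive wavelet
x = np.zeros(4000)
for delay, gain in [(500, 1.0), (1100, 0.6), (1900, 0.4)]:      # direct + two echoes
    x[delay:delay + 200] += gain * pulse
x += 0.02 * rng.normal(size=x.size)

peaks, _ = find_peaks(np.abs(x), height=0.3, distance=300)      # arrival times
y = np.zeros(200)
for p in peaks:                       # align each arrival and sum
    y += x[p:p + 200]
print(peaks)                          # detected arrival sample indices
```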
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic generation of personalised alert thresholds for patients with COPD.\n \n \n \n \n\n\n \n Velardo, C.; Shah, S. A.; Gibson, O.; Rutter, H.; Farmer, A.; and Tarassenko, L.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1990-1994, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952738,\n  author = {C. Velardo and S. A. Shah and O. Gibson and H. Rutter and A. Farmer and L. Tarassenko},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic generation of personalised alert thresholds for patients with COPD},\n  year = {2014},\n  pages = {1990-1994},\n  abstract = {Chronic Obstructive Pulmonary Disease (COPD) is a chronic disease predicted to become the third leading cause of death by 2030. Patients with COPD are at risk of exacerbations in their symptoms, which have an adverse effect on their quality of life and may require emergency hospital admission. Using the results of a pilot study of an m-Health system for COPD self-management and tele-monitoring, we demonstrate a data-driven approach for computing personalised alert thresholds to prioritise patients for clinical review. Univariate and multivariate methodologies are used to analyse and fuse daily symptom scores, heart rate, and oxygen saturation measurements. We discuss the benefits of a multivariate kernel density estimator which improves on univariate approaches.},\n  keywords = {diseases;hospitals;patient monitoring;automatic generation;personalised alert thresholds;patients;chronic obstructive pulmonary disease;COPD;exacerbation risk;emergency hospital admission;m-health system;self-management;tele-monitoring;clinical review;univariate methodology;multivariate methodology;daily symptom scores;heart rate;oxygen saturation measurements;multivariate kernel density estimator;Diseases;Heart rate;Algorithm design and analysis;Training;Monitoring;Training data;m-Health;novelty detection;COPD;chronic diseases;digital health},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924477.pdf},\n}\n\n
\n
\n\n\n
\n Chronic Obstructive Pulmonary Disease (COPD) is a chronic disease predicted to become the third leading cause of death by 2030. Patients with COPD are at risk of exacerbations in their symptoms, which have an adverse effect on their quality of life and may require emergency hospital admission. Using the results of a pilot study of an m-Health system for COPD self-management and tele-monitoring, we demonstrate a data-driven approach for computing personalised alert thresholds to prioritise patients for clinical review. Univariate and multivariate methodologies are used to analyse and fuse daily symptom scores, heart rate, and oxygen saturation measurements. We discuss the benefits of a multivariate kernel density estimator which improves on univariate approaches.\n
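A minimal sketch of the multivariate kernel-density idea (synthetic vitals and an invented 5th-percentile threshold; not the trial's actual model): fit a KDE to a patient's stable-period measurements and flag days whose density falls below a personalised threshold.

```python
# Minimal sketch: multivariate KDE novelty detection on daily
# [symptom score, heart rate, SpO2] vectors.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(8)
# training: ~60 stable days of [symptom score, heart rate, SpO2]
train = np.column_stack([rng.normal(2.0, 0.8, 60),
                         rng.normal(80.0, 6.0, 60),
                         rng.normal(94.0, 1.5, 60)])
kde = gaussian_kde(train.T)                       # variables in rows
thresh = np.percentile(kde(train.T), 5)           # personalised alert threshold

new_days = np.array([[2.5, 83.0, 94.5],           # unremarkable day
                     [6.0, 105.0, 88.0]])         # exacerbation-like day
print(kde(new_days.T) < thresh)                   # -> [False  True]
```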
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cochlear implant artifact rejection in electrically evoked auditory steady state responses.\n \n \n \n \n\n\n \n Deprez, H.; Hofmann, M.; van Wieringen , A.; Wouters, J.; and Moonen, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1995-1999, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"CochlearPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952739,\n  author = {H. Deprez and M. Hofmann and A. {van Wieringen} and J. Wouters and M. Moonen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Cochlear implant artifact rejection in electrically evoked auditory steady state responses},\n  year = {2014},\n  pages = {1995-1999},\n  abstract = {Electrically evoked auditory steady state responses (EASSRs) are EEG signals measured in response to periodic or modulated pulse trains presented through a cochlear implant (CI). EASSRs are studied for the objective fitting of CIs in infants, as electrophysiological thresholds determined with EASSRs correlate well with behavioural thresholds. Currently available techniques to remove CI artifacts from such measurements are only able to deal with artifacts for low-rate pulse trains or modulated pulse trains presented in bipolar mode, which are not used in main clinical practice. In this paper, an automatic EASSR CI artifact rejection technique based on independent component analysis (ICA) is presented that is suitable for clinical parameters. Artifactual independent components are selected based on the spectral amplitude of the pulse rate. Electrophysiological thresholds determined based on ICA compensated signals are equal to those detected using blanked signals, but measurements at only one modulation frequency are required.},\n  keywords = {auditory evoked potentials;bioelectric phenomena;cochlear implants;electroencephalography;independent component analysis;modulation frequency;blanked signals;compensated signals;spectral amplitude;artifactual independent components;ICA;independent component analysis;clinical practice;bipolar mode;low-rate pulse trains;behavioural thresholds;electrophysiological thresholds;infants;objective fitting;modulated pulse trains;periodic pulse trains;EEG signals;EASSR;electrically evoked auditory steady state responses;cochlear implant artifact rejection;Electrodes;Frequency modulation;Cochlear implants;Delays;Blanking;Steady-state;CI artifact;EASSR;objective;automatic;ICA},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924529.pdf},\n}\n\n
\n
\n\n\n
\n Electrically evoked auditory steady state responses (EASSRs) are EEG signals measured in response to periodic or modulated pulse trains presented through a cochlear implant (CI). EASSRs are studied for the objective fitting of CIs in infants, as electrophysiological thresholds determined with EASSRs correlate well with behavioural thresholds. Currently available techniques to remove CI artifacts from such measurements are only able to deal with artifacts for low-rate pulse trains or modulated pulse trains presented in bipolar mode, which are not used in mainstream clinical practice. In this paper, an automatic EASSR CI artifact rejection technique based on independent component analysis (ICA) is presented that is suitable for clinical parameters. Artifactual independent components are selected based on the spectral amplitude at the pulse rate. Electrophysiological thresholds determined based on ICA-compensated signals are equal to those detected using blanked signals, but measurements at only one modulation frequency are required.\n
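A minimal sketch of reference-based ICA artifact rejection (generic FastICA on simulated mixtures with a known periodic artifact reference; the paper's EASSR pipeline and artifact model are more involved):

```python
# Minimal sketch: unmix simulated channels with FastICA, zero the component
# most correlated with a known artifact reference, and reconstruct.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
n = 5000
t = np.arange(n) / 1000.0
artifact = np.sign(np.sin(2 * np.pi * 40 * t))      # stand-in periodic artifact
brain1, brain2 = np.sin(2 * np.pi * 4 * t), rng.normal(size=n)
S = np.vstack([artifact, brain1, brain2]).T
X = S @ rng.normal(size=(3, 3)).T                   # mixed "EEG channels"

ica = FastICA(n_components=3, random_state=0)
comps = ica.fit_transform(X)                        # (n, 3) independent components
corr = [abs(np.corrcoef(comps[:, k], artifact)[0, 1]) for k in range(3)]
comps[:, int(np.argmax(corr))] = 0.0                # drop the artifact component
clean = ica.inverse_transform(comps)                # reconstructed channels
print(int(np.argmax(corr)))
```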
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Pedaling parameters behavior on healthy subjects: Towards a rehabilitation indication.\n \n \n \n \n\n\n \n Barbosa, D.; Martins, M.; Santos, C. P.; Costa, L.; Pereira, A.; and Seabra, E.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2000-2004, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"PedalingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952740,\n  author = {D. Barbosa and M. Martins and C. P. Santos and L. Costa and A. Pereira and E. Seabra},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Pedaling parameters behavior on healthy subjects: Towards a rehabilitation indication},\n  year = {2014},\n  pages = {2000-2004},\n  abstract = {It is of utmost importance to identify the quantitative indicators that characterize the rehabilitation degree of the lower limbs of stroke patients and qualitative indicators of the quality of the movement. As a first step in this direction, a cycling ergometer, used in hospitals and rehabilitation clinics, was modified to provide information about the force applied to the pedals and the pedal angles. One group of non-pathological subjects performed a set of trials at different workloads and cadence values, to analyze the effect of these variables on force output. An increased workload resulted in an increase in the work performed by each leg, whereas the cadence results were inconclusive. Results suggest that the variation of the workload may be a suitable method to characterize motor impairments.},\n  keywords = {behavioural sciences computing;ergonomics;hospitals;patient rehabilitation;user interfaces;pedaling parameters behavior;healthy subjects;rehabilitation indication;quantitative indicator identification;stroke patient lower limbs rehabilitation degree characterization;cycling ergometer;hospitals;rehabilitation clinics;pedal angles;pedals;nonpathological subjects;cadence values;workloads;motor impairment characterization;Force;Legged locomotion;Market research;Force measurement;Biomechanics;Hospitals;Cycling;hospital;force;rehabilitation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924719.pdf},\n}\n\n
\n
\n\n\n
\n It is of utmost importance to identify the quantitative indicators that characterize the rehabilitation degree of the lower limbs of stroke patients and qualitative indicators of the quality of the movement. As a first step in this direction, a cycling ergometer, used in hospitals and rehabilitation clinics, was modified to provide information about the force applied to the pedals and the pedal angles. One group of non-pathological subjects performed a set of trials at different workloads and cadence values, to analyze the effect of these variables on force output. An increased workload resulted in an increase in the work performed by each leg, whereas the cadence results were inconclusive. Results suggest that the variation of the workload may be a suitable method to characterize motor impairments.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Metric learning for event-related potential component classification in EEG signals.\n \n \n \n \n\n\n \n Liu, Q.; Zhao, X.; and Hou, Z.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2005-2009, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"MetricPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952741,\n  author = {Q. Liu and X. Zhao and Z. Hou},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Metric learning for event-related potential component classification in EEG signals},\n  year = {2014},\n  pages = {2005-2009},\n  abstract = {In this paper, we introduce a metric learning approach for the classification process in the recognition procedure for P300 waves in electroencephalographic (EEG) signals. We show that the accuracy of support vector machine (SVM) classification is significantly improved by learning a similarity metric from the training data instead of using the default Euclidean metric. The effectiveness of the algorithm is validated through experiments on dataset II of the brain-computer interface (BCI) Competition III (P300 speller).},\n  keywords = {electroencephalography;learning (artificial intelligence);medical signal processing;signal classification;support vector machines;metric learning approach;event-related potential component classification;EEG signals;electroencephalographic signals;support machine vector classification;SVM classification;brain-computer interface;BCI;P300 waves;Electroencephalography;Support vector machines;Classification algorithms;Feature extraction;Wavelet packets;Kernel;Metric learning;SVM;P300},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924785.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we introduce a metric learning approach for the classification process in the recognition procedure for P300 waves in electroencephalographic (EEG) signals. We show that the accuracy of support vector machine (SVM) classification is significantly improved by learning a similarity metric from the training data instead of using the default Euclidean metric. The effectiveness of the algorithm is validated through experiments on dataset II of the brain-computer interface (BCI) Competition III (P300 speller).\n
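A minimal sketch of the learn-a-metric-then-classify idea (within-class-covariance whitening, a simple learned Mahalanobis metric, ahead of a linear SVM on synthetic data; the paper's learned metric and P300 features differ):

```python
# Minimal sketch: compare a linear SVM under the default Euclidean metric
# and under a learned Mahalanobis metric (within-class whitening).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n, d = 300, 10
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d)) @ np.diag(np.linspace(0.5, 3.0, d))
X[y == 1, 0] += 1.0                                   # class-discriminative direction

Sw = sum(np.cov(X[y == c].T) for c in (0, 1)) / 2     # within-class covariance
evals, evecs = np.linalg.eigh(Sw)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T          # Sw^(-1/2): learned metric
for name, data in [("euclidean", X), ("learned metric", X @ W)]:
    print(name, cross_val_score(SVC(kernel="linear"), data, y, cv=5).mean())
```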
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A discriminative approach to automatic seizure detection in multichannel EEG signals.\n \n \n \n \n\n\n \n James, D.; Xie, X.; and Eslambolchilar, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2010-2014, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952742,\n  author = {D. James and X. Xie and P. Eslambolchilar},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A discriminative approach to automatic seizure detection in multichannel EEG signals},\n  year = {2014},\n  pages = {2010-2014},\n  abstract = {The aim of this paper is to introduce the application of Random Forests to the automated analysis of epileptic EEG data. Feature extraction is performed using a discrete wavelet transform to give time-frequency representations, from which statistical features based on the wavelet decompositions are formed and used for training and classification. We show that Random Forests can be used for the classification of ictal, inter-ictal and healthy EEG with a high level of accuracy, with 99% sensitivity and 93.5% specificity for classifying ictal and inter-ictal EEG, 90.6% sensitivity and 95.7% specificity for the windowed data and 93.9% sensitivity for seizure onset classification.},\n  keywords = {electroencephalography;feature extraction;medical signal processing;seizure onset classification;inter-ictal EEG;wavelet decompositions;statistical features;time frequency representations;discrete wavelet transform;feature extraction;epileptic EEG data;automated analysis;random forests;multichannel EEG signals;automatic seizure detection;Electroencephalography;Feature extraction;Sensitivity;Vectors;Accuracy;Radio frequency;Training},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925083.pdf},\n}\n\n
\n
\n\n\n
\n The aim of this paper is to introduce the application of Random Forests to the automated analysis of epileptic EEG data. Feature extraction is performed using a discrete wavelet transform to give time-frequency representations, from which statistical features based on the wavelet decompositions are formed and used for training and classification. We show that Random Forests can be used for the classification of ictal, inter-ictal and healthy EEG with a high level of accuracy, with 99% sensitivity and 93.5% specificity for classifying ictal and inter-ictal EEG, 90.6% sensitivity and 95.7% specificity for the windowed data and 93.9% sensitivity for seizure onset classification.\n
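A minimal sketch of the described pipeline shape (DWT sub-band statistics feeding a Random Forest; synthetic two-class windows stand in for the ictal and inter-ictal recordings, and the feature set is a simplified guess):

```python
# Minimal sketch: per-window DWT sub-band statistics classified with a
# Random Forest.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)

def features(x):
    coeffs = pywt.wavedec(x, "db4", level=5)        # approximation + 5 detail bands
    return np.array([f(c) for c in coeffs
                     for f in (np.mean, np.std, lambda c: np.mean(np.abs(c)))])

def window(seizure):                                # crude synthetic EEG window
    t = np.arange(512) / 256.0
    rhythm = np.sin(2 * np.pi * (5 if seizure else 10) * t)
    return (3.0 if seizure else 1.0) * rhythm + rng.normal(size=t.size)

X = np.array([features(window(s)) for s in [0] * 100 + [1] * 100])
y = np.array([0] * 100 + [1] * 100)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```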
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Stockwell transform optimization applied on the detection of split in heart sounds.\n \n \n \n\n\n \n Moukadem, A.; Bouguila, Z.; Abdeslam, D. O.; and Dieterlen, A.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2015-2019, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952743,\n  author = {A. Moukadem and Z. Bouguila and D. O. Abdeslam and A. Dieterlen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Stockwell transform optimization applied on the detection of split in heart sounds},\n  year = {2014},\n  pages = {2015-2019},\n  abstract = {The aim of this paper is to improve the energy concentration of the Stockwell transform (S-transform) in the time-frequency domain. Several methods proposed in the literature have introduced novel parameters to control the width of the Gaussian window in the S-transform. In this study, a modified S-transform is proposed with four parameters to control the Gaussian window width. A genetic algorithm is applied to select the optimal parameters which maximize the energy concentration measure. The application presented in this paper consists in detecting the split in heart sounds and calculating its duration, which is valuable medical information. Comparison with other well-known time-frequency transforms, such as the short-time Fourier transform (STFT) and the smoothed-pseudo Wigner-Ville distribution (SPWVD), is performed and discussed.},\n  keywords = {cardiology;Fourier transforms;genetic algorithms;medical signal detection;time-frequency analysis;Wigner distribution;Stockwell transform optimization;split detection;heart sounds;time-frequency domain;modified S-transform;Gaussian window width;genetic algorithm;energy concentration measure maximization;medical information;time-frequency transforms;short-time Fourier transforms;STFT;smoothed-pseudo Wigner-Ville distribution;SPWVD;Heart;Time-frequency analysis;Transforms;Genetic algorithms;Signal processing;Optimization;Energy measurement;Stockwell transform;energy concentration;genetic algorithm;heart sounds;valvular split},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The aim of this paper is to improve the energy concentration of the Stockwell transform (S-transform) in the time-frequency domain. Several methods proposed in the literature have introduced novel parameters to control the width of the Gaussian window in the S-transform. In this study, a modified S-transform is proposed with four parameters to control the Gaussian window width. A genetic algorithm is applied to select the optimal parameters which maximize the energy concentration measure. The application presented in this paper consists in detecting the split in heart sounds and calculating its duration, which is valuable medical information. Comparison with other well-known time-frequency transforms, such as the short-time Fourier transform (STFT) and the smoothed-pseudo Wigner-Ville distribution (SPWVD), is performed and discussed.\n
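A minimal sketch of the S-transform in its standard frequency-domain form, with a single width parameter gamma generalising the Gaussian window (the paper optimises a four-parameter window with a genetic algorithm; gamma = 1 recovers the standard S-transform):

```python
# Minimal sketch: frequency-domain S-transform with an adjustable Gaussian
# window width gamma.
import numpy as np

def stransform(x, gamma=1.0):
    n = len(x)
    X = np.fft.fft(x)
    S = np.zeros((n // 2 + 1, n), dtype=complex)
    S[0] = np.mean(x)                                # zero-frequency voice
    alpha = np.fft.fftfreq(n, d=1.0 / n)             # signed shift frequencies
    for k in range(1, n // 2 + 1):
        gauss = np.exp(-2 * (np.pi * alpha * gamma / k) ** 2)
        S[k] = np.fft.ifft(np.roll(X, -k) * gauss)   # voice at frequency bin k
    return S

fs = 256
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 45 * t) * (t > 0.5)
S = stransform(x, gamma=1.0)
print(np.unravel_index(np.argmax(np.abs(S[1:])), S[1:].shape))  # dominant (bin, time)
```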
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Vessel centerline detection in retinal images based on a corner detector and dynamic thresholding.\n \n \n \n \n\n\n \n Soares, I.; Castelo-Branco, M.; and Pinheiro, A. M. G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2020-2024, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"VesselPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952744,\n  author = {I. Soares and M. Castelo-Branco and A. M. G. Pinheiro},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Vessel centerline detection in retinal images based on a corner detector and dynamic thresholding},\n  year = {2014},\n  pages = {2020-2024},\n  abstract = {This paper describes a new method for the calculation of the retinal vessel centerlines using a scale-space approach for an increased reliability and effectiveness. The algorithm begins with a new vessel detector description method based on a modified corner detector. Then the vessel detector image is filtered with a set of binary rotating filters, resulting in enhanced vessels structures. The main vessels can be selected with a dynamic thresholding approach. In order to deal with vessels bifurcations and vessels crossovers that might not be detected, the initial retinal image is processed with a set of four directional differential operators. The resulting directional images are then combined with the detected vessels, creating the final vessels centerlines image. The performance of the algorithm is evaluated using two different datasets.},\n  keywords = {retinal recognition;vessel centerline detection;retinal images;dynamic thresholding;scale-space approach;vessel detector description method;modified corner detector;vessel detector image;binary rotating filters;enhanced vessel structure;dynamic thresholding approach;vessel bifurcations;vessel crossovers;directional differential operators;directional images;Image segmentation;Biomedical imaging;Detectors;Retinal vessels;Kernel;Bifurcation;Vessel centerline;scale-space;Retina},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926009.pdf},\n}\n\n
\n
\n\n\n
\n This paper describes a new method for the calculation of retinal vessel centerlines using a scale-space approach for increased reliability and effectiveness. The algorithm begins with a new vessel detector description method based on a modified corner detector. The vessel detector image is then filtered with a set of binary rotating filters, resulting in enhanced vessel structures. The main vessels can be selected with a dynamic thresholding approach. In order to deal with vessel bifurcations and crossovers that might not be detected, the initial retinal image is processed with a set of four directional differential operators. The resulting directional images are then combined with the detected vessels, creating the final vessel centerline image. The performance of the algorithm is evaluated using two different datasets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n VOG-enhanced ICA for SSVEP response detection from consumer-grade EEG.\n \n \n \n \n\n\n \n Samadi, M. R. H.; and Cooke, N.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2025-2029, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"VOG-enhancedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952745,\n  author = {M. R. H. Samadi and N. Cooke},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {VOG-enhanced ICA for SSVEP response detection from consumer-grade EEG},\n  year = {2014},\n  pages = {2025-2029},\n  abstract = {The steady-state visual evoked potential (SSVEP) brain-computer interface (BCI) paradigm detects when users look at flashing static and dynamic visual stimuli. Electroculogram (EOG) artefacts in the electroencephalography (EEG) signal limit the application for dynamic stimuli because they elicit smooth pursuit eye movement. We propose `VOG-ICA' - an EOG artefact rejection technique based on Independent Component Analysis (ICA) that uses video-oculography (VOG) information from an eye tracker. It demonstrates good performance compared to Plöchl when evaluated on matched and EEG data collected with consumer grade eye tracking and wireless cap EEG apparatus. SSVEP response detection from frequential features extracted from ICA components demonstrates higher SSVEP response detection accuracy and lower between-person variation compared with extracted features from raw and post-ICA reconstructed `clean' EEG. The work highlights the requirement for robust EEG artefact and SSVEP response detection techniques for consumer-grade multimodal apparatus.},\n  keywords = {biomechanics;brain-computer interfaces;electroencephalography;electro-oculography;feature extraction;independent component analysis;medical signal detection;medical signal processing;neurophysiology;visual evoked potentials;VOG-enhanced ICA;steady-state visual evoked potential;SSVEP brain-computer interface;flashing static visual stimuli;flashing dynamic visual stimuli;electroculogram artefacts;electroencephalography;EEG signal limit;eye movement;EOG artefact rejection technique;independent component analysis;video-oculography information;eye tracker;wireless cap EEG apparatus;frequential feature extraction;SSVEP response detection accuracy;Electroencephalography;Electrooculography;Feature extraction;Visualization;Accuracy;Integrated circuits;Steady-state;ICA;SSVEP;EEG;Artefact Rejection;VOG},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926491.pdf},\n}\n\n
\n
\n\n\n
\n The steady-state visual evoked potential (SSVEP) brain-computer interface (BCI) paradigm detects when users look at flashing static and dynamic visual stimuli. Electrooculogram (EOG) artefacts in the electroencephalography (EEG) signal limit the application for dynamic stimuli because they elicit smooth pursuit eye movement. We propose `VOG-ICA' - an EOG artefact rejection technique based on Independent Component Analysis (ICA) that uses video-oculography (VOG) information from an eye tracker. It demonstrates good performance compared to the method of Plöchl et al. when evaluated on matched VOG and EEG data collected with consumer-grade eye-tracking and wireless-cap EEG apparatus. SSVEP response detection from frequential features extracted from ICA components demonstrates higher detection accuracy and lower between-person variation than features extracted from raw and post-ICA reconstructed `clean' EEG. The work highlights the requirement for robust EEG artefact and SSVEP response detection techniques for consumer-grade multimodal apparatus.\n
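A minimal sketch of the general idea, assuming the VOG information is available as a one-dimensional gaze trace and using scikit-learn's FastICA; the correlation threshold and the component-selection rule are stand-ins, not the authors' exact criterion:

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_eog_components(eeg, gaze, corr_thr=0.4):
    """eeg: (n_samples, n_channels); gaze: (n_samples,) VOG-derived eye trace."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)                  # unmixed components
    for k in range(sources.shape[1]):
        r = np.corrcoef(sources[:, k], gaze)[0, 1]
        if abs(r) > corr_thr:                         # flagged as ocular
            sources[:, k] = 0.0                       # reject the component
    return ica.inverse_transform(sources)             # reconstructed 'clean' EEG
```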
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n EEG signal processing for eye tracking.\n \n \n \n \n\n\n \n Haji Samadi, M. R.; and Cooke, N.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2030-2034, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"EEGPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952746,\n  author = {M. R. {Haji Samadi} and N. Cooke},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {EEG signal processing for eye tracking},\n  year = {2014},\n  pages = {2030-2034},\n  abstract = {Head-mounted Video-Oculography (VOG) eye tracking is visually intrusive due to a camera in the peripheral view. Electrooculography (EOG) eye tracking is socially intrusive because of face-mounted electrodes. In this work we explore Electroencephalography (EEG) eye tracking from less intrusive wireless cap scalp-based electrodes. Classification algorithms to detect eye movement and the focus of foveal attention are proposed and evaluated on data from a matched dataset of VOG and 16-channel EEG. The algorithms utilise EOG artefacts and the brain's steady state visually evoked potential (SSVEP) response while viewing flickering stimulus. We demonstrate improved performance by extracting features from source signals estimated by Independent Component Analysis (ICA) rather than the traditional band-pass preprocessed EEG channels. The work envisages eye tracking technologies that utilise non-facially intrusive EEG brain sensing via wireless dry contact scalp based electrodes.},\n  keywords = {biomechanics;biomedical electrodes;electroencephalography;electro-oculography;feature extraction;independent component analysis;medical signal processing;neurophysiology;signal classification;visual evoked potentials;EEG signal processing;head-mounted video-oculography eye tracking;camera;peripheral view;electrooculography;EOG eye tracking;electroencephalography;EEG eye tracking;wireless cap scalp-based electrodes;classification algorithms;eye movement detection;foveal attention;steady state visually evoked potential;brain SSVEP response;feature extraction;independent component analysis;band-pass preprocessed EEG channels;nonfacially intrusive EEG brain sensing;Electroencephalography;Feature extraction;Tracking;Visualization;Electrooculography;Electrodes;Accuracy;ICA;SSVEP;VOG;eye tracking;visual attention},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926535.pdf},\n}\n\n
\n
\n\n\n
\n Head-mounted Video-Oculography (VOG) eye tracking is visually intrusive due to a camera in the peripheral view. Electrooculography (EOG) eye tracking is socially intrusive because of face-mounted electrodes. In this work we explore Electroencephalography (EEG) eye tracking from less intrusive wireless-cap scalp-based electrodes. Classification algorithms to detect eye movement and the focus of foveal attention are proposed and evaluated on data from a matched dataset of VOG and 16-channel EEG. The algorithms utilise EOG artefacts and the brain's steady-state visually evoked potential (SSVEP) response while viewing a flickering stimulus. We demonstrate improved performance by extracting features from source signals estimated by Independent Component Analysis (ICA) rather than from the traditional band-pass preprocessed EEG channels. The work envisages eye tracking technologies that utilise non-facially intrusive EEG brain sensing via wireless dry-contact scalp-based electrodes.\n
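For the SSVEP side, a minimal frequential-feature detector can be sketched as picking the stimulus frequency with the largest narrow-band power in an (ICA-derived) component; the plain FFT and the bandwidth are illustrative assumptions:

```python
import numpy as np

def detect_ssvep(component, fs, stim_freqs, bandwidth=0.5):
    """Return the flicker frequency with the largest narrow-band power."""
    spectrum = np.abs(np.fft.rfft(component)) ** 2
    freqs = np.fft.rfftfreq(len(component), d=1.0 / fs)
    powers = [spectrum[(freqs > f0 - bandwidth) & (freqs < f0 + bandwidth)].sum()
              for f0 in stim_freqs]
    return stim_freqs[int(np.argmax(powers))]
```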
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Processing of laser speckle contrast images to analyze the impact of aging on moving blood cells when a Lorentzian velocity profile is assumed.\n \n \n \n\n\n \n Khalil, A.; Humeau-Heurtier, A.; Abraham, P.; and Mahé, G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2035-2039, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952747,\n  author = {A. Khalil and A. Humeau-Heurtier and P. Abraham and G. Mahé},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Processing of laser speckle contrast images to analyze the impact of aging on moving blood cells when a Lorentzian velocity profile is assumed},\n  year = {2014},\n  pages = {2035-2039},\n  abstract = {It has long been recognized that age alters microcirculation. The follow-up of such alterations can be performed by monitoring microvascular blood flow. Laser speckle contrast imaging (LSCI) has recently been commercialized to monitor microvascular blood flow. From laser speckle contrast images, velocity of microvascular moving scatterers (mainly red blood cells) can be computed when a profile for velocity distribution is assumed. Our goal herein is to analyze if alterations of microcirculation with age can be determined by processing experimental LSCI data. In our work a Lorentzian velocity profile is assumed and the presence of static scatterers, like skin, is taken into account. Our results show that moving scatterers velocities computed from LSCI data vary with age: blood cells velocities increase with age. Moreover, the more the static scatterers, the higher the moving scatterers velocity values. Our findings are a first step in the analysis of the impact of aging from the processing of laser speckle contrast images.},\n  keywords = {haemodynamics;haemorheology;image processing;medical signal processing;laser speckle contrast images processing;Lorentzian velocity;microvascular blood flow;laser speckle contrast imaging;microvascular moving scatterers;red blood cells;static scatterers;Speckle;Senior citizens;Blood;Lasers;Imaging;Skin;Cells (biology);Laser speckle contrast imaging;Image processing;Blood flow;Lorentzian profile},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n It has long been recognized that age alters microcirculation. The follow-up of such alterations can be performed by monitoring microvascular blood flow. Laser speckle contrast imaging (LSCI) has recently been commercialized to monitor microvascular blood flow. From laser speckle contrast images, the velocity of microvascular moving scatterers (mainly red blood cells) can be computed when a profile for the velocity distribution is assumed. Our goal herein is to analyze whether alterations of microcirculation with age can be determined by processing experimental LSCI data. In our work a Lorentzian velocity profile is assumed and the presence of static scatterers, such as skin, is taken into account. Our results show that moving-scatterer velocities computed from LSCI data vary with age: blood cell velocities increase with age. Moreover, the more static scatterers present, the higher the computed moving-scatterer velocities. Our findings are a first step in the analysis of the impact of aging from the processing of laser speckle contrast images.\n
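LSCI starts from the local speckle contrast K = sigma/mu computed over a sliding window; velocities are then obtained by inverting a model of K as a function of exposure time and correlation time (Lorentzian in this paper, not reproduced here). A minimal sketch of the contrast computation, with the window size as a placeholder:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, win=7):
    """Spatial speckle contrast K = sigma/mu over a win x win sliding window."""
    img = raw.astype(float)
    mean = uniform_filter(img, win)
    var = np.maximum(uniform_filter(img ** 2, win) - mean ** 2, 0.0)
    return np.sqrt(var) / np.maximum(mean, 1e-12)
```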
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-complexity, multi-channel, lossless and near-lossless EEG compression.\n \n \n \n \n\n\n \n Capurro, I.; Lecumberry, F.; Martín, Á.; Ramírez, I.; Rovira, E.; and Seroussi, G.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2040-2044, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Low-complexity,Paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952748,\n  author = {I. Capurro and F. Lecumberry and Á. Martín and I. Ramírez and E. Rovira and G. Seroussi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Low-complexity, multi-channel, lossless and near-lossless EEG compression},\n  year = {2014},\n  pages = {2040-2044},\n  abstract = {Current EEG applications imply the need for low-latency, low-power, high-fidelity data transmission and storage algorithms. This work proposes a compression algorithm meeting these requirements through the use of modern information theory and signal processing tools (such as universal coding, universal prediction, and fast online implementations of multivariate recursive least squares), combined with simple methods to exploit spatial as well as temporal redundancies typically present in EEG signals. The resulting compression algorithm requires O(1) operations per scalar sample and surpasses the current state of the art in near-lossless and lossless EEG compression ratios.},\n  keywords = {electroencephalography;encoding;least squares approximations;medical signal processing;low-complexity EEG compression;multichannel EEG compression;near-lossless EEG compression;data transmission;storage algorithms;compression algorithm;information theory;signal processing tools;universal coding;universal prediction;fast online implementations;multivariate recursive least squares;temporal redundancy;EEG signals;Electroencephalography;Brain modeling;Predictive models;Databases;Encoding;Image coding;Prediction algorithms;EEG compression;lossless compression;near-lossless compression;low-complexity},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923817.pdf},\n}\n\n
\n
\n\n\n
\n Current EEG applications imply the need for low-latency, low-power, high-fidelity data transmission and storage algorithms. This work proposes a compression algorithm meeting these requirements through the use of modern information theory and signal processing tools (such as universal coding, universal prediction, and fast online implementations of multivariate recursive least squares), combined with simple methods to exploit spatial as well as temporal redundancies typically present in EEG signals. The resulting compression algorithm requires O(1) operations per scalar sample and surpasses the current state of the art in near-lossless and lossless EEG compression ratios.\n
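A toy sketch of the predict-then-encode principle behind such coders: whiten each channel with a simple predictor and measure the cost of Rice/Golomb-coding the residuals. The fixed differencing predictor and Rice coder here are simpler stand-ins for the paper's RLS predictor and universal coder:

```python
import numpy as np

def predict_residuals(x, order=2):
    """Fixed-order integer predictor: 'order' rounds of differencing."""
    r = np.asarray(x, dtype=np.int64)
    for _ in range(order):
        r = np.diff(r, prepend=r[:1])
    return r

def rice_code_length(residuals, k):
    """Total bits if the residuals were Rice-coded with parameter k."""
    u = np.where(residuals >= 0, 2 * residuals, -2 * residuals - 1)  # zigzag map
    return int(np.sum((u >> k) + 1 + k))                             # unary + k bits
```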
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n RFID-based butterfly location sensing system.\n \n \n \n \n\n\n \n Särkkä, S.; Viikari, V.; and Jaakkola, K.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2045-2049, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"RFID-basedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952749,\n  author = {S. Särkkä and V. Viikari and K. Jaakkola},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {RFID-based butterfly location sensing system},\n  year = {2014},\n  pages = {2045-2049},\n  abstract = {In this paper, we describe the implementation of an RFID-based location sensing system which is intended for monitoring the movement and activity of butterflies for biological research purposes. We present the design and characteristics of the developed RFID tags, the antenna and system configuration, as well as the non-linear Kalman filtering and smoothing based tracking algorithm. We also present experimental results obtained in an anechoic chamber as well as in field test environment.},\n  keywords = {antennas;biological techniques;Kalman filters;nonlinear filters;radiofrequency identification;radiotelemetry;smoothing methods;RFID-based butterfly location sensing system;biological research;RFID tags;antenna;system configuration;nonlinear Kalman filtering;smoothing based tracking algorithm;anechoic chamber;field test environment;Radiofrequency identification;Antennas;Radar tracking;Antenna measurements;Trajectory;RFID;UHF;location sensing;extended Kalman filter;RTS smoother;butterfly},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569920731.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we describe the implementation of an RFID-based location sensing system intended for monitoring the movement and activity of butterflies for biological research purposes. We present the design and characteristics of the developed RFID tags, the antenna and system configuration, as well as the non-linear Kalman filtering and smoothing based tracking algorithm. We also present experimental results obtained in an anechoic chamber as well as in a field test environment.\n
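The tracking back-end boils down to Kalman-type predict/update recursions (extended, in the paper, plus RTS smoothing). A minimal linear constant-velocity sketch for one coordinate, with the noise levels q and r as placeholders:

```python
import numpy as np

def kf_step(x, P, z, dt, q=1e-2, r=1.0):
    """One predict/update of a constant-velocity Kalman filter (1-D position)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # dynamics
    H = np.array([[1.0, 0.0]])                            # position measurement
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x = F @ x
    P = F @ P @ F.T + Q                                   # predict
    S = H @ P @ H.T + r                                   # innovation variance
    K = P @ H.T / S                                       # Kalman gain
    x = x + (K * (z - H @ x)).ravel()                     # update state
    P = (np.eye(2) - K @ H) @ P                           # update covariance
    return x, P
```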
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A computationally-efficient single-channel speech enhancement algorithm for monaural hearing aids.\n \n \n \n\n\n \n Ayllón, D.; Gil-Pita, R.; Utrilla-Manso, M.; and Rosa-Zurera, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2050-2054, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952750,\n  author = {D. Ayllón and R. Gil-Pita and M. Utrilla-Manso and M. Rosa-Zurera},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A computationally-efficient single-channel speech enhancement algorithm for monaural hearing aids},\n  year = {2014},\n  pages = {2050-2054},\n  abstract = {A computationally-efficient single-channel speech enhancement algorithm to improve intelligibility in monaural hearing aids is presented in this paper. The algorithm combines a novel set of features with a simple supervised machine learning technique to estimate the frequency-domain Wiener filter for noise reduction, using extremely low computational resources. Results show a noticeable intelligibility improvement in terms of PESQ score and SNRESI, even for low input SNR, using only a 7% of the computational resources available in a state-of-the-art commercial hearing aid. The performance of the algorithm is comparable to the performance of current algorithms that use more computationally complex features and learning schemas.},\n  keywords = {computational complexity;hearing aids;learning (artificial intelligence);speech enhancement;speech intelligibility;Wiener filters;computationally efficient single channel speech enhancement algorithm;monaural hearing aids;supervised machine learning technique;frequency domain Wiener filter;noise reduction;intelligibility improvement;Speech;Speech enhancement;Noise measurement;Signal processing algorithms;Signal to noise ratio;Training;Speech enhancement;Noise reduction;Time-frequency masking;Supervised learning},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n A computationally-efficient single-channel speech enhancement algorithm to improve intelligibility in monaural hearing aids is presented in this paper. The algorithm combines a novel set of features with a simple supervised machine learning technique to estimate the frequency-domain Wiener filter for noise reduction, using extremely low computational resources. Results show a noticeable intelligibility improvement in terms of PESQ score and SNRESI, even for low input SNR, using only 7% of the computational resources available in a state-of-the-art commercial hearing aid. The performance of the algorithm is comparable to that of current algorithms that use more computationally complex features and learning schemes.\n
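The quantity such a model learns to approximate is the frequency-domain Wiener gain. A minimal oracle-style sketch using scipy's STFT, assuming a known per-bin noise PSD (in the paper the gain is predicted from features instead of computed from a noise estimate):

```python
import numpy as np
from scipy.signal import stft, istft

def wiener_enhance(noisy, fs, noise_psd, nperseg=512):
    """noise_psd: per-bin noise power, e.g. estimated from a speech-free segment."""
    _, _, X = stft(noisy, fs=fs, nperseg=nperseg)
    snr = np.maximum(np.abs(X) ** 2 / noise_psd[:, None] - 1.0, 0.0)  # a priori SNR
    gain = snr / (snr + 1.0)                                          # Wiener gain
    _, enhanced = istft(gain * X, fs=fs, nperseg=nperseg)
    return enhanced
```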
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Binaural localization of speech sources in the median plane using cepstral HRTF extraction.\n \n \n \n \n\n\n \n Talagala, D. S.; Wu, X.; Zhang, W.; and Abhayapala, T. D.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2055-2059, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"BinauralPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952751,\n  author = {D. S. Talagala and X. Wu and W. Zhang and T. D. Abhayapala},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Binaural localization of speech sources in the median plane using cepstral hrtf extraction},\n  year = {2014},\n  pages = {2055-2059},\n  abstract = {In binaural systems, source localization in the median plane is challenging due to the difficulty of exploring the spectral cues of the head-related transfer function (HRTF) independently of the source spectra. This paper presents a method of extracting the HRTF spectral cues using cepstral analysis for speech source localization in the median plane. Binaural signals are preprocessed in the cepstral domain so that the fine spectral structure of speech and the HRTF spectral envelope can be easily separated. We introduce (i) a truncated cepstral transformation to extract the relevant localization cues, and (ii) a mechanism to normalize the effects of the time varying speech spectra. The proposed method is evaluated and compared with a convolution based localization method using a speech corpus of multiple speakers. The results suggest that the proposed method fully exploits the available spectral cues for robust speaker independent binaural source localization in the median plane.},\n  keywords = {acoustic waves;cepstral analysis;speech processing;cepstral HRTF extraction;binaural systems;median plane;head-related transfer function;HRTF spectral cues;cepstral analysis;speech source localization;cepstral domain;binaural signals;HRTF spectral envelope;truncated cepstral transformation;time varying speech spectra;convolution based localization method;speech corpus;multiple speakers;binaural source localization;Speech;Cepstral analysis;Position measurement;Signal to noise ratio;Convolution;Correlation;Robustness;Binaural localization;cepstral transformation;head related transfer function (HRTF);median plane},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924863.pdf},\n}\n\n
\n
\n\n\n
\n In binaural systems, source localization in the median plane is challenging due to the difficulty of exploring the spectral cues of the head-related transfer function (HRTF) independently of the source spectra. This paper presents a method of extracting the HRTF spectral cues using cepstral analysis for speech source localization in the median plane. Binaural signals are preprocessed in the cepstral domain so that the fine spectral structure of speech and the HRTF spectral envelope can be easily separated. We introduce (i) a truncated cepstral transformation to extract the relevant localization cues, and (ii) a mechanism to normalize the effects of the time-varying speech spectra. The proposed method is evaluated and compared with a convolution-based localization method using a speech corpus of multiple speakers. The results suggest that the proposed method fully exploits the available spectral cues for robust speaker-independent binaural source localization in the median plane.\n
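The cepstral preprocessing rests on the fact that log-spectra add, so a low-quefrency lifter isolates the smooth HRTF-like envelope from the fine structure of the source. A minimal sketch of that split; the lifter length is a placeholder:

```python
import numpy as np

def cepstral_split(frame, n_lifter=30):
    """Split a frame's log-spectrum into envelope and fine-structure cepstra."""
    cep = np.fft.irfft(np.log(np.abs(np.fft.rfft(frame)) + 1e-12))  # real cepstrum
    envelope = cep.copy()
    envelope[n_lifter:-n_lifter] = 0.0   # keep low quefrencies: smooth envelope cue
    return envelope, cep - envelope      # high quefrencies: source fine structure
```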
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Time-frequency reassigned cepstral coefficients for phone-level speech segmentation.\n \n \n \n \n\n\n \n Tryfou, G.; Pellin, M.; and Omologo, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2060-2064, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Time-frequencyPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952752,\n  author = {G. Tryfou and M. Pellin and M. Omologo},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Time-frequency reassigned cepstral coefficients for phone-level speech segmentation},\n  year = {2014},\n  pages = {2060-2064},\n  abstract = {This paper studies feature extraction within the context of automatic speech segmentation at phonetic level. Current state-of-the-art solutions widely use cepstral features as a front-end for HMM based frameworks. Although the automatic segmentation results have reached the inter-annotator agreement, within a tolerance equal or higher than 20ms, the same is not true when a lower tolerance is considered. We propose a new set of cepstral features that derive from the time-frequency reassigned spectrogram and offer a sharper representation of the speech signal in the cepstral domain. The features are evaluated through a series of forced alignment experiments which demonstrate a better performance, compared to the traditional MFCC features, in aligning phone boundaries within a small distance from their true position.},\n  keywords = {cepstral analysis;feature extraction;hidden Markov models;mobile handsets;signal representation;speech processing;speech recognition;time-frequency analysis;tolerance analysis;time-frequency reassigned cepstral coefficient;phone level automatic speech segmentation;hidden Markov model;phonetic level;cepstral feature extraction;HMM;interannotator agreement;time-frequency reassigned spectrogram;speech signal sharper representation;cepstral domain;phone boundary alignment;speech recognition;Mel frequency cepstral coefficient;Speech;Time-frequency analysis;Hidden Markov models;Feature extraction;feature extraction;reassigned spectrogram;phonetic segmentation;forced alignment;HMM},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923161.pdf},\n}\n\n
\n
\n\n\n
\n This paper studies feature extraction within the context of automatic speech segmentation at the phonetic level. Current state-of-the-art solutions widely use cepstral features as a front-end for HMM-based frameworks. Although automatic segmentation results have reached inter-annotator agreement within a tolerance of 20 ms or more, the same is not true when a lower tolerance is considered. We propose a new set of cepstral features that derive from the time-frequency reassigned spectrogram and offer a sharper representation of the speech signal in the cepstral domain. The features are evaluated through a series of forced alignment experiments which demonstrate better performance, compared to the traditional MFCC features, in aligning phone boundaries within a small distance from their true position.\n
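A rough sketch of how a reassigned spectrogram can be re-binned onto a regular grid before taking cepstral features, using librosa's reassigned spectrogram; the grid, FFT size, hop, and log compression are assumptions, not the paper's exact recipe:

```python
import numpy as np
import librosa

def sharpened_logspec(y, sr, n_fft=512, hop=160):
    freqs, times, mags = librosa.reassigned_spectrogram(y, sr=sr, n_fft=n_fft,
                                                        hop_length=hop)
    ok = np.isfinite(freqs) & np.isfinite(times)      # drop unassignable bins
    t_edges = np.arange(0.0, len(y) / sr + hop / sr, hop / sr)
    f_edges = np.linspace(0.0, sr / 2.0, n_fft // 2 + 2)
    S, _, _ = np.histogram2d(times[ok], freqs[ok], bins=(t_edges, f_edges),
                             weights=mags[ok])
    return np.log(S.T + 1e-10)    # (freq, time); a DCT over frequency gives cepstra
```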
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cluster-based adaptation using density forest for HMM phone recognition.\n \n \n \n \n\n\n \n Abou-Zleikha, M.; Tan, Z.; Christensen, M. G.; and Jensen, S. H.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2065-2069, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Cluster-basedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952753,\n  author = {M. Abou-Zleikha and Z. Tan and M. G. Christensen and S. H. Jensen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Cluster-based adaptation using density forest for HMM phone recognition},\n  year = {2014},\n  pages = {2065-2069},\n  abstract = {The dissimilarity between the training and test data in speech recognition systems is known to have a considerable effect on the recognition accuracy. To solve this problem, we use density forest to cluster the data and use maximum a posteriori (MAP) method to build a cluster-based adapted Gaussian mixture models (GMMs) in HMM speech recognition. Specifically, a set of bagged versions of the training data for each state in the HMM is generated, and each of these versions is used to generate one GMM and one tree in the density forest. Thereafter, an acoustic model forest is built by replacing the data of each leaf (cluster) in each tree with the corresponding GMM adapted by the leaf data using the MAP method. The results show that the proposed approach achieves 3:8% (absolute) lower phone error rate compared with the standard HMM/GMM and 0:8% (absolute) lower PER compared with bagged HMM/GMM.},\n  keywords = {Gaussian processes;hidden Markov models;maximum likelihood estimation;speech recognition;cluster-based adaptation;HMM phone recognition;speech recognition systems;density forest;maximum a posteriori method;MAP method;Gaussian mixture models;GMM;acoustic model forest;hidden Markov models;Hidden Markov models;Vegetation;Speech recognition;Data models;Speech;Acoustics;Adaptation models;ensemble acoustic modeling;density forest;cluster-based adaptation;HMM speech recognition},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925095.pdf},\n}\n\n
\n
\n\n\n
\n The dissimilarity between the training and test data in speech recognition systems is known to have a considerable effect on the recognition accuracy. To address this problem, we use a density forest to cluster the data and the maximum a posteriori (MAP) method to build cluster-based adapted Gaussian mixture models (GMMs) in HMM speech recognition. Specifically, a set of bagged versions of the training data for each state in the HMM is generated, and each of these versions is used to generate one GMM and one tree in the density forest. Thereafter, an acoustic model forest is built by replacing the data of each leaf (cluster) in each tree with the corresponding GMM adapted by the leaf data using the MAP method. The results show that the proposed approach achieves a 3.8% (absolute) lower phone error rate compared with the standard HMM/GMM and a 0.8% (absolute) lower PER compared with bagged HMM/GMM.\n
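The per-leaf adaptation is standard relevance-MAP adaptation of GMM means. A minimal sketch with scikit-learn's GaussianMixture, using the usual relevance-factor form (the value 16 is a conventional placeholder, not the paper's setting):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def map_adapt_means(gmm: GaussianMixture, X, relevance=16.0):
    """Relevance-MAP update of the means of a fitted GaussianMixture on data X."""
    resp = gmm.predict_proba(X)                        # (n, K) responsibilities
    n_k = resp.sum(axis=0)                             # soft occupation counts
    x_bar = (resp.T @ X) / np.maximum(n_k, 1e-12)[:, None]
    alpha = (n_k / (n_k + relevance))[:, None]         # data-versus-prior weight
    return alpha * x_bar + (1.0 - alpha) * gmm.means_
```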
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Privacy-preserving speaker verification using garbled GMMs.\n \n \n \n \n\n\n \n Portêlo, J.; Raj, B.; Abad, A.; and Trancoso, I.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2070-2074, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Privacy-preservingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952754,\n  author = {J. Portêlo and B. Raj and A. Abad and I. Trancoso},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Privacy-preserving speaker verification using garbled GMMS},\n  year = {2014},\n  pages = {2070-2074},\n  abstract = {In this paper we present a privacy-preserving speaker verification system using a UBM-GMM technique. Remote speaker verification services rely on the system having access to the user's recordings, or features derived from them, and a model representing the user's voice. Preserving privacy in our context means that neither the system observes voice samples or speech models from the user nor the user observes the universal model owned by the system. Our approach uses Garbled Circuits for obtaining an implementation that simultaneously is secure, has high accuracy and is efficient. To the best of our knowledge this is the first privacy-preserving speaker verification system that accomplishes all these three goals.},\n  keywords = {Gaussian processes;mixture models;speaker recognition;privacy-preserving speaker verification system;garbled GMMS;UBM-GMM technique;remote speaker verification services;voice samples;speech models;Gaussian mixture models;garbled circuits;Adaptation models;Cryptography;Logic gates;Vectors;Feature extraction;Speech;Gaussian mixture model;Speaker Verification;Gaussian Mixture Models;Garbled Circuits;Data Privacy},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923445.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we present a privacy-preserving speaker verification system using a UBM-GMM technique. Remote speaker verification services rely on the system having access to the user's recordings, or features derived from them, and a model representing the user's voice. Preserving privacy in our context means that neither the system observes voice samples or speech models from the user, nor the user observes the universal model owned by the system. Our approach uses Garbled Circuits to obtain an implementation that is simultaneously secure, accurate, and efficient. To the best of our knowledge this is the first privacy-preserving speaker verification system that accomplishes all three goals.\n
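Setting the cryptography aside, the function evaluated inside the garbled circuit is the usual UBM-GMM verification score. A plaintext sketch of that score for reference (the secure-computation machinery is omitted entirely):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def verification_score(ubm: GaussianMixture, speaker: GaussianMixture, feats):
    """Average log-likelihood ratio of the speaker model against the UBM."""
    return float(np.mean(speaker.score_samples(feats) - ubm.score_samples(feats)))
```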
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n FMRI unmixing via properly adjusted dictionary learning.\n \n \n \n \n\n\n \n Kopsinis, Y.; Georgiou, H.; and Theodoridis, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2075-2079, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"FMRIPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952755,\n  author = {Y. Kopsinis and H. Georgiou and S. Theodoridis},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {FMRI unmixing via properly adjusted dictionary learning},\n  year = {2014},\n  pages = {2075-2079},\n  abstract = {The mapping of the functional networks within the brain is a major step towards a deeper understanding of the the brain function. It involves the blind source separation of obtained fMRI data, usually performed via independent component analysis (ICA). Recently, there is an increased interest for alternatives to ICA for data-driven fMRI unmixing and notably good results have been attained with Dictionary Learning (DL) - based analysis. In this paper, the K-SVD DL method is appropriately adjusted in order to cope with the special properties characterizing the fMRI data.},\n  keywords = {biomedical MRI;blind source separation;brain;independent component analysis;learning (artificial intelligence);medical image processing;neurophysiology;singular value decomposition;functional networks;brain function;blind source separation;independent component analysis;ICA;data-driven fMRI unmixing;dictionary learning based analysis;DL based analysis;K-SVD DL method;Dictionaries;Vectors;Sparse matrices;Correlation;Encoding;Matching pursuit algorithms;Brain;Matrix Factorization;fMRI;Blind Source Separation;Dictionary Learning},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926909.pdf},\n}\n\n
\n
\n\n\n
\n The mapping of the functional networks within the brain is a major step towards a deeper understanding of brain function. It involves the blind source separation of the acquired fMRI data, usually performed via independent component analysis (ICA). Recently, there has been increased interest in alternatives to ICA for data-driven fMRI unmixing, and notably good results have been attained with Dictionary Learning (DL)-based analysis. In this paper, the K-SVD DL method is appropriately adjusted in order to cope with the special properties characterizing fMRI data.\n
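For reference, the core of K-SVD is a rank-one SVD update of each atom against the residual of the signals that use it. A minimal sketch of that update; the paper's fMRI-specific adjustments are not reproduced here:

```python
import numpy as np

def ksvd_atom_update(Y, D, X, k):
    """Rank-one update of atom D[:, k] and row X[k, :] so that D @ X ~ Y."""
    used = np.nonzero(X[k, :])[0]          # signals that currently use atom k
    if used.size == 0:
        return D, X
    E = Y[:, used] - D @ X[:, used] + np.outer(D[:, k], X[k, used])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                      # new unit-norm atom
    X[k, used] = s[0] * Vt[0, :]           # matching coefficients
    return D, X
```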
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Exploiting correlation in neural signals for data compression.\n \n \n \n \n\n\n \n Schmale, S.; Hoeffmann, J.; Knoop, B.; Kreiselmeyer, G.; Hamer, H.; Peters-Drolshagen, D.; and Paul, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2080-2084, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ExploitingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952756,\n  author = {S. Schmale and J. Hoeffmann and B. Knoop and G. Kreiselmeyer and H. Hamer and D. Peters-Drolshagen and S. Paul},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Exploiting correlation in neural signals for data compression},\n  year = {2014},\n  pages = {2080-2084},\n  abstract = {Progress in invasive brain research relies on signal acquisition at high temporal- and spatial resolutions, resulting in a data deluge at the (wireless) interface to the external world. Hence, data compression at the implant site is necessary in order to comply with the neurophysiological restrictions, especially when it comes to recording and transmission of neural raw data. This work investigates spatial correlations of neural signals, leading to a significant increase in data compression with a suitable sparse signal representation before the wireless data transmission at the implant site. Subsequently, we used the correlation-aware two-dimensional DCT used in image processing, to exploit spatial correlation of the data set. In order to guarantee a certain sparsity in the signal representation, two paradigms of zero forcing are evaluated and applied: Significant coefficients- and block sparsity-zero forcing.},\n  keywords = {brain;compressed sensing;data compression;discrete cosine transforms;image coding;image representation;medical image processing;neurophysiology;prosthetics;invasive brain research;signal acquisition;high temporal-resolution;high spatial resolution;data compression;implant site;neurophysiological restrictions;neural raw data;neural signals;sparse signal representation;wireless data transmission;correlation-aware two-dimensional DCT;image processing;coefficient-zero forcing;block sparsity-zero forcing;discrete cosine transform;Correlation;Abstracts;Electrodes;Neural Signals;Correlation;Data Compression;Compressed Sensing;Sparse Coding},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922747.pdf},\n}\n\n
\n
\n\n\n
\n Progress in invasive brain research relies on signal acquisition at high temporal and spatial resolutions, resulting in a data deluge at the (wireless) interface to the external world. Hence, data compression at the implant site is necessary in order to comply with the neurophysiological restrictions, especially when it comes to recording and transmission of neural raw data. This work investigates spatial correlations of neural signals, leading to a significant increase in data compression with a suitable sparse signal representation before the wireless data transmission at the implant site. We apply the two-dimensional DCT, widely used in image processing, to exploit the spatial correlation of the data set. In order to guarantee a certain sparsity in the signal representation, two paradigms of zero forcing are evaluated and applied: significant-coefficient zero forcing and block-sparsity zero forcing.\n
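A minimal sketch of the significant-coefficient variant: take a 2-D DCT across electrodes and time, then zero all but the largest-magnitude coefficients. The block shape and keep ratio are placeholders:

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep_ratio=0.1):
    """block: (n_electrodes, n_samples). Keep the top-magnitude DCT coefficients."""
    C = dctn(block, norm='ortho')
    thr = np.quantile(np.abs(C), 1.0 - keep_ratio)
    C[np.abs(C) < thr] = 0.0               # significant-coefficient zero forcing
    return C                               # sparse array, ready for entropy coding

def reconstruct_block(C):
    return idctn(C, norm='ortho')
```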
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cooperative use of parallel processing with time or frequency-domain filtering for shape recognition.\n \n \n \n \n\n\n \n Graca, C.; Falcao, G.; Kumar, S.; and Figueiredo, I. N.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2085-2089, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"CooperativePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952757,\n  author = {C. Graca and G. Falcao and S. Kumar and I. N. Figueiredo},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Cooperative use of parallel processing with time or frequency-domain filtering for shape recognition},\n  year = {2014},\n  pages = {2085-2089},\n  abstract = {For many computer vision applications, detection of blobs and/or tubular structures in images are of great importance. In this paper, we have developed a parallel signal processing framework for speeding up the detection of blob and tubular objects in images. We identified filtering procedure as being responsible for up to 98% of the global processing time, in the used blob or tubular detector functions. We show that after a certain dimension of the filter it is beneficial to combine frequency-domain techniques with parallel processing to develop faster signal processing algorithms. The proposed framework is applied to medical wireless capsule endoscopy (WCE) images, where blob and/or tubular detectors are useful in distinguishing between abnormal and normal images.},\n  keywords = {computer vision;endoscopes;filtering theory;frequency-domain analysis;medical image processing;time-domain analysis;frequency-domain filtering;time-domain filtering;shape recognition;computer vision applications;parallel signal processing framework;tubular objects;blob objects;identified filtering procedure;medical wireless capsule endoscopy images;WCE images;Graphics processing units;Frequency-domain analysis;Detectors;Filtering;Instruction sets;Biomedical imaging;Time-domain analysis;Object shape recognition;Convolution;Frequency-domain filtering;Parallel processing;Wireless capsule endoscopy},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924255.pdf},\n}\n\n
\n
\n\n\n
\n For many computer vision applications, the detection of blobs and/or tubular structures in images is of great importance. In this paper, we develop a parallel signal processing framework for speeding up the detection of blob and tubular objects in images. We identified the filtering procedure as responsible for up to 98% of the overall processing time in the blob and tubular detector functions used. We show that beyond a certain filter size it is beneficial to combine frequency-domain techniques with parallel processing to develop faster signal processing algorithms. The proposed framework is applied to medical wireless capsule endoscopy (WCE) images, where blob and/or tubular detectors are useful in distinguishing between abnormal and normal images.\n
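The time-versus-frequency-domain trade-off is easy to reproduce: direct 2-D convolution costs grow with the kernel area, while FFT-based convolution is largely insensitive to kernel size. A small sketch with scipy (the image and kernel sizes are illustrative):

```python
import numpy as np
from scipy.signal import convolve, fftconvolve

rng = np.random.default_rng(0)
image = rng.standard_normal((512, 512))
kernel = rng.standard_normal((31, 31))             # a 'large' detector filter

direct = convolve(image, kernel, mode='same')      # cost grows with kernel area
via_fft = fftconvolve(image, kernel, mode='same')  # cost roughly independent of it
assert np.allclose(direct, via_fft)                # same result, different cost
```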
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automated detection of sleep apnea and hypopnea events based on robust airflow envelope tracking.\n \n \n \n \n\n\n \n Ciolek, M.; Niedźwiecki, M.; Sieklicki, S.; Drozdowski, J.; and Siebert, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2090-2094, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AutomatedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952758,\n  author = {M. Ciolek and M. Niedźwiecki and S. Sieklicki and J. Drozdowski and J. Siebert},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Automated detection of sleep apnea and hypopnea events based on robust airflow envelope tracking},\n  year = {2014},\n  pages = {2090-2094},\n  abstract = {The paper presents a new approach to detection of apnea/hypopnea events, in the presence of artifacts and breathing irregularities, from a single-channel airflow record. The proposed algorithm identifies segments of signal affected by a high amplitude modulation corresponding to apnea/hypopnea events. It is shown that a robust airflow envelope-free of breathing artifacts-improves effectiveness of the diagnostic process and allows one to localize the beginning and the end of each episode more accurately. The performance of the approach, evaluated on 30 overnight polysomnographic recordings, was assessed using diagnostic measures such as accuracy, sensitivity, specificity, and Cohen's coefficient of agreement; achieving 95%, 90%, 96%, and 0.82, respectively.},\n  keywords = {amplitude modulation;medical signal detection;patient diagnosis;pneumodynamics;sleep;automated detection;sleep apnea event detection;hypopnea event detection;robust airflow envelope tracking;breathing irregularity;single-channel airflow record;high amplitude modulation;breathing artifacts;diagnostic process;polysomnographic recording;Sleep apnea;Transforms;Robustness;Envelope detectors;Finite impulse response filters;Medical diagnostic imaging;Apnea;envelope detectors;median filters},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924979.pdf},\n}\n\n
\n
\n\n\n
\n The paper presents a new approach to the detection of apnea/hypopnea events, in the presence of artifacts and breathing irregularities, from a single-channel airflow record. The proposed algorithm identifies segments of the signal affected by a high amplitude modulation corresponding to apnea/hypopnea events. It is shown that a robust airflow envelope, free of breathing artifacts, improves the effectiveness of the diagnostic process and allows one to localize the beginning and the end of each episode more accurately. The performance of the approach, evaluated on 30 overnight polysomnographic recordings, was assessed using diagnostic measures such as accuracy, sensitivity, specificity, and Cohen's coefficient of agreement, achieving 95%, 90%, 96%, and 0.82, respectively.\n
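A minimal sketch of the envelope-based detection idea: a median-filtered Hilbert envelope, a relative amplitude-drop threshold, and a minimum episode duration. All three constants are placeholders rather than the paper's tuned values:

```python
import numpy as np
from scipy.signal import hilbert, medfilt

def apnea_candidates(airflow, fs, drop=0.3, min_dur=10.0):
    env = np.abs(hilbert(airflow))                     # raw amplitude envelope
    env = medfilt(env, kernel_size=2 * int(fs) + 1)    # robust to artifact spikes
    low = env < drop * np.median(env)                  # deep amplitude-modulation drop
    edges = np.diff(np.r_[0, low.astype(int), 0])      # pad so starts/ends pair up
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    return [(s / fs, e / fs) for s, e in zip(starts, ends)
            if (e - s) / fs >= min_dur]                # episodes in seconds
```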
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian spatiotemporal segmentation of combined PET-CT data using a bivariate poisson mixture model.\n \n \n \n \n\n\n \n Irace, Z.; and Batatia, H.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2095-2099, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952759,\n  author = {Z. Irace and H. Batatia},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian spatiotemporal segmentation of combined PET-CT data using a bivariate poisson mixture model},\n  year = {2014},\n  pages = {2095-2099},\n  abstract = {This paper presents an unsupervised algorithm for the joint segmentation of 4-D PET-CT images. The proposed method is based on a bivariate-Poisson mixture model to represent the bimodal data. A Bayesian framework is developed to label the voxels as well as jointly estimate the parameters of the mixture model. A generalized four-dimensional Potts-Markov Random Field (MRF) has been incorporated into the method to represent the spatio-temporal coherence of the mixture components. The method is successfully applied to 4-D registered PET-CT data of a patient with lung cancer. Results show that the proposed model fits accurately the data and allows the segmentation of different tissues and the identification of tumors in temporal series.},\n  keywords = {Bayes methods;cancer;computerised tomography;image representation;image segmentation;lung;Markov processes;medical image processing;mixture models;positron emission tomography;spatiotemporal phenomena;tumours;Bayesian spatiotemporal segmentation;combined PET-CT data;unsupervised algorithm;joint 4D PET-CT image segmentation;bivariate-Poisson mixture model;bimodal data representation;voxel labelling;joint parameter estimation;generalized four-dimensional Potts-Markov random field;MRF;spatio-temporal coherence representation;4D registered PET-CT data;lung cancer patient;tissue segmentation;tumor identification;temporal series;Positron emission tomography;Computed tomography;Image segmentation;Data models;Tumors;Bayes methods;Lungs;multimodality;data fusion;4-D segmentation;PET-CT;bivariate Poisson distribution},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926967.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents an unsupervised algorithm for the joint segmentation of 4-D PET-CT images. The proposed method is based on a bivariate Poisson mixture model to represent the bimodal data. A Bayesian framework is developed to label the voxels as well as jointly estimate the parameters of the mixture model. A generalized four-dimensional Potts-Markov Random Field (MRF) has been incorporated into the method to represent the spatio-temporal coherence of the mixture components. The method is successfully applied to 4-D registered PET-CT data of a patient with lung cancer. Results show that the proposed model fits the data accurately and allows the segmentation of different tissues and the identification of tumors in temporal series.\n
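The bivariate Poisson distribution underlying each mixture component can be built by the common-shock construction, which makes the shared rate equal to the covariance between the two modalities. A minimal sampling sketch:

```python
import numpy as np

def bivariate_poisson(lam1, lam2, lam0, size, rng=None):
    """Common-shock pair: X = U + W, Y = V + W, so Cov(X, Y) = lam0."""
    rng = rng or np.random.default_rng(0)
    W = rng.poisson(lam0, size)                # shared component
    return rng.poisson(lam1, size) + W, rng.poisson(lam2, size) + W
```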
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Time-frequency kernel design for sparse joint-variable signal representations.\n \n \n \n \n\n\n \n Jokanovic, B.; Amin, M. G.; Zhang, Y. D.; and Ahmad, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2100-2104, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Time-frequencyPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952760,\n  author = {B. Jokanovic and M. G. Amin and Y. D. Zhang and F. Ahmad},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Time-frequency kernel design for sparse joint-variable signal representations},\n  year = {2014},\n  pages = {2100-2104},\n  abstract = {Highly localized quadratic time-frequency distributions cast nonstationary signals as sparse in the joint-variable representations. The linear model relating the ambiguity domain and time-frequency domain permits the application of sparse signal reconstruction techniques to yield high-resolution time-frequency representations. In this paper, we design signal-dependent kernels that enable the resulting time-frequency distribution to meet the two objectives of reduced cross-term interference and increased sparsity. It is shown that, for random undersampling schemes, the new adaptive kernel is superior to traditional reduced interference distribution kernels.},\n  keywords = {interference (signal);signal representation;time-frequency analysis;time-frequency kernel design;sparse joint-variable signal representations;highly localized quadratic time-frequency distributions;nonstationary signals;time-frequency domain;ambiguity domain;signal-dependent kernels;cross-term interference;Kernel;Time-frequency analysis;Interference;Compressed sensing;Signal representation;Optimization;Linear programming;Kernel design;reduced interference distribution;sparse representation;time-frequency analysis},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923333.pdf},\n}\n\n
\n
\n\n\n
\n Highly localized quadratic time-frequency distributions cast nonstationary signals as sparse in the joint-variable representations. The linear model relating the ambiguity domain and time-frequency domain permits the application of sparse signal reconstruction techniques to yield high-resolution time-frequency representations. In this paper, we design signal-dependent kernels that enable the resulting time-frequency distribution to meet the two objectives of reduced cross-term interference and increased sparsity. It is shown that, for random undersampling schemes, the new adaptive kernel is superior to traditional reduced interference distribution kernels.\n
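A fixed (signal-independent) version of the idea can be sketched directly: form the ambiguity function, attenuate it away from the origin where cross-terms concentrate, and transform back. The Gaussian kernel below stands in for the paper's adaptive, sparsity-promoting design:

```python
import numpy as np

def kernel_tfd(x, sigma=0.1):
    """Quadratic TFD with a Gaussian low-pass kernel in the ambiguity domain."""
    N = len(x)
    R = np.zeros((N, N), dtype=complex)               # instantaneous autocorrelation
    for m in range(-(N // 2), N // 2 + 1):
        n = np.arange(abs(m), N - abs(m))
        R[n, m % N] = x[n + m] * np.conj(x[n - m])
    A = np.fft.fft(R, axis=0)                         # -> ambiguity (doppler, lag)
    dop = np.fft.fftfreq(N)[:, None]
    lag = np.fft.fftfreq(N)[None, :]
    A *= np.exp(-(dop ** 2 + lag ** 2) / (2.0 * sigma ** 2))  # keep origin region
    return np.fft.ifft(np.fft.fft(A, axis=1), axis=0).real    # back to time-frequency
```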
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Autoregressive models with epsilon-skew-normal innovations.\n \n \n \n \n\n\n \n Bondon, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2105-2109, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AutoregressivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952761,\n  author = {P. Bondon},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Autoregressive models with epsilon-skew-normal innovations},\n  year = {2014},\n  pages = {2105-2109},\n  abstract = {We consider the problem of modelling asymmetric near-Gaussian correlated signals by autoregressive models with epsilon-skew normal innovations. Moments and maximum likelihood estimators of the parameters are proposed and their limit distributions are derived. Monte Carlo simulation results are analyzed and the model is fitted to a real time series.},\n  keywords = {autoregressive moving average processes;maximum likelihood estimation;Monte Carlo methods;autoregressive models;epsilon-skew-normal innovations;asymmetric near-Gaussian correlated signals;maximum likelihood estimators;Monte Carlo simulation;Technological innovation;Maximum likelihood estimation;Data models;Time series analysis;Covariance matrices;Random variables;Mathematical model;Non-Gaussian;skewness;autoregressive model;maximum likelihood estimation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926201.pdf},\n}\n\n
\n
\n\n\n
\n We consider the problem of modelling asymmetric near-Gaussian correlated signals by autoregressive models with epsilon-skew normal innovations. Moments and maximum likelihood estimators of the parameters are proposed and their limit distributions are derived. Monte Carlo simulation results are analyzed and the model is fitted to a real time series.\n
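A minimal simulation sketch, using the Mudholkar-Hutson construction of epsilon-skew-normal draws to drive an AR(1) recursion; phi, sigma, and eps are illustrative values, and the estimators themselves are not reproduced here:

```python
import numpy as np

def esn_sample(n, sigma=1.0, eps=0.3, rng=None):
    """Epsilon-skew-normal draws: mass (1+eps)/2 below the mode, (1-eps)/2 above."""
    rng = rng or np.random.default_rng(0)
    z = np.abs(rng.standard_normal(n))
    left = rng.random(n) < (1.0 + eps) / 2.0
    return np.where(left, -sigma * (1.0 + eps) * z, sigma * (1.0 - eps) * z)

def ar1_esn(n, phi=0.7, **kwargs):
    e = esn_sample(n, **kwargs)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]          # AR(1) with skewed innovations
    return x
```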
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Parametric estimation of multi-line parameters based on slide algorithm.\n \n \n \n \n\n\n \n Djukanović, S.; Simeunović, M.; and Djurović, I.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2110-2114, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"ParametricPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952762,\n  author = {S. Djukanović and M. Simeunović and I. Djurović},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Parametric estimation of multi-line parameters based on slide algorithm},\n  year = {2014},\n  pages = {2110-2114},\n  abstract = {The subspace-based line detection (SLIDE) algorithm enables the estimation of parameters of multiple lines within a digital image by mapping these lines to frequency modulated (FM) signals. In this paper, we consider the estimation of such obtained FM signals by using estimators developed for polynomial-phase signals (PPSs). For this purpose, a recently proposed method that combines the cubic phase function (CPF) and high-order ambiguity function (HAF), referred to as the product CPF-HAF (PCPF-HAF), has been used. Simulations show that the PCPF-HAF-based estimator is more accurate than the estimators based on time-frequency representations.},\n  keywords = {image processing;parameter estimation;polynomial approximation;time frequency representations;polynomial phase signals;frequency modulated signals;digital image;subspace based line detection;slide algorithm;multi line parameters;parametric estimation;Estimation;Frequency modulation;Polynomials;Frequency estimation;Time-frequency analysis;Transforms;Accuracy;SLIDE;line estimation;polynomial-phase signal;PHAF;PCPF-HAF},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926215.pdf},\n}\n\n
\n
\n\n\n
\n The subspace-based line detection (SLIDE) algorithm enables the estimation of the parameters of multiple lines within a digital image by mapping these lines to frequency modulated (FM) signals. In this paper, we consider the estimation of the resulting FM signals by using estimators developed for polynomial-phase signals (PPSs). For this purpose, a recently proposed method that combines the cubic phase function (CPF) and the high-order ambiguity function (HAF), referred to as the product CPF-HAF (PCPF-HAF), has been used. Simulations show that the PCPF-HAF-based estimator is more accurate than estimators based on time-frequency representations.\n
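The SLIDE mapping itself is compact: each image column is projected onto a complex exponential in the row index, so a straight line becomes a constant-frequency cisoid whose frequency encodes the slope. A minimal sketch, with mu as a free design constant:

```python
import numpy as np

def slide_signal(image, mu=0.1):
    """Project each column onto exp(j*mu*y): a line y = a*x + b maps to the
    cisoid exp(j*mu*(a*x + b)), whose frequency encodes the slope a."""
    ys = np.arange(image.shape[0])[:, None]
    return (image * np.exp(1j * mu * ys)).sum(axis=0)   # one sample per column
```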
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-lag phase space representations for transient signals characterization.\n \n \n \n \n\n\n \n Bernard, C.; Petrut, T.; Vasile, G.; and Ioana, C.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2115-2119, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-lagPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952763,\n  author = {C. Bernard and T. Petrut and G. Vasile and C. Ioana},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-lag phase space representations for transient signals characterization},\n  year = {2014},\n  pages = {2115-2119},\n  abstract = {Transient signals are very difficult to characterize due to their short duration and their wide frequency content. Various methods such as spectrogram and wavelet decomposition have already been extensively used in the literature to detect them, but show limits when it comes to near similar transients discrimination. In this paper, we propose the multi-lag phase space analysis as a way to characterize them. This data-driven method enables the comparison between features extracted from two different signals. In an example, we compare the multi-lag phase space representations of three similar transients and show that common features can be found to discriminate them. Finally the results are compared with a wavelet decomposition.},\n  keywords = {feature extraction;phase space methods;signal detection;time-frequency analysis;multilag phase space representations;transient signals characterization;spectrogram;wavelet decomposition;data driven method;feature extraction;Transient analysis;Trajectory;Feature extraction;Time-frequency analysis;Signal processing;Delay effects;Market research;Phase space representation;Transients;Recurrence},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921135.pdf},\n}\n\n
\n
\n\n\n
\n Transient signals are very difficult to characterize due to their short duration and their wide frequency content. Various methods such as the spectrogram and wavelet decomposition have already been used extensively in the literature to detect them, but show limitations when it comes to discriminating nearly similar transients. In this paper, we propose multi-lag phase space analysis as a way to characterize them. This data-driven method enables the comparison of features extracted from two different signals. In an example, we compare the multi-lag phase space representations of three similar transients and show that common features can be found to discriminate them. Finally, the results are compared with a wavelet decomposition.\n
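The underlying representation is a delay (phase-space) embedding evaluated at several lags. A minimal sketch of the embedding; the lag and dimension are placeholders:

```python
import numpy as np

def phase_space(x, lag=5, dim=3):
    """Delay embedding: rows are [x(n), x(n+lag), ..., x(n+(dim-1)*lag)]."""
    n = len(x) - (dim - 1) * lag
    return np.stack([x[i * lag: i * lag + n] for i in range(dim)], axis=1)
```

Comparing the trajectories returned for several values of lag gives the multi-lag picture the abstract refers to.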
Estimation of large Toeplitz covariance matrices and application to source detection. Vinogradova, J.; Couillet, R.; and Hachem, W. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2120-2124, Sep. 2014.
@InProceedings{6952764,\n  author = {J. Vinogradova and R. Couillet and W. Hachem},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Estimation of large Toeplitz covariance matrices and application to source detection},\n  year = {2014},\n  pages = {2120-2124},\n  abstract = {In this paper, performance results of two types of Toeplitz covariance matrix estimators are provided. Concentration inequalities for the spectral norm for both estimators have been derived showing exponential convergence of the error to zero. It is shown that the same rates of convergence are obtained in the case where the aggregated matrix of time samples is corrupted by a rank one matrix. As an application based on this model, source detection by a large dimensional sensor array with temporally correlated noise is studied.},\n  keywords = {array signal processing;covariance matrices;Toeplitz covariance matrix estimators;concentration inequalities;large dimensional sensor array;correlated noise;source detection;Abstracts;Gold;Toeplitz covariance matrix;concentration inequalities;correlated noise;source detection},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922371.pdf},\n}\n\n
\n In this paper, performance results of two types of Toeplitz covariance matrix estimators are provided. Concentration inequalities for the spectral norm for both estimators have been derived showing exponential convergence of the error to zero. It is shown that the same rates of convergence are obtained in the case where the aggregated matrix of time samples is corrupted by a rank one matrix. As an application based on this model, source detection by a large dimensional sensor array with temporally correlated noise is studied.\n
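As a rough illustration of Toeplitzification, the sketch below averages the sample covariance along its diagonals; this is a standard estimator of this family, not necessarily the exact one analyzed in the paper, and the AR(1)-style covariance used for testing is an assumption:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_covariance(X):
    """Average the sample covariance of X (N samples x M dims) along its
    diagonals to produce a symmetric Toeplitz covariance estimate."""
    N, M = X.shape
    R = X.T @ X / N                                     # sample covariance
    r = np.array([np.diag(R, k).mean() for k in range(M)])
    return toeplitz(r)

# Test on a stationary process with true covariance rho^|i-j| (assumed model).
rng = np.random.default_rng(0)
M, N, rho = 20, 100, 0.6
Sigma = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
X = rng.standard_normal((N, M)) @ np.linalg.cholesky(Sigma).T
print("plain SCM error  :", np.linalg.norm(X.T @ X / N - Sigma, 2))
print("Toeplitzified err:", np.linalg.norm(toeplitz_covariance(X) - Sigma, 2))
```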
A no-reference audio-visual video quality metric. Martinez, H. B.; and Farias, M. C. Q. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2125-2129, Sep. 2014.
@InProceedings{6952765,\n  author = {H. B. Martinez and M. C. Q. Farias},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A no-reference audio-visual video quality metric},\n  year = {2014},\n  pages = {2125-2129},\n  abstract = {Three psychophysical experiments were carried out to understand how the audio and video components interact and affect the overall audio-visual quality. In the experiments, subjects independently evaluated the perceived quality of (1) video (without audio), (2) audio (without video), and (3) video with audio. With the help of the perceptual models obtained using subjective data, we propose three no-reference audio-visual quality metrics, each composed of a combination function of a video and an audio quality metric. The no-reference video quality metric consists of a blockiness and a blurriness metric, while the NR audio metric is a modification of the SESQA metric. When tested on our database and on a public database, the metrics performed better than single video NR and RF metrics available in the literature.},\n  keywords = {audio-visual systems;video signal processing;no-reference audio-visual video quality metric;blurriness metrics;blockiness metric;NR audio metric;SESQA metric;Quality assessment;Video recording;Bit rate;Correlation;Speech;Databases;video quality assessment;quality metrics;audio-visual},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925073.pdf},\n}\n\n
Three psychophysical experiments were carried out to understand how the audio and video components interact and affect the overall audio-visual quality. In the experiments, subjects independently evaluated the perceived quality of (1) video (without audio), (2) audio (without video), and (3) video with audio. With the help of the perceptual models obtained using subjective data, we propose three no-reference audio-visual quality metrics, each composed of a combination function of a video and an audio quality metric. The no-reference video quality metric consists of a blockiness and a blurriness metric, while the NR audio metric is a modification of the SESQA metric. When tested on our database and on a public database, the metrics performed better than single video NR and RF metrics available in the literature.
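For a feel of the ingredients, here is a crude blockiness feature of the kind such no-reference video metrics are built from; it is an illustrative stand-in, not the blockiness metric actually used in the paper, and the 8-pixel block size is an assumption:

```python
import numpy as np

def blockiness(gray, B=8):
    """Ratio of the mean absolute luminance jump across B x B block
    boundaries to the mean jump elsewhere; values well above 1 suggest
    visible blocking artifacts (crude illustrative feature)."""
    dh = np.abs(np.diff(gray.astype(float), axis=1))    # horizontal steps
    at_boundaries = dh[:, B - 1::B]                     # steps on block edges
    inside = np.delete(dh, np.s_[B - 1::B], axis=1)
    return at_boundaries.mean() / (inside.mean() + 1e-12)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64)).astype(float)
print(blockiness(img))            # close to 1.0 for block-free noise
```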
Robustness and prediction accuracy of Machine Learning for objective visual quality assessment. Hines, A.; Kendrick, P.; Barri, A.; Narwaria, M.; and Redi, J. A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2130-2134, Sep. 2014.
@InProceedings{6952766,\n  author = {A. Hines and P. Kendrick and A. Barri and M. Narwaria and J. A. Redi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Robustness and prediction accuracy of Machine Learning for objective visual quality assessment},\n  year = {2014},\n  pages = {2130-2134},\n  abstract = {Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specifically when the feature set adopted for prediction is suboptimal. A Principal Component Regression based algorithm and a Feed Forward Neural Network are compared when pooling the Structural Similarity Index (SSIM) features perturbed with noise. The neural network adapts better with noise and intrinsically favours features according to their salient content.},\n  keywords = {feedforward neural nets;image processing;learning (artificial intelligence);principal component analysis;regression analysis;objective visual quality assessment metrics;substitute model;perceptual mechanisms;visual quality appreciation;ML-based techniques;feature set;principal component regression based algorithm;feed forward neural network;structural similarity index features;SSIM features;salient content;prediction accuracy;Noise;Sensitivity;Quality assessment;Image quality;Noise level;Noise measurement;image quality assessment;SSIM;neural networks;machine learning},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923531.pdf},\n}\n\n
\n Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specifically when the feature set adopted for prediction is suboptimal. A Principal Component Regression based algorithm and a Feed Forward Neural Network are compared when pooling the Structural Similarity Index (SSIM) features perturbed with noise. The neural network adapts better with noise and intrinsically favours features according to their salient content.\n
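A minimal numpy sketch of one of the two pooling schemes compared, Principal Component Regression (the feed-forward network is the other); the synthetic features and noise level below stand in for the perturbed SSIM features and are assumptions:

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal Component Regression: project the (centered) features on
    the top-k principal components, then solve least squares there."""
    mu, b0 = X.mean(axis=0), y.mean()
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Vk = Vt[:k].T                               # top-k loadings
    beta_k, *_ = np.linalg.lstsq(Xc @ Vk, y - b0, rcond=None)
    return Vk @ beta_k, mu, b0                  # weights in feature space

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)
Xn = X + 0.3 * rng.standard_normal(X.shape)     # perturbed features
beta, mu, b0 = pcr_fit(Xn, y, k=5)
y_hat = (Xn - mu) @ beta + b0
print("correlation with ground truth:", np.corrcoef(y, y_hat)[0, 1])
```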
EEG correlates during video quality perception. Kroupi, E.; Hanhart, P.; Lee, J.; Rerabek, M.; and Ebrahimi, T. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2135-2139, Sep. 2014.
@InProceedings{6952767,\n  author = {E. Kroupi and P. Hanhart and J. Lee and M. Rerabek and T. Ebrahimi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {EEG correlates during video quality perception},\n  year = {2014},\n  pages = {2135-2139},\n  abstract = {Understanding Quality of Experience (QoE) in various multimedia contents is still challenging. In this paper, we investigate the way QoE affects brain oscillations captured by electroencephalography (EEG). In particular, sixteen subjects watched 2D and 3D videos of various quality levels while their EEG signals were recorded, and were asked to provide their self-assessed perceived quality ratings for each video. EEG signals were decomposed into six frequency bands, namely theta, alpha, beta low, beta middle, beta high and gamma bands. The results revealed frontal asymmetry patterns in the alpha band, which correspond to right frontal activation when perceived quality is low. This finding implies that perceived high quality may be related to positive emotional processes.},\n  keywords = {electroencephalography;medical image processing;quality of service;video signal processing;EEG signal correlation;video quality perception;quality of experience;QoE;multimedia contents;brain oscillations;electroencephalography;3D videos;watched 2D videos;self-assessed perceived quality ratings;gamma frequency bands;beta high frequency bands;theta frequency bands;alpha frequency bands;beta low frequency bands;beta middle frequency bands;frontal asymmetry patterns;right frontal activation;positive emotional processes;Electroencephalography;Three-dimensional displays;Video sequences;Rendering (computer graphics);Quality assessment;Feature extraction;Correlation;EEG;QoE;frontal asymmetry},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924931.pdf},\n}\n\n
\n Understanding Quality of Experience (QoE) in various multimedia contents is still challenging. In this paper, we investigate the way QoE affects brain oscillations captured by electroencephalography (EEG). In particular, sixteen subjects watched 2D and 3D videos of various quality levels while their EEG signals were recorded, and were asked to provide their self-assessed perceived quality ratings for each video. EEG signals were decomposed into six frequency bands, namely theta, alpha, beta low, beta middle, beta high and gamma bands. The results revealed frontal asymmetry patterns in the alpha band, which correspond to right frontal activation when perceived quality is low. This finding implies that perceived high quality may be related to positive emotional processes.\n
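A small scipy sketch of the frontal alpha-asymmetry feature referred to above (log alpha power on the right minus the left); the band edges, sampling rate, and sign convention are common choices assumed here, not taken from the paper:

```python
import numpy as np
from scipy.signal import welch

def alpha_asymmetry(left, right, fs):
    """log(alpha power, right channel) - log(alpha power, left channel),
    with alpha power integrated over 8-13 Hz from a Welch periodogram."""
    def alpha_power(x):
        f, pxx = welch(x, fs=fs, nperseg=2 * fs)
        band = (f >= 8) & (f <= 13)
        return np.trapz(pxx[band], f[band])
    return np.log(alpha_power(right)) - np.log(alpha_power(left))

fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
left = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
right = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(alpha_asymmetry(left, right, fs))   # negative: weaker alpha on the right
```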
Single exposure vs tone mapped High Dynamic Range images: A study based on quality of experience. Narwaria, M.; Da Silva, M. P.; Le Callet, P.; and Pepion, R. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2140-2144, Sep. 2014.
@InProceedings{6952768,\n  author = {M. Narwaria and M. P. {Da Silva} and P. {Le Callet} and R. Pepion},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Single exposure vs tone mapped High Dynamic Range images: A study based on quality of experience},\n  year = {2014},\n  pages = {2140-2144},\n  abstract = {Tone mapping operators (TMOs), employed to fit the dynamic range of High Dynamic Range (HDR) visual signals to that of the display, are generally non-transparent and modify the visual appearance of the scene. Despite this, tone mapped content generally tends to have more visual details as compared to a single exposure scene. It is however not clear if the extra details in tone mapped HDR affect user preferences over a single exposure content in terms of scene appearance and to what extent. This paper aims to shed light on this issue via a comprehensive subjective study. Our results reveal that there is no statistical evidence to establish if the users preferred tone mapped content over the single exposure version as closer representation of the corresponding HDR scene. We present those results as well as outline the possible factors contributing to this somewhat unexpected finding.},\n  keywords = {cartography;image representation;quality of experience;single exposure;tone mapped high dynamic range image;quality of experience;tone mapping operators;high dynamic range visual signals;HDR scene;Observers;Visualization;Image color analysis;Dynamic range;Statistical analysis;Lighting;Context;Quality of Experience (QoE);High Dynamic Range (HDR);tone mapping},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924753.pdf},\n}\n\n
\n Tone mapping operators (TMOs), employed to fit the dynamic range of High Dynamic Range (HDR) visual signals to that of the display, are generally non-transparent and modify the visual appearance of the scene. Despite this, tone mapped content generally tends to have more visual details as compared to a single exposure scene. It is however not clear if the extra details in tone mapped HDR affect user preferences over a single exposure content in terms of scene appearance and to what extent. This paper aims to shed light on this issue via a comprehensive subjective study. Our results reveal that there is no statistical evidence to establish if the users preferred tone mapped content over the single exposure version as closer representation of the corresponding HDR scene. We present those results as well as outline the possible factors contributing to this somewhat unexpected finding.\n
Subjective evaluation of 3D video enhancement algorithm. Battisti, F.; Carli, M.; and Neri, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2145-2149, Sep. 2014.
@InProceedings{6952769,\n  author = {F. Battisti and M. Carli and A. Neri},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Subjective evaluation of 3D video enhancement algorithm},\n  year = {2014},\n  pages = {2145-2149},\n  abstract = {In this contribution the subjective evaluation of a 3D enhancement algorithm is presented. In the proposed scheme, perceptually significant features are enhanced or attenuated according to their saliency and to the masking effects induced by textured background. In particular, for each frame we consider the high frequency components, i.e., the edges, as relevant features in the edge complex wavelet domain computed by the first order dyadic Gauss-Laguerre Circular Harmonic Wavelet decomposition. The saliency is assessed by evaluating both disparity map and motion vectors extracted from the 3D videos. The effectiveness of the proposed approach has been verified by means of subjective tests.},\n  keywords = {image enhancement;video signal processing;wavelet transforms;subjective evaluation;3D video enhancement algorithm;edge complex wavelet domain;first order dyadic Gauss-Laguerre circular harmonic wavelet decomposition;disparity map;motion vectors;Three-dimensional displays;Bandwidth;Visualization;Optical imaging;Organizations;Image edge detection;Wavelet domain;Video enhancement;stereo;subjective quality;Laguerre Gauss},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921269.pdf},\n}\n\n
\n In this contribution the subjective evaluation of a 3D enhancement algorithm is presented. In the proposed scheme, perceptually significant features are enhanced or attenuated according to their saliency and to the masking effects induced by textured background. In particular, for each frame we consider the high frequency components, i.e., the edges, as relevant features in the edge complex wavelet domain computed by the first order dyadic Gauss-Laguerre Circular Harmonic Wavelet decomposition. The saliency is assessed by evaluating both disparity map and motion vectors extracted from the 3D videos. The effectiveness of the proposed approach has been verified by means of subjective tests.\n
Low-complexity linear precoding for multi-cell massive MIMO systems. Kammoun, A.; Müller, A.; Björnson, E.; and Debbah, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2150-2154, Sep. 2014.
@InProceedings{6952770,\n  author = {A. Kammoun and A. Müller and E. Björnson and M. Debbah},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Low-complexity linear precoding for multi-cell massive MIMO systems},\n  year = {2014},\n  pages = {2150-2154},\n  abstract = {Massive MIMO (multiple-input multiple-output) has been recognized as an efficient solution to improve the spectral efficiency of future communication systems. However, increasing the number of antennas and users goes hand-in-hand with increasing computational complexity. In particular, the precoding design becomes involved since near-optimal precoding, such as regularized-zero forcing (RZF), requires the inversion of a large matrix. In our previous work [1] we proposed to solve this issue in the single-cell case by approximating the matrix inverse by a truncated polynomial expansion (TPE), where the polynomial coefficients are selected for optimal system performance. In this paper, we generalize this technique to multi-cell scenarios. While the optimization of the RZF precoding has, thus far, not been feasible in multi-cell systems, we show that the proposed TPE precoding can be optimized to maximize the weighted max-min fairness. Using simulations, we compare the proposed TPE precoding with RZF and show that our scheme can achieve higher throughput using a TPE order of only 3.},\n  keywords = {computational complexity;linear codes;matrix inversion;MIMO communication;minimax techniques;precoding;low-complexity linear precoding;multicell massive MIMO systems;multiple-input multiple-output systems;spectral efficiency improvement;computational complexity;communication systems;near-optimal precoding;regularized-zero forcing;RZF;matrix inversion;single-cell case;truncated polynomial expansion;optimal system performance;weighted max-min fairness;TPE order;Interference;MIMO;Antennas;Polynomials;Covariance matrices;Signal to noise ratio;Optimization;Massive MIMO;linear precoding;low complexity;multi-cell systems;random matrix theory},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922419.pdf},\n}\n\n
\n Massive MIMO (multiple-input multiple-output) has been recognized as an efficient solution to improve the spectral efficiency of future communication systems. However, increasing the number of antennas and users goes hand-in-hand with increasing computational complexity. In particular, the precoding design becomes involved since near-optimal precoding, such as regularized-zero forcing (RZF), requires the inversion of a large matrix. In our previous work [1] we proposed to solve this issue in the single-cell case by approximating the matrix inverse by a truncated polynomial expansion (TPE), where the polynomial coefficients are selected for optimal system performance. In this paper, we generalize this technique to multi-cell scenarios. While the optimization of the RZF precoding has, thus far, not been feasible in multi-cell systems, we show that the proposed TPE precoding can be optimized to maximize the weighted max-min fairness. Using simulations, we compare the proposed TPE precoding with RZF and show that our scheme can achieve higher throughput using a TPE order of only 3.\n
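A minimal numpy sketch of the idea behind TPE precoding: replace the matrix inverse of RZF by a truncated matrix polynomial. Plain Neumann-series weights are used below as a simple convergent choice (the paper instead optimizes the polynomial coefficients, which is what makes a TPE order of 3 sufficient); the matrix shapes and the regularizer alpha are assumptions:

```python
import numpy as np

def tpe_precoder(H, alpha, J):
    """Approximate the RZF precoder W = (H^H H + alpha*I)^(-1) H^H with a
    degree-J matrix polynomial via the Neumann series
    A^(-1) ~= (1/nu) * sum_{l=0..J} (I - A/nu)^l, with nu >= lambda_max(A)."""
    K = H.shape[1]
    A = H.conj().T @ H + alpha * np.eye(K)
    nu = np.trace(A).real                  # safe eigenvalue bound (A is PSD)
    B = np.eye(K) - A / nu
    S, T = np.eye(K), np.eye(K)
    for _ in range(J):
        T = T @ B                          # powers of B, no inversion needed
        S = S + T
    return (S / nu) @ H.conj().T

rng = np.random.default_rng(0)
H = (rng.standard_normal((64, 16)) + 1j * rng.standard_normal((64, 16))) / np.sqrt(2)
W_rzf = np.linalg.solve(H.conj().T @ H + 0.1 * np.eye(16), H.conj().T)
for J in (3, 10, 50):                      # error decreases with the TPE order
    W = tpe_precoder(H, 0.1, J)
    print(J, np.linalg.norm(W - W_rzf) / np.linalg.norm(W_rzf))
```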
Robust G-MUSIC. Couillet, R.; and Kammoun, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2155-2159, Sep. 2014.
@InProceedings{6952771,\n  author = {R. Couillet and A. Kammoun},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Robust G-MUSIC},\n  year = {2014},\n  pages = {2155-2159},\n  abstract = {An improved MUSIC algorithm for direction-of-arrival estimation is introduced that accounts both for large array sizes N comparatively with the number of independent observations n and for the impulsiveness of the background environment (e.g., presence of outliers in the observations). This method derives from the spiked G-MUSIC algorithm proposed in [1] and from the recent works by one of the authors on the random matrix analysis of robust scatter matrix estimators [2]. The method is shown to be asymptotically consistent where classical approaches are not. This superiority is corroborated by simulations.},\n  keywords = {array signal processing;direction-of-arrival estimation;signal classification;robust G-MUSIC algorithm;direction-of-arrival estimation;background environment;spiked G-MUSIC algorithm;random matrix analysis;robust scatter matrix estimators;Eigenvalues and eigenfunctions;Robustness;Multiple signal classification;Estimation;Covariance matrices;Noise;Arrays;Random matrix theory;MUSIC;robust estimation;elliptical distribution},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917271.pdf},\n}\n\n
\n An improved MUSIC algorithm for direction-of-arrival estimation is introduced that accounts both for large array sizes N comparatively with the number of independent observations n and for the impulsiveness of the background environment (e.g., presence of outliers in the observations). This method derives from the spiked G-MUSIC algorithm proposed in [1] and from the recent works by one of the authors on the random matrix analysis of robust scatter matrix estimators [2]. The method is shown to be asymptotically consistent where classical approaches are not. This superiority is corroborated by simulations.\n
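For reference, the classical (non-robust, fixed-dimension) MUSIC pseudospectrum that the paper improves upon, sketched for a half-wavelength uniform linear array; the robust scatter estimator and the G-MUSIC corrections of the paper are not reproduced here, and the scenario parameters are assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(R, d, grid):
    """Classical MUSIC pseudospectrum from a covariance estimate R with
    d sources, for a half-wavelength uniform linear array."""
    M = R.shape[0]
    _, V = np.linalg.eigh(R)                  # eigenvalues ascending
    En = V[:, : M - d]                        # noise subspace
    A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))
    return 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2

rng = np.random.default_rng(0)
M, N, doas = 10, 200, np.deg2rad([-20.0, 15.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(doas)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
grid = np.deg2rad(np.linspace(-90, 90, 721))
P = music_spectrum(X @ X.conj().T / N, 2, grid)
peaks, _ = find_peaks(P)
print(np.sort(np.rad2deg(grid[peaks[np.argsort(P[peaks])[-2:]]])))  # ~ -20, 15
```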
Asymptotic analysis of a GLRT for detection with large sensor arrays. Hiltunen, S.; Loubaton, P.; and Chevalier, P. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2160-2164, Sep. 2014.
@InProceedings{6952772,\n  author = {S. Hiltunen and P. Loubaton and P. Chevalier},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Asymptotic analysis of a GLRT for detection with large sensor arrays},\n  year = {2014},\n  pages = {2160-2164},\n  abstract = {This paper addresses the performance analysis of two GLRT receivers in the case where the number of sensors M is of the same order of magnitude as the sample size N. In the asymptotic regime where M and N converge towards ∞ at the same rate, the corresponding asymptotic means and variances are characterized using large random matrix theory, and compared to the standard situation where N → +∞ and M is fixed. This asymptotic analysis makes it possible to understand the behavior of the considered receivers, even for relatively small values of N and M.},\n  keywords = {array signal processing;matrix algebra;signal detection;statistical testing;GLRT asymptotic analysis;large sensor arrays;GLRT receiver performance analysis;large random matrix theory;signal detection;generalized likelihood test;Standards;Gaussian distribution;Context;Synchronization;Noise;Training;Covariance matrices;Multichannel detection;asymptotic analysis;large random matrices},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925435.pdf},\n}\n\n
This paper addresses the performance analysis of two GLRT receivers in the case where the number of sensors M is of the same order of magnitude as the sample size N. In the asymptotic regime where M and N converge towards ∞ at the same rate, the corresponding asymptotic means and variances are characterized using large random matrix theory, and compared to the standard situation where N → +∞ and M is fixed. This asymptotic analysis makes it possible to understand the behavior of the considered receivers, even for relatively small values of N and M.
Correlation test for high dimensional data with application to signal detection in sensor networks. Mestre, X.; Vallet, P.; and Hachem, W. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2165-2169, Sep. 2014.
@InProceedings{6952793,\n  author = {X. Mestre and P. Vallet and W. Hachem},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Correlation test for high dimensional data with application to signal detection in sensor networks},\n  year = {2014},\n  pages = {2165-2169},\n  abstract = {The problem of correlation detection of multivariate Gaussian observations is considered. The problem is formulated as a binary hypothesis test, where the null hypothesis corresponds to a diagonal correlation matrix with possibly different diagonal entries, whereas the alternative would be associated to any other form of positive covariance. Using tools from random matrix theory, we study the asymptotic behavior of the Generalized Likelihood Ratio Test (GLRT) under both hypotheses, assuming that both the sample size and the observation dimension tend to infinity at the same rate. It is shown that the GLRT statistic always converges to a Gaussian distribution, although the asymptotic mean and variance will strongly depend on the actual hypothesis. Numerical simulations demonstrate the superiority of the proposed asymptotic description in situations where the sample size is not much larger than the observation dimension.},\n  keywords = {correlation methods;covariance matrices;Gaussian distribution;signal detection;wireless sensor networks;correlation test;high dimensional data;signal detection;sensor networks;multivariate Gaussian observations;binary hypothesis test;null hypothesis;diagonal correlation matrix;diagonal entries;positive covariance;generalized likelihood ratio test;observation dimension;GLRT statistic;Gaussian distribution;asymptotic mean;asymptotic variance;numerical simulations;asymptotic description;Correlation;Approximation methods;Covariance matrices;Convergence;Eigenvalues and eigenfunctions;Vectors;Random variables;Hypothesis testing;correlation matrix;random matrix theory;central limit theorem},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925017.pdf},\n}\n\n
The problem of correlation detection of multivariate Gaussian observations is considered. The problem is formulated as a binary hypothesis test, where the null hypothesis corresponds to a diagonal correlation matrix with possibly different diagonal entries, whereas the alternative would be associated to any other form of positive covariance. Using tools from random matrix theory, we study the asymptotic behavior of the Generalized Likelihood Ratio Test (GLRT) under both hypotheses, assuming that both the sample size and the observation dimension tend to infinity at the same rate. It is shown that the GLRT statistic always converges to a Gaussian distribution, although the asymptotic mean and variance will strongly depend on the actual hypothesis. Numerical simulations demonstrate the superiority of the proposed asymptotic description in situations where the sample size is not much larger than the observation dimension.
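A minimal numpy sketch of the GLRT statistic for this test, which reduces to -N log det of the sample correlation matrix; the fixed-dimension chi-square calibration noted in the comment is the textbook asymptotic, whereas the paper's contribution is the corrected Gaussian limit when the dimension grows with the sample size:

```python
import numpy as np

def correlation_glrt(X):
    """GLRT for H0: diagonal covariance (arbitrary diagonal) vs H1: general
    covariance, for rows of X (N x M): T = -N * log det(C), with C the
    sample correlation matrix.  For fixed M, T is approximately chi2 with
    M(M-1)/2 degrees of freedom under H0 as N grows."""
    N, _ = X.shape
    Xc = X - X.mean(axis=0)
    R = Xc.T @ Xc / N
    d = np.sqrt(np.diag(R))
    _, logdet = np.linalg.slogdet(R / np.outer(d, d))
    return -N * logdet

rng = np.random.default_rng(0)
print(correlation_glrt(rng.standard_normal((500, 5))))    # ~ 10 = M(M-1)/2
X = rng.standard_normal((500, 5)); X[:, 1] += 0.5 * X[:, 0]
print(correlation_glrt(X))                                # far in the tail
```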
Fluctuations for linear spectral statistics of large random covariance matrices. Najim, J.; and Yao, J. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2170-2174, Sep. 2014.
@InProceedings{6952794,\n  author = {J. Najim and J. Yao},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Fluctuations for linear spectral statistics of large random covariance matrices},\n  year = {2014},\n  pages = {2170-2174},\n  abstract = {The theory of large random matrices has proved to be an efficient tool to address many problems in wireless communication and statistical signal processing over the last two decades. We provide hereafter a central limit theorem (CLT) for linear spectral statistics of large random covariance matrices, improving Bai and Silverstein's celebrated 2004 result. This fluctuation result should be of interest for studying the fluctuations of important estimators in statistical signal processing.},\n  keywords = {covariance matrices;statistical analysis;linear spectral statistics fluctuation;large random covariance matrices;wireless communication;statistical signal processing;central limit theorem;CLT;Covariance matrices;Eigenvalues and eigenfunctions;Limiting;Convergence;Random variables;Transforms;Large random matrices fluctuations},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925019.pdf},\n}\n\n
The theory of large random matrices has proved to be an efficient tool to address many problems in wireless communication and statistical signal processing over the last two decades. We provide hereafter a central limit theorem (CLT) for linear spectral statistics of large random covariance matrices, improving Bai and Silverstein's celebrated 2004 result. This fluctuation result should be of interest for studying the fluctuations of important estimators in statistical signal processing.
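A small Monte Carlo illustration of the regime this CLT describes: the linear spectral statistic sum_i f(lambda_i) of a sample covariance matrix fluctuates at scale O(1) rather than O(sqrt(M)); f = log and the aspect ratio M/N = 1/2 are assumed for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, trials = 50, 100, 500
stats = np.empty(trials)
for i in range(trials):
    X = rng.standard_normal((M, N))
    lam = np.linalg.eigvalsh(X @ X.T / N)       # sample covariance spectrum
    stats[i] = np.log(lam).sum()                # linear spectral statistic
# The spread stays O(1) as M grows (try M, N = 100, 200), which is the
# hallmark of CLTs of the Bai-Silverstein type discussed above.
print("mean:", stats.mean(), " std:", stats.std())
```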
Segmentation of 3D dynamic meshes based on Reeb graph approach. Hachani, M.; Zaid, A. O.; and Puech, W. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2175-2179, Sep. 2014.
@InProceedings{6952795,\n  author = {M. Hachani and A. O. Zaid and W. Puech},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Segmentation of 3D dynamic meshes based on Reeb graph approach},\n  year = {2014},\n  pages = {2175-2179},\n  abstract = {This paper presents a new segmentation approach, for 3D dynamic meshes, based upon ideas from Morse theory and Reeb graphs. The segmentation process is performed using topological analysis of smooth functions defined on 3D mesh surface. The main idea is to detect critical nodes located on the mobile and immobile parts. Particularly, we define a new continuous scalar function, used for Reeb graph construction. This function is based on the heat diffusion properties. Clusters are obtained according to the values of scalar function while adding a refinement step. The latter is based on curvature information in order to adjust segmentation boundaries. Experimental results performed on 3D dynamic articulated meshes demonstrate the high accuracy and stability under topology changes and various perturbations through time.},\n  keywords = {computational geometry;computer graphics;diffusion;image segmentation;pattern clustering;Morse theory;segmentation process;smooth function topological analysis;3D mesh surface;continuous scalar function;Reeb graph construction;heat diffusion properties;clustering;refinement step;curvature information;3D dynamic articulated meshes;Three-dimensional displays;Shape;Motion segmentation;Heating;Feature extraction;Topology;Geometry;3D dynamic meshes;segmentation;Reeb graph;heat diffusion;curvature information},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917737.pdf},\n}\n\n
\n This paper presents a new segmentation approach, for 3D dynamic meshes, based upon ideas from Morse theory and Reeb graphs. The segmentation process is performed using topological analysis of smooth functions defined on 3D mesh surface. The main idea is to detect critical nodes located on the mobile and immobile parts. Particularly, we define a new continuous scalar function, used for Reeb graph construction. This function is based on the heat diffusion properties. Clusters are obtained according to the values of scalar function while adding a refinement step. The latter is based on curvature information in order to adjust segmentation boundaries. Experimental results performed on 3D dynamic articulated meshes demonstrate the high accuracy and stability under topology changes and various perturbations through time.\n
Human detection and tracking through temporal feature recognition. Coutts, F. K.; Marshall, S.; and Murray, P. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2180-2184, Sep. 2014.
@InProceedings{6952796,\n  author = {F. K. Coutts and S. Marshall and P. Murray},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Human detection and tracking through temporal feature recognition},\n  year = {2014},\n  pages = {2180-2184},\n  abstract = {The ability to accurately track objects of interest - particularly humans - is of great importance in the fields of security and surveillance. In such scenarios, the application of accurate, automated human tracking offers benefits over manual supervision. In this paper, recent efforts made to investigate the improvement of automated human detection and tracking techniques through the recognition of person-specific time-varying signatures in thermal video are detailed. A robust human detection algorithm is developed to aid the initialisation stage of a state-of-the-art existing tracking algorithm. In addition, coupled with the spatial tracking methods present in this algorithm, the inclusion of temporal signature recognition in the tracking process is shown to improve human tracking results.},\n  keywords = {feature extraction;object detection;object tracking;video signal processing;human detection;human tracking;temporal feature recognition;person specific time varying signatures;thermal video;spatial tracking methods;temporal signature recognition;Target tracking;Feature extraction;Algorithm design and analysis;Support vector machines;Principal component analysis;Robustness;Automated human tracking;thermal video;temporal characteristic recognition},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569907081.pdf},\n}\n\n
\n The ability to accurately track objects of interest - particularly humans - is of great importance in the fields of security and surveillance. In such scenarios, the application of accurate, automated human tracking offers benefits over manual supervision. In this paper, recent efforts made to investigate the improvement of automated human detection and tracking techniques through the recognition of person-specific time-varying signatures in thermal video are detailed. A robust human detection algorithm is developed to aid the initialisation stage of a state-of-the-art existing tracking algorithm. In addition, coupled with the spatial tracking methods present in this algorithm, the inclusion of temporal signature recognition in the tracking process is shown to improve human tracking results.\n
Estimation of the weight parameter with SAEM for marked point processes applied to object detection. Boisbunon, A.; and Zerubia, J. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2185-2189, Sep. 2014.
@InProceedings{6952797,\n  author = {A. Boisbunon and J. Zerubia},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Estimation of the weight parameter with SAEM for marked point processes applied to object detection},\n  year = {2014},\n  pages = {2185-2189},\n  abstract = {We consider the problem of estimating one of the parameters of a marked point process, namely the tradeoff parameter between the data and prior energy terms defining the probability density of the process. In previous work, the Stochastic Expectation-Maximization (SEM) algorithm was used. However, SEM is well known for having bad convergence properties, which might also slow down the estimation time. Therefore, in this work, we consider an alternative to SEM: the Stochastic Approximation EM algorithm, which makes efficient use of all the simulated data. We compare both approaches on high resolution satellite images where the objective is to detect boats in a harbor.},\n  keywords = {image resolution;object detection;stochastic processes;weight parameter estimation;SAEM;object detection;marked point process;probability density;stochastic expectation-maximization algorithm;SEM algorithm;stochastic approximation EM algorithm;high resolution satellite images;boat detection;Abstracts;Image processing;object detection;marked point process;Stochastic EM;Stochastic Approximation EM},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569919425.pdf},\n}\n\n
We consider the problem of estimating one of the parameters of a marked point process, namely the tradeoff parameter between the data and prior energy terms defining the probability density of the process. In previous work, the Stochastic Expectation-Maximization (SEM) algorithm was used. However, SEM is well known for having bad convergence properties, which might also slow down the estimation time. Therefore, in this work, we consider an alternative to SEM: the Stochastic Approximation EM algorithm, which makes efficient use of all the simulated data. We compare both approaches on high resolution satellite images where the objective is to detect boats in a harbor.
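A toy numpy comparison of the SEM and SAEM updates on a two-component Gaussian mixture with known weights and variances (a stand-in for the marked-point-process likelihood; the mixture model, burn-in length, and 1/k step schedule are illustrative assumptions). SEM re-fits from a single fresh simulation of the latent labels each iteration, while SAEM smooths the simulated sufficient statistics with a decreasing step:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])

def resp(x, mu):                      # P(z=1 | x), equal weights, unit var
    p0, p1 = (np.exp(-0.5 * (x - m) ** 2) for m in mu)
    return p1 / (p0 + p1)

mu_sem = np.array([-0.5, 0.5])
mu_saem = mu_sem.copy()
s = None                              # SAEM running sufficient statistics
for k in range(1, 201):
    # SEM: simulate labels, then a plain M-step on that one draw
    z = rng.random(x.size) < resp(x, mu_sem)
    mu_sem = np.array([x[~z].mean(), x[z].mean()])
    # SAEM: same simulation, but stochastic approximation of the statistics
    z2 = rng.random(x.size) < resp(x, mu_saem)
    S = np.array([x[~z2].sum(), (~z2).sum(), x[z2].sum(), z2.sum()], float)
    gamma = 1.0 if k <= 50 else 1.0 / (k - 50)     # burn-in, then 1/k steps
    s = S if s is None else s + gamma * (S - s)
    mu_saem = np.array([s[0] / s[1], s[2] / s[3]])
print("SEM :", mu_sem)                # keeps jittering around (-2, 2)
print("SAEM:", mu_saem)               # settles close to (-2, 2)
```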
Joint road network extraction from a set of high resolution satellite images. Besbes, O.; and Benazza-Benyahia, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2190-2194, Sep. 2014.
@InProceedings{6952798,\n  author = {O. Besbes and A. Benazza-Benyahia},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Joint road network extraction from a set of high resolution satellite images},\n  year = {2014},\n  pages = {2190-2194},\n  abstract = {In this paper, we develop a novel Conditional Random Field (CRF) formulation to jointly extract road networks from a set of high resolution satellite images. Our fully unsupervised method relies on a pairwise CRF model defined over a set of test images, which encodes prior assumptions about the roads, such as thinness and elongation. Four competitive energy terms related to color, shape, symmetry and contrast-sensitive potentials are suitably defined to tackle the challenging problem of road network extraction. The resulting objective energy is minimized by resorting to graph-cuts tools. Promising results are obtained for developed suburban scenes in remotely sensed images. The proposed model significantly improves the segmentation quality compared with the independent CRF and two state-of-the-art methods.},\n  keywords = {geophysical image processing;graph theory;image resolution;road traffic;joint road network extraction;high resolution satellite images;novel conditional random field;CRF formulation;road network extraction;satellite image resolution;unsupervised method;objective energy;graph cuts tools;Roads;Image segmentation;Joints;Satellites;Image color analysis;Shape;Feature extraction;Road network;joint segmentation;CRF},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925469.pdf},\n}\n\n
In this paper, we develop a novel Conditional Random Field (CRF) formulation to jointly extract road networks from a set of high resolution satellite images. Our fully unsupervised method relies on a pairwise CRF model defined over a set of test images, which encodes prior assumptions about the roads, such as thinness and elongation. Four competitive energy terms related to color, shape, symmetry and contrast-sensitive potentials are suitably defined to tackle the challenging problem of road network extraction. The resulting objective energy is minimized by resorting to graph-cuts tools. Promising results are obtained for developed suburban scenes in remotely sensed images. The proposed model significantly improves the segmentation quality compared with the independent CRF and two state-of-the-art methods.
Automatic design of aperture filters using neural networks applied to ocular image segmentation. Benalcázar, M. E.; Brun, M.; and Ballarin, V. L. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2195-2199, Sep. 2014.
@InProceedings{6952799,\n  author = {M. E. Benalcázar and M. Brun and V. L. Ballarin},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic design of aperture filters using neural networks applied to ocular image segmentation},\n  year = {2014},\n  pages = {2195-2199},\n  abstract = {Aperture filters are image operators which combine mathematical morphology and pattern recognition theory to design windowed classifiers. Previous works propose designing and representing such operators using large decision tables and classic linear pattern classifiers. These approaches demand an enormous computational cost in order to solve real image problems. The current work presents a new method to automatically design Aperture filters for color and grayscale image processing. This approach consists of designing a family of Aperture filters using artificial feed-forward neural networks. The resulting Aperture filters are combined into a single one using an ensemble method. The performance of the proposed approach was evaluated by segmenting blood vessels in ocular images of the DRIVE database. The results show the suitability of this approach: It outperforms window operators designed using neural networks and logistic regression as well as Aperture filters designed using logistic regression and support vector machines.},\n  keywords = {feedforward neural nets;filtering theory;image classification;image colour analysis;image segmentation;mathematical morphology;aperture filter automatic design;ocular image segmentation;mathematical morphology;support vector machines;logistic regression;DRIVE database;blood vessel segmentation;ensemble method;artificial feedforward neural networks;color image processing;grayscale image processing;linear pattern classifiers;large decision tables;windowed classifier design;pattern recognition theory;Apertures;Artificial neural networks;Image segmentation;Training;Gray-scale;Biomedical imaging;Blood vessels;Image processing;pattern recognition;mathematical morphology;neural networks;Aperture filters;ensemble of classifiers},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926325.pdf},\n}\n\n
\n Aperture filters are image operators which combine mathematical morphology and pattern recognition theory to design windowed classifiers. Previous works propose designing and representing such operators using large decision tables and classic linear pattern classifiers. These approaches demand an enormous computational cost in order to solve real image problems. The current work presents a new method to automatically design Aperture filters for color and grayscale image processing. This approach consists of designing a family of Aperture filters using artificial feed-forward neural networks. The resulting Aperture filters are combined into a single one using an ensemble method. The performance of the proposed approach was evaluated by segmenting blood vessels in ocular images of the DRIVE database. The results show the suitability of this approach: It outperforms window operators designed using neural networks and logistic regression as well as Aperture filters designed using logistic regression and support vector machines.\n
Accelerated A-contrario detection of smooth trajectories. Abergel, R.; and Moisan, L. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2200-2204, Sep. 2014.
@InProceedings{6952800,\n  author = {R. Abergel and L. Moisan},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Accelerated A-contrario detection of smooth trajectories},\n  year = {2014},\n  pages = {2200-2204},\n  abstract = {The detection of smooth trajectories in a (noisy) point set sequence can be realized optimally with the ASTRE (A-contrario Smooth TRajectory Extraction) algorithm, but the quadratic time and memory complexity of this algorithm with respect to the number of frames is prohibitive for many practical applications. We here propose a variant that cuts the input sequence into overlapping temporal chunks that are processed in a sequential (but non-independent) way, which results in a linear complexity with respect to the number of frames. Surprisingly, the performances are not affected by this acceleration strategy, and are in general even slightly above those of the original ASTRE algorithm.},\n  keywords = {computational complexity;feature extraction;image motion analysis;image sequences;object detection;smoothing methods;accelerated A-contrario detection;smooth trajectory detection;noisy point set sequence;A-contrario smooth trajectory extraction;memory complexity;quadratic time complexity;overlapping temporal chunks;linear complexity;ASTRE algorithm;input sequence;image processing;image sequences;motion detection;Trajectory;Complexity theory;Acceleration;Bismuth;Noise;Image sequences;Algorithm design and analysis;trajectory analysis;point tracking;motion detection;a-contrario model},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926799.pdf},\n}\n\n
\n The detection of smooth trajectories in a (noisy) point set sequence can be realized optimally with the ASTRE (A-contrario Smooth TRajectory Extraction) algorithm, but the quadratic time and memory complexity of this algorithm with respect to the number of frames is prohibitive for many practical applications. We here propose a variant that cuts the input sequence into overlapping temporal chunks that are processed in a sequential (but non-independent) way, which results in a linear complexity with respect to the number of frames. Surprisingly, the performances are not affected by this acceleration strategy, and are in general even slightly above those of the original ASTRE algorithm.\n
Human action recognition in 3D motion sequences. Kelgeorgiadis, K.; and Nikolaidis, N. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2205-2209, Sep. 2014.
@InProceedings{6952801,\n  author = {K. Kelgeorgiadis and N. Nikolaidis},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Human action recognition in 3D motion sequences},\n  year = {2014},\n  pages = {2205-2209},\n  abstract = {In this paper we propose a method for learning and recognizing human actions on dynamic binary volumetric (voxel-based) or 3D mesh movement data. The orientation of the human body in each 3D posture is estimated by detecting its feet and this information is used to orient all postures in a consistent manner. K-means is applied on the 3D postures space of the training data to discover characteristic movement patterns namely 3D dynemes. Subsequently, fuzzy vector quantization (FVQ) is utilized to represent each 3D posture in the 3D dynemes space and then information from all time instances is combined to represent the entire action sequence. Linear discriminant analysis (LDA) is then applied. The actual classification step utilizes support vector machines (SVM). Results on a 3D action database verified that the method can achieve good performance.},\n  keywords = {fuzzy set theory;image classification;image motion analysis;learning (artificial intelligence);pose estimation;support vector machines;vector quantisation;human action recognition;dynamic binary volumetric data;3D mesh movement data;voxel-based mesh movement data;3D postures space;training data;characteristic movement pattern discovery;fuzzy vector quantization;FVQ;linear discriminant analysis;LDA;support vector machines;SVM;3D action database;Three-dimensional displays;Vectors;Training;Databases;Foot;Estimation;Support vector machines;human activity recognition;3D data},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569910159.pdf},\n}\n\n
\n In this paper we propose a method for learning and recognizing human actions on dynamic binary volumetric (voxel-based) or 3D mesh movement data. The orientation of the human body in each 3D posture is estimated by detecting its feet and this information is used to orient all postures in a consistent manner. K-means is applied on the 3D postures space of the training data to discover characteristic movement patterns namely 3D dynemes. Subsequently, fuzzy vector quantization (FVQ) is utilized to represent each 3D posture in the 3D dynemes space and then information from all time instances is combined to represent the entire action sequence. Linear discriminant analysis (LDA) is then applied. The actual classification step utilizes support vector machines (SVM). Results on a 3D action database verified that the method can achieve good performance.\n
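A minimal numpy sketch of the fuzzy vector quantization step, in the usual fuzzy-c-means form with fuzzifier m = 2; the codebook of "dynemes", the feature dimension, and the time-pooling by averaging are assumptions consistent with the description above, not the paper's exact formulation:

```python
import numpy as np

def fvq(x, codebook, m=2.0, eps=1e-12):
    """Soft membership of a posture vector x to each codeword ('dyneme'),
    fuzzy-c-means style; the memberships are positive and sum to 1."""
    d2 = np.sum((codebook - x) ** 2, axis=1) + eps
    w = d2 ** (-1.0 / (m - 1.0))
    return w / w.sum()

rng = np.random.default_rng(0)
dynemes = rng.standard_normal((8, 30))       # e.g. k-means centers (assumed)
frames = rng.standard_normal((100, 30))      # posture features over time
action = np.mean([fvq(f, dynemes) for f in frames], axis=0)
print(action.round(3), action.sum())         # sequence descriptor, sums to 1
```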
Shot-based object retrieval from video with compressed Fisher Vectors. Bertinetto, L.; Fiandrotti, A.; and Magli, E. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2210-2214, Sep. 2014.
@InProceedings{6952802,\n  author = {L. Bertinetto and A. Fiandrotti and E. Magli},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Shot-based object retrieval from video with compressed Fisher Vectors},\n  year = {2014},\n  pages = {2210-2214},\n  abstract = {This paper addresses the problem of retrieving those shots from a database of video sequences that match a query image. Existing architectures match the images using a high-level representation of local features extracted from the video database, and are mainly based on the Bag of Words model. However, such architectures lack the capability to scale up to very large databases. Recently, Fisher Vectors showed promising results in large scale image retrieval problems, but it is still not clear how they can be best exploited in video-related applications. In our work, we use compressed Fisher Vectors to represent the video shots and we show that inherent correlation between video frames can be effectively exploited. Experiments show that our proposed system achieves better performance while having lower computational requirements than similar architectures.},\n  keywords = {feature extraction;image sequences;statistical analysis;video retrieval;shot-based object retrieval;compressed Fisher vectors;video sequences;high-level representation;local feature extraction;bag of words model;large scale image retrieval problems;video shots;Computer architecture;Vectors;Databases;Visualization;Principal component analysis;Histograms;Vocabulary;video retrieval;video search;object retrieval;object search;SIFT descriptors},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925515.pdf},\n}\n\n
This paper addresses the problem of retrieving those shots from a database of video sequences that match a query image. Existing architectures match the images using a high-level representation of local features extracted from the video database, and are mainly based on the Bag of Words model. However, such architectures lack the capability to scale up to very large databases. Recently, Fisher Vectors showed promising results in large scale image retrieval problems, but it is still not clear how they can be best exploited in video-related applications. In our work, we use compressed Fisher Vectors to represent the video shots and we show that inherent correlation between video frames can be effectively exploited. Experiments show that our proposed system achieves better performance while having lower computational requirements than similar architectures.
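A compact numpy sketch of the mean-gradient part of the Fisher vector encoding referred to above, with the usual signed square-root and L2 normalizations; the GMM parameters below are synthetic stand-ins, and the compression stage of the paper is omitted:

```python
import numpy as np

def fisher_vector_means(X, w, mu, sigma):
    """Mean-gradient Fisher vector of descriptors X (N x D) under a
    diagonal-covariance GMM (w: K weights, mu/sigma: K x D), following
    the standard improved-FV formulation."""
    N = X.shape[0]
    logp = -0.5 * ((((X[:, None, :] - mu) / sigma) ** 2)
                   + 2 * np.log(sigma)).sum(-1) + np.log(w)
    g = np.exp(logp - logp.max(axis=1, keepdims=True))
    g /= g.sum(axis=1, keepdims=True)                 # posteriors (N x K)
    fv = (g[:, :, None] * (X[:, None, :] - mu) / sigma).sum(0)
    fv = (fv / (N * np.sqrt(w)[:, None])).ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)          # L2 normalization

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 16))                    # local descriptors
w = np.full(4, 0.25)
mu, sigma = rng.standard_normal((4, 16)), np.ones((4, 16))
print(fisher_vector_means(X, w, mu, sigma).shape)     # (64,) = K * D
```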
A homography-based CDVS pipeline for image matching with improved resilience to viewpoint changes. Zhao, B.; and Magli, E. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2215-2219, Sep. 2014.
@InProceedings{6952803,\n  author = {B. Zhao and E. Magli},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A homography-based CDVS pipeline for image matching with improved resilience to viewpoint changes},\n  year = {2014},\n  pages = {2215-2219},\n  abstract = {Compact Descriptors for Visual Search (CDVS) is a proposed MPEG standard that will enable efficient and interoperable design of visual search applications using local descriptors. Such descriptors are invariant to rotation and scaling, but are not very robust towards viewpoint changes. In this paper, we address this problem and propose a modified version of the CDVS pipeline that employs image back-projection to compensate for perspective distortion. The proposed technique is based on the homography derived from the correspondence extracted from pairs of matching keypoints. Extensive results show that it improves the CDVS matching accuracy under viewpoint changes while having low complexity.},\n  keywords = {image matching;image representation;image retrieval;search problems;visual databases;image matching;compact descriptors for visual search;homography-based CDVS pipeline;MPEG;visual search applications;image back-projection;CDVS matching accuracy;Pipelines;Image matching;Transform coding;Visualization;Standards;Robustness;Image resolution;CDVS;Content based image retrieval;Homography;SIFT descriptors},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Compact Descriptors for Visual Search (CDVS) is an MPEG proposed standard that will enable efficient and interoperable design of visual search applications using local descriptors. Such descriptors are invariant to rotation and scaling, but are not very robust towards viewpoint changes. In this paper, we address this problem and propose a modified version of the CDVS pipeline that employs image back-projection to compensate for perspective distortion. The proposed technique is based on the homography derived from the correspondence extracted from pairs of matching keypoints. Extensive results show that it improves the CDVS matching accuracy under viewpoint changes while having low complexity.\n
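The back-projection idea can be sketched with OpenCV: estimate a homography from (here synthetic) keypoint correspondences with RANSAC, then warp one view to undo the perspective distortion before matching. This is a generic sketch, not the standardised CDVS pipeline.

# Homography estimation + perspective compensation sketch (OpenCV).
# Synthetic correspondences stand in for matched SIFT keypoints.
import numpy as np
import cv2

H_true = np.array([[1.0, 0.2, 5.0],
                   [0.1, 0.9, -3.0],
                   [1e-4, 2e-4, 1.0]])
pts1 = np.random.rand(50, 1, 2).astype(np.float32) * 200
pts2 = cv2.perspectiveTransform(pts1, H_true)

# RANSAC-robust homography from keypoint correspondences.
H_est, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
print("estimated homography:\n", H_est)

# Back-project (warp) the second view onto the first view's frame
# before descriptor matching, compensating the perspective distortion.
img2 = (np.random.rand(200, 200) * 255).astype(np.uint8)
img2_rectified = cv2.warpPerspective(img2, np.linalg.inv(H_est), (200, 200))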
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Total variation super-resolution for 3D trabecular bone micro-structure segmentation.\n \n \n \n \n\n\n \n Toma, A.; Denis, L.; Sixou, B.; Pialat, J.; and Peyrin, F.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2220-2224, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"TotalPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952804,\n  author = {A. Toma and L. Denis and B. Sixou and J. Pialat and F. Peyrin},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Total variation super-resolution for 3D trabecular bone micro-structure segmentation},\n  year = {2014},\n  pages = {2220-2224},\n  abstract = {The analysis of the trabecular bone micro-structure plays an important role in studying bone fragility diseases such as osteoporosis. In this context, X-ray CT techniques are increasingly used to image bone micro-architecture. The aim of this paper is to improve the segmentation of the bone micro-structure for further bone quantification. We propose a joint super-resolution/segmentation method based on total variation with a convex constraint. The minimization is performed with the Alternating Direction Method of Multipliers (ADMM). The new method is compared with the bicubic interpolation method and the classical total variation regularization. All methods were tested on blurred, noisy and down-sampled 3D synchrotron micro-CT bone volumes. Improved segmentation is obtained with the proposed joint super-resolution/segmentation method.},\n  keywords = {bone;image segmentation;interpolation;medical image processing;total variation regularization;bicubic interpolation method;alternating direction method of multipliers;convex constraint;joint superresolution segmentation method;bone microarchitecture;X-ray CT techniques;osteoporosis;3D trabecular bone microstructure segmentation;total variation superresolution;Bones;Image segmentation;TV;Spatial resolution;Three-dimensional displays;Image restoration;segmentation;super-resolution;3D trabecular micro-structure;TV regularization;CT images},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569927039.pdf},\n}\n\n
\n
\n\n\n
\n The analysis of the trabecular bone micro-structure plays an important role in studying bone fragility diseases such as osteoporosis. In this context, X-ray CT techniques are increasingly used to image bone micro-architecture. The aim of this paper is to improve the segmentation of the bone micro-structure for further bone quantification. We propose a joint super-resolution/segmentation method based on total variation with a convex constraint. The minimization is performed with the Alternating Direction Method of Multipliers (ADMM). The new method is compared with the bicubic interpolation method and the classical total variation regularization. All methods were tested on blurred, noisy and down-sampled 3D synchrotron micro-CT bone volumes. Improved segmentation is obtained with the proposed joint super-resolution/segmentation method.\n
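The paper solves a joint super-resolution/segmentation problem with ADMM; as a much simpler stand-in, the sketch below chains the two steps, TV denoising followed by Otsu thresholding, with scikit-image.

# Two-step stand-in for TV-regularised segmentation: TV denoising followed by
# Otsu thresholding (the paper solves the joint problem with ADMM instead).
import numpy as np
from skimage import data, util
from skimage.restoration import denoise_tv_chambolle
from skimage.filters import threshold_otsu

img = util.img_as_float(data.camera())
noisy = img + 0.15 * np.random.standard_normal(img.shape)

tv = denoise_tv_chambolle(noisy, weight=0.1)   # TV prior suppresses noise
mask = tv > threshold_otsu(tv)                 # binary micro-structure mask
print("foreground fraction:", mask.mean())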
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Oversampled graph laplacian matrix for graph signals.\n \n \n \n \n\n\n \n Sakiyama, A.; and Tanaka, Y.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2225-2229, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OversampledPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952805,\n  author = {A. Sakiyama and Y. Tanaka},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Oversampled graph laplacian matrix for graph signals},\n  year = {2014},\n  pages = {2225-2229},\n  abstract = {In this paper, we propose oversampling of graph signals by using oversampled graph Laplacian matrix. The conventional critically sampled graph filter banks have to decompose an original graph into bipartite subgraphs, and the transform has to be performed on each subgraph due to the spectral folding phenomenon caused by downsampling of graph signals. Therefore, they cannot always utilize all edges of the original graph for the one-stage transformation. Our proposed method is based on oversampling of the underlying graph itself, and it can append nodes and edges to the graph somewhat arbitrarily. We use this approach to make one oversampled bipartite graph that includes all edges of the original non-bipartite graph. We apply the oversampled graph with the critically sampled filter bank for decomposing graph signals, and show the performance of graph signal denoising.},\n  keywords = {channel bank filters;graph theory;Laplace transforms;matrix decomposition;signal sampling;oversampled graph Laplacian matrix;graph signal oversampling;critically sampled graph filter banks;spectral folding phenomenon;graph signal downsampling;one-stage transformation;oversampled bipartite graph;graph signal decomposition;graph signal denoising;Bipartite graph;Laplace equations;Wavelet transforms;Matrix decomposition;Signal denoising;Graph signal processing;graph oversampling;multiresolution;spectral graph theory;graph wavelets},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569909425.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose oversampling of graph signals by using oversampled graph Laplacian matrix. The conventional critically sampled graph filter banks have to decompose an original graph into bipartite subgraphs, and the transform has to be performed on each subgraph due to the spectral folding phenomenon caused by downsampling of graph signals. Therefore, they cannot always utilize all edges of the original graph for the one-stage transformation. Our proposed method is based on oversampling of the underlying graph itself, and it can append nodes and edges to the graph somewhat arbitrarily. We use this approach to make one oversampled bipartite graph that includes all edges of the original non-bipartite graph. We apply the oversampled graph with the critically sampled filter bank for decomposing graph signals, and show the performance of graph signal denoising.\n
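A classical construction closely related to this idea is the bipartite double cover, which yields a bipartite graph retaining every edge of a non-bipartite original; the sketch below builds it for a triangle and checks the spectral symmetry that signals bipartiteness. The paper's graph oversampling is more general than this construction.

# Bipartite double cover: a bipartite graph retaining every edge of the
# original non-bipartite graph, obtained by duplicating each node.
import numpy as np

A = np.array([[0, 1, 1],          # triangle: the smallest non-bipartite graph
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

Z = np.zeros_like(A)
A_cover = np.block([[Z, A],       # node i splits into copies (i,0) and (i,1);
                    [A, Z]])      # every original edge joins opposite copies

deg = A_cover.sum(axis=1)
L_norm = np.eye(6) - A_cover / np.sqrt(np.outer(deg, deg))  # normalized Laplacian
# Bipartiteness shows up as a spectrum symmetric about 1:
print(np.round(np.sort(np.linalg.eigvalsh(L_norm)), 3))     # 0, .5, .5, 1.5, 1.5, 2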
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Semi-deterministic ternary matrix for compressed sensing.\n \n \n \n \n\n\n \n Lu, W.; Kpalma, K.; and Ronsin, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2230-2234, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Semi-deterministicPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952806,\n  author = {W. Lu and K. Kpalma and J. Ronsin},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Semi-deterministic ternary matrix for compressed sensing},\n  year = {2014},\n  pages = {2230-2234},\n  abstract = {For the random {0,±1} ternary matrix, it is interesting to determine the number of nonzero elements required for good compressed sensing performance. By seeking the best RIP, this paper proposes a semi-deterministic ternary matrix, which is of deterministic nonzero positions but random signs. In practice, it presents better performance than common random ternary matrices and Gaussian random matrices.},\n  keywords = {compressed sensing;matrix algebra;nonzero elements;compressed sensing;semideterministic ternary matrix;Sparse matrices;Sensors;Compressed sensing;Eigenvalues and eigenfunctions;Symmetric matrices;Vectors;Indexes;random matrix;ternary matrix;compressed sensing;RIP;deterministic;semi-deterministic},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569909677.pdf},\n}\n\n
\n
\n\n\n
\n For the random {0,±1} ternary matrix, it is interesting to determine the number of nonzero elements required for good compressed sensing performance. By seeking the best RIP, this paper proposes a semi-deterministic ternary matrix, which is of deterministic nonzero positions but random signs. In practice, it presents better performance than common random ternary matrices and Gaussian random matrices.\n
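A minimal sketch of the construction: fix the nonzero positions deterministically (here an arbitrary, spread-out choice of supports, not the paper's RIP-optimised pattern) and draw only the signs at random, then compare the mutual coherence, a cheap RIP proxy, against fully random ternary and Gaussian matrices.

# Semi-deterministic ternary matrix: deterministic supports, random signs.
import math
import numpy as np
from itertools import combinations, islice

rng = np.random.default_rng(1)
m, n, d = 32, 128, 4                         # rows, columns, nonzeros per column

def coherence(A):
    A = A / np.linalg.norm(A, axis=0)        # unit-norm columns
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0)
    return G.max()                           # worst column cross-correlation

step = math.comb(m, d) // n                  # spread the chosen d-subsets out
supports = islice(combinations(range(m), d), 0, None, step)
semi = np.zeros((m, n))
for j, rows in zip(range(n), supports):
    semi[list(rows), j] = rng.choice([-1.0, 1.0], size=d)

rand_tern = np.zeros((m, n))                 # random supports, random signs
for j in range(n):
    rand_tern[rng.choice(m, size=d, replace=False), j] = rng.choice([-1.0, 1.0], size=d)

gauss = rng.standard_normal((m, n))
for name, A in [("semi-deterministic", semi), ("random ternary", rand_tern), ("Gaussian", gauss)]:
    print(f"{name:>18s}: coherence = {coherence(A):.3f}")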
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Search for Costas arrays via sparse representation.\n \n \n \n \n\n\n \n Soltanalian, M.; Stoica, P.; and Li, J.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2235-2239, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"SearchPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952807,\n  author = {M. Soltanalian and P. Stoica and J. Li},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Search for Costas arrays via sparse representation},\n  year = {2014},\n  pages = {2235-2239},\n  abstract = {Costas arrays are mainly known as a certain type of optimized time-frequency coding pattern for sonar and radar. In order to fulfill the need for effective computational approaches to find Costas arrays, in this paper, we propose a sparse formulation of the Costas array search problem. The new sparse representation can pave the way for using an extensive number of methods offered by the sparse signal recovery literature. It is further shown that Costas arrays can be obtained using an equivalent quadratic program with linear constraints. A numerical approach is devised and used to illustrate the performance of the proposed formulations.},\n  keywords = {quadratic programming;radar;signal representation;sonar;time-frequency analysis;optimized time-frequency coding pattern;sonar;radar;Costas array search problem;sparse representation;sparse signal recovery literature;quadratic program;linear constraints;Vectors;Search problems;Arrays;Radar;Array signal processing;Linear systems;Sonar;Code design;Costas arrays;frequency hopping;radar codes;sparsity},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569912093.pdf},\n}\n\n
\n
\n\n\n
\n Costas arrays are mainly known as a certain type of optimized time-frequency coding pattern for sonar and radar. In order to fulfill the need for effective computational approaches to find Costas arrays, in this paper, we propose a sparse formulation of the Costas array search problem. The new sparse representation can pave the way for using an extensive number of methods offered by the sparse signal recovery literature. It is further shown that Costas arrays can be obtained using an equivalent quadratic program with linear constraints. A numerical approach is devised and used to illustrate the performance of the proposed formulations.\n
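For reference, the defining combinatorial property is easy to test: a permutation is a Costas array exactly when all pairwise displacement vectors between its dots are distinct.

# Costas property check: a permutation f is a Costas array iff all
# displacement vectors (j - i, f[j] - f[i]), i < j, are distinct.
from itertools import combinations

def is_costas(perm):
    vecs = {(j - i, perm[j] - perm[i]) for i, j in combinations(range(len(perm)), 2)}
    return len(vecs) == len(perm) * (len(perm) - 1) // 2

print(is_costas([2, 1, 3, 0]))   # True: a 4x4 Costas array
print(is_costas([0, 1, 2, 3]))   # False: the identity repeats differences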
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Recursive blind equalization with an optimal bounding ellipsoid algorithm.\n \n \n \n \n\n\n \n Pouliquen, M.; Frikel, M.; and Denoual, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2240-2244, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"RecursivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952808,\n  author = {M. Pouliquen and M. Frikel and M. Denoual},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Recursive blind equalization with an optimal bounding ellipsoid algorithm},\n  year = {2014},\n  pages = {2240-2244},\n  abstract = {In this paper, we present an algorithm for blind equalization i.e. equalization without training sequence. The proposed algorithm is based on the reformulation of the equalization problem in a set membership identification problem. Among the Set Membership Identification methods, the chosen algorithm is an optimal bounding ellipsoid type algorithm. This algorithm has a low computational burden which allows to use it easily in real time. Note that in this paper the equalizer is a finite impulse response filter. An analysis of the algorithm is provided. In order to show the good performance of the proposed approach some simulations are performed.},\n  keywords = {blind equalisers;FIR filters;recursive blind equalization;optimal bounding ellipsoid algorithm;set membership identification problem;finite impulse response filter;Abstracts;Finite impulse response filters;Robustness;Blind Equalization;FIR equalizer},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917435.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present an algorithm for blind equalization, i.e., equalization without a training sequence. The proposed algorithm is based on the reformulation of the equalization problem as a set membership identification problem. Among the Set Membership Identification methods, the chosen algorithm is an optimal bounding ellipsoid type algorithm. This algorithm has a low computational burden, which allows it to be used easily in real time. Note that in this paper the equalizer is a finite impulse response filter. An analysis of the algorithm is provided. In order to show the good performance of the proposed approach, some simulations are performed.\n
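The update below is a set-membership NLMS recursion, a close relative of optimal bounding ellipsoid algorithms: the FIR equalizer is only updated when the a priori error leaves a bound gamma. It is shown with a reference signal for clarity, whereas the paper's algorithm is blind; channel, delay and bound are invented for the demo.

# Set-membership NLMS sketch: update only when |error| exceeds the bound gamma
# (cf. optimal bounding ellipsoid ideas). Supervised here for clarity.
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.4, -0.2])            # toy channel
s = rng.choice([-1.0, 1.0], size=5000)    # BPSK symbols
x = np.convolve(s, h)[:5000] + 0.01 * rng.standard_normal(5000)

L, gamma = 8, 0.05                        # FIR equalizer length, error bound
w = np.zeros(L)
updates = 0
for n in range(L, 5000):
    u = x[n - L:n][::-1]                  # regressor (most recent sample first)
    e = s[n - 2] - w @ u                  # a priori error vs delayed symbol
    if abs(e) > gamma:                    # selective update outside the bound
        w += (1 - gamma / abs(e)) * e * u / (u @ u + 1e-9)
        updates += 1
print(f"updated on {updates / (5000 - L):.1%} of samples")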
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On almost sure identifiability of non multilinear tensor decomposition.\n \n \n \n \n\n\n \n Cohen, J.; and Comon, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2245-2249, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952809,\n  author = {J. Cohen and P. Comon},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On almost sure identifiability of non multilinear tensor decomposition},\n  year = {2014},\n  pages = {2245-2249},\n  abstract = {Uniqueness of tensor decompositions is of crucial importance in numerous engineering applications. Extensive work in algebraic geometry has given various bounds involving tensor rank and dimensions to ensure generic identifiability. However, most of this work is hardly accessible to non-specialists, and does not apply to non-multilinear models. In this paper, we present another approach, using the Jacobian of the model. The latter sheds a new light on bounds and exceptions previously obtained. Finally, the method proposed is applied to a non-multilinear decomposition used in fluorescence spectrometry, which permits to state generic local identifiability.},\n  keywords = {chemistry computing;computational geometry;Jacobian matrices;tensors;generic local identifiability;fluorescence spectrometry;Jacobian model;tensor rank;algebraic geometry;nonmultilinear tensor decomposition;Jacobian matrices;Tensile stress;Mathematical model;Vectors;Matrix decomposition;Approximation methods;Equations},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923331.pdf},\n}\n\n
\n
\n\n\n
\n Uniqueness of tensor decompositions is of crucial importance in numerous engineering applications. Extensive work in algebraic geometry has given various bounds involving tensor rank and dimensions to ensure generic identifiability. However, most of this work is hardly accessible to non-specialists, and does not apply to non-multilinear models. In this paper, we present another approach, using the Jacobian of the model. The latter sheds a new light on bounds and exceptions previously obtained. Finally, the method proposed is applied to a non-multilinear decomposition used in fluorescence spectrometry, which permits to state generic local identifiability.\n
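The Jacobian-based test can be mimicked numerically: for a CP (multilinear) model, the rank of a finite-difference Jacobian at a generic point should equal the parameter count minus the scaling indeterminacies. A toy check under that assumption, not the paper's derivation:

# Numerical local-identifiability check for a rank-R CP model: the Jacobian
# rank should equal R(I+J+K) - 2R (2R scaling indeterminacies per component).
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 4, 4, 3, 2

def cp_model(theta):
    A = theta[:I * R].reshape(I, R)
    B = theta[I * R:I * R + J * R].reshape(J, R)
    C = theta[I * R + J * R:].reshape(K, R)
    return np.einsum('ir,jr,kr->ijk', A, B, C).ravel()

theta0 = rng.standard_normal(R * (I + J + K))
eps = 1e-6
Jac = np.empty((I * J * K, theta0.size))
for p in range(theta0.size):              # forward differences, column by column
    d = np.zeros_like(theta0); d[p] = eps
    Jac[:, p] = (cp_model(theta0 + d) - cp_model(theta0)) / eps

rank = np.linalg.matrix_rank(Jac, tol=1e-4)
print(rank, "expected:", R * (I + J + K) - 2 * R)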
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Greedy Orthogonal Matching Pursuit for sparse target detection and counting in WSN.\n \n \n \n \n\n\n \n Jellali, Z.; Atallah, L. N.; and Cherif, S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2250-2254, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"GreedyPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952810,\n  author = {Z. Jellali and L. N. Atallah and S. Cherif},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Greedy Orthogonal Matching Pursuit for sparse target detection and counting in WSN},\n  year = {2014},\n  pages = {2250-2254},\n  abstract = {The recently emerged Compressed Sensing (CS) theory has widely addressed the problem of sparse targets detection in Wireless Sensor Networks (WSN) in the aim of reducing the deployment cost and energy consumption. In this paper, we apply CS approach for both sparse events recovery and counting. We first propose a novel Greedy version of the Orthogonal Matching Pursuit (GOMP) algorithm allowing to account for the decomposition matrix non orthogonality. Then, in order to reduce the GOMP computational load, we propose a two-stages version of GOMP, the 2S-GOMP, which separates the events detection and counting steps. Simulation results show that the proposed algorithms achieve a better tradeoff between performance and computational load when compared to the recently proposed GMP algorithm and its two stages version denoted 2S-GMP.},\n  keywords = {compressed sensing;greedy algorithms;iterative methods;matrix decomposition;signal detection;wireless sensor networks;greedy orthogonal matching pursuit algorithm;sparse target detection;WSN;compressed sensing theory;CS theory;wireless sensor networks;deployment cost;energy consumption;CS approach;sparse event recovery;GOMP algorithm;decomposition matrix nonorthogonality;GOMP computational load reduction;2S-GOMP;event detection;Matching pursuit algorithms;Signal to noise ratio;Compressed sensing;Complexity theory;Vectors;Event detection;Wireless sensor network;rare events detection;Compressed Sensing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924665.pdf},\n}\n\n
\n
\n\n\n
\n The recently emerged Compressed Sensing (CS) theory has widely addressed the problem of sparse target detection in Wireless Sensor Networks (WSN), with the aim of reducing the deployment cost and energy consumption. In this paper, we apply the CS approach to both sparse event recovery and counting. We first propose a novel Greedy version of the Orthogonal Matching Pursuit (GOMP) algorithm that accounts for the non-orthogonality of the decomposition matrix. Then, in order to reduce the GOMP computational load, we propose a two-stage version of GOMP, the 2S-GOMP, which separates the event detection and counting steps. Simulation results show that the proposed algorithms achieve a better tradeoff between performance and computational load when compared to the recently proposed GMP algorithm and its two-stage version, denoted 2S-GMP.\n
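For orientation, here is the classical OMP baseline that GOMP modifies; the sensing matrix and the sparse vector are synthetic.

# Classical Orthogonal Matching Pursuit (OMP) with NumPy; the paper's GOMP
# adapts the greedy selection to non-orthogonal decomposition matrices.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A x (A: m x n, unit-norm columns)."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # re-fit on the support
    x = np.zeros(A.shape[1]); x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 120)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(120); x_true[[5, 40, 99]] = [1.5, -2.0, 1.0]
x_hat = omp(A, A @ x_true, k=3)
print("support recovered:", sorted(np.flatnonzero(x_hat)))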
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparse vector sensor array design based on quaternionic formulations.\n \n \n \n \n\n\n \n Hawes, M. B.; and Liu, W.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2255-2259, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"SparsePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952811,\n  author = {M. B. Hawes and W. Liu},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse vector sensor array design based on quaternionic formulations},\n  year = {2014},\n  pages = {2255-2259},\n  abstract = {In sparse arrays, the randomness of sensor locations avoids the introduction of grating lobes, while allowing adjacent sensor spacings to be greater than half a wavelength, leading to a larger array size with a relatively small number of sensors. In this paper, for the first time, the design of both robust and non-robust sparse vector sensor arrays is studied, and the proposed method is based on quaternionic formulations. It is a further extension of the recently proposed compressive sensing (CS) based design for traditional sparse arrays and the vector sensors being considered are crossed-dipoles. Design examples are presented to validate the effectiveness of the proposed method.},\n  keywords = {array signal processing;compressed sensing;sensor arrays;robust sparse vector sensor array design;quaternionic formulations;sensor location randomness;grating lobes;adjacent sensor spacings;nonrobust sparse vector sensor arrays;compressive sensing;CS based design;crossed-dipoles;Arrays;Vectors;Robustness;Quaternions;Compressed sensing;Signal processing;Antenna arrays;Sparse array;quaternion beamformer;vector sensor array;compressive sensing;steering vector error},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924771.pdf},\n}\n\n
\n
\n\n\n
\n In sparse arrays, the randomness of sensor locations avoids the introduction of grating lobes, while allowing adjacent sensor spacings to be greater than half a wavelength, leading to a larger array size with a relatively small number of sensors. In this paper, for the first time, the design of both robust and non-robust sparse vector sensor arrays is studied, and the proposed method is based on quaternionic formulations. It is a further extension of the recently proposed compressive sensing (CS) based design for traditional sparse arrays and the vector sensors being considered are crossed-dipoles. Design examples are presented to validate the effectiveness of the proposed method.\n
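Dropping the quaternionic (crossed-dipole) part, the CS flavour of the design can be sketched for scalar sensors: pick a few active positions from a dense candidate grid by l1-penalised fitting of a desired beampattern, here with a hand-rolled ISTA loop. Grid, pattern and regularisation weight are arbitrary choices for the demo.

# CS-flavoured sparse array design sketch (scalar sensors, not the paper's
# quaternion formulation): l1-minimise the beampattern mismatch with ISTA.
import numpy as np

rng = np.random.default_rng(0)
grid = np.arange(0, 20, 0.25)                  # candidate positions (half-wavelengths)
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
A = np.exp(1j * np.pi * np.outer(np.sin(theta), grid))   # steering matrix
p = np.exp(-(np.degrees(theta) / 5.0) ** 2)    # desired pattern: broadside beam

w = np.zeros(grid.size, dtype=complex)
step = 1.0 / np.linalg.norm(A, 2) ** 2         # safe ISTA step size
lam = 2.0
for _ in range(2000):                          # gradient step + soft threshold
    g = w - step * (A.conj().T @ (A @ w - p))
    w = np.maximum(np.abs(g) - step * lam, 0) * np.exp(1j * np.angle(g))

active = np.abs(w) > 1e-3
print(f"{active.sum()} active sensors out of {grid.size} candidates")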
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Minimal solutions for dual microphone rig self-calibration.\n \n \n \n \n\n\n \n Zhayida, S.; Burgess, S.; Kuang, Y.; and Åström, K.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2260-2264, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"MinimalPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952812,\n  author = {S. Zhayida and S. Burgess and Y. Kuang and K. Åström},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Minimal solutions for dual microphone rig self-calibration},\n  year = {2014},\n  pages = {2260-2264},\n  abstract = {In this paper, we study minimal problems related to dual microphone rig self-calibration using TOA measurements from sound sources with unknown positions. We consider the problems with varying setups as (i) if the internal distances between the microphone nodes are known a priori or not. (ii) if the microphone rigs lies in an affine space with different dimension than the sound sources. Solving these minimal problems is essential to robust estimation of microphone and sound source locations. We identify for each of these minimal problems the number of solutions in general and develop non-iterative solvers. We show that the proposed solvers are numerically stable in synthetic experiments. We also apply our method in a real indoor experiment and obtain accurate reconstruction using TOA measurements.},\n  keywords = {acoustic generators;acoustic radiators;calibration;microphones;time-of-arrival estimation;dual microphone rig self-calibration;TOA measurements;affine space;sound source locations;microphone locations;non-iterative solvers;time-of-arrival measurements;Receivers;Microphones;Calibration;Three-dimensional displays;Position measurement;Polynomials},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925185.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we study minimal problems related to dual microphone rig self-calibration using TOA measurements from sound sources with unknown positions. We consider the problem under varying setups: (i) whether the internal distances between the microphone nodes are known a priori or not, and (ii) whether the microphone rig lies in an affine space of a different dimension than the sound sources. Solving these minimal problems is essential to robust estimation of microphone and sound source locations. We identify for each of these minimal problems the number of solutions in general and develop non-iterative solvers. We show that the proposed solvers are numerically stable in synthetic experiments. We also apply our method in a real indoor experiment and obtain accurate reconstruction using TOA measurements.\n
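A far simpler cousin of this problem, with the full microphone-to-microphone distance matrix assumed known, is solved by classical multidimensional scaling; the paper's minimal solvers address the much harder case where distances are only measured to unknown sources.

# Classical MDS: recover sensor geometry (up to rigid motion) from the full
# pairwise distance matrix. Purely illustrative of the geometry involved.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))                       # 6 microphones in 3-D
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances

J = np.eye(6) - np.ones((6, 6)) / 6                   # centring matrix
G = -0.5 * J @ D2 @ J                                 # Gram matrix
vals, vecs = np.linalg.eigh(G)
X_hat = vecs[:, -3:] * np.sqrt(vals[-3:])             # top-3 eigenpairs

# Same shape up to rotation/reflection: compare sorted pairwise distances.
D2_hat = ((X_hat[:, None] - X_hat[None]) ** 2).sum(-1)
print(np.allclose(np.sort(D2.ravel()), np.sort(D2_hat.ravel())))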
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Generalized MNS method for parallel minor and principal subspace analysis.\n \n \n \n \n\n\n \n Nguyen, V.; Abed-Meraim, K.; Linh-Trung, N.; and Weber, R.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2265-2269, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"GeneralizedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952833,\n  author = {V. Nguyen and K. Abed-Meraim and N. Linh-Trung and R. Weber},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Generalized MNS method for parallel minor and principal subspace analysis},\n  year = {2014},\n  pages = {2265-2269},\n  abstract = {This paper introduces a generalized minimum noise subspace method for the fast estimation of the minor or principal subspaces for large dimensional multi-sensor systems. In particular, the proposed method allows parallel computation of the desired subspace when K > 1 computational units (DSPs) are available in a parallel architecture. The overall numerical cost is approximately reduced by a factor of K2 while preserving the estimation accuracy close to optimality. Different algorithm implementations are considered and their performance is assessed through numerical simulation.},\n  keywords = {approximation theory;estimation theory;sensor fusion;generalized MNS method;principal subspace analysis;parallel minor analysis;generalized minimum noise subspace method;principal subspace fast estimation;minor subspace fast estimation;large dimensional multisensor systems;computational units;DSPs;parallel architecture;numerical cost;numerical simulation;Estimation;Covariance matrices;Vectors;Signal to noise ratio;Accuracy;Algorithm design and analysis},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925309.pdf},\n}\n\n
\n
\n\n\n
\n This paper introduces a generalized minimum noise subspace method for the fast estimation of the minor or principal subspaces for large dimensional multi-sensor systems. In particular, the proposed method allows parallel computation of the desired subspace when K > 1 computational units (DSPs) are available in a parallel architecture. The overall numerical cost is approximately reduced by a factor of K² while preserving the estimation accuracy close to optimality. Different algorithm implementations are considered and their performance is assessed through numerical simulation.\n
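As a baseline for what the paper parallelises, the batch estimates of the principal (signal) and minor (noise) subspaces come from the EVD of the sample covariance; the MNS machinery itself is not reproduced here.

# Batch principal/minor subspace estimation via the sample covariance EVD.
import numpy as np

rng = np.random.default_rng(0)
N, T, r = 20, 1000, 3
S = rng.standard_normal((r, T))                       # r sources
mix = rng.standard_normal((N, r))
X = mix @ S + 0.1 * rng.standard_normal((N, T))       # N-sensor observations

R = X @ X.T / T                                       # sample covariance
vals, vecs = np.linalg.eigh(R)                        # ascending eigenvalues
principal = vecs[:, -r:]                              # signal subspace
minor = vecs[:, :N - r]                               # noise subspace
print("subspaces orthogonal:", np.allclose(principal.T @ minor, 0, atol=1e-10))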
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Sparsity-aided radar waveform synthesis.\n \n \n \n\n\n \n Hu, H.; Soltanalian, M.; Stoica, P.; and Zhu, X.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2270-2274, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952834,\n  author = {H. Hu and M. Soltanalian and P. Stoica and X. Zhu},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Sparsity-aided radar waveform synthesis},\n  year = {2014},\n  pages = {2270-2274},\n  abstract = {Owing to the inherent sparsity of the target scene, compressed sensing (CS) has been successfully employed in radar applications. It is known that the performance of target scene recovery in CS scenarios depends highly on the coherence of the sensing matrix (CSM), which is determined by the radar transmit waveform. In this paper, we present a cyclic optimization algorithm to effectively reduce the CSM via a judicious design of the radar waveform. The proposed method provides a reduction in the size of the Gram matrix associated with the sensing matrix, and moreover, relies on the fast Fourier transform (FFT) operations to improve the computation speed. As a result, the suggested algorithm can be used for large dimension designs (with ≲ 100 variables) even on an ordinary PC. The effectiveness of the proposed algorithm is illustrated through numerical examples.},\n  keywords = {compressed sensing;fast Fourier transforms;optimisation;radar signal processing;sparse matrices;sensing matrix coherence;sparsity-aided radar waveform synthesis;radar transmit waveform;fast Fourier transform;Gram matrix;cyclic optimization algorithm;compressed sensing;Coherence;Sensors;Compressed sensing;Vectors;Algorithm design and analysis;MIMO radar;compressed sensing;mutual coherence;radar;sensing matrix;sparsity;waveform synthesis},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Owing to the inherent sparsity of the target scene, compressed sensing (CS) has been successfully employed in radar applications. It is known that the performance of target scene recovery in CS scenarios depends highly on the coherence of the sensing matrix (CSM), which is determined by the radar transmit waveform. In this paper, we present a cyclic optimization algorithm to effectively reduce the CSM via a judicious design of the radar waveform. The proposed method provides a reduction in the size of the Gram matrix associated with the sensing matrix, and moreover, relies on the fast Fourier transform (FFT) operations to improve the computation speed. As a result, the suggested algorithm can be used for large dimension designs (with ≲ 100 variables) even on an ordinary PC. The effectiveness of the proposed algorithm is illustrated through numerical examples.\n
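The FFT angle can be illustrated on a delay-only model: if the sensing-matrix columns are circular shifts of the waveform, the Gram matrix entries are autocorrelation samples, so the mutual coherence comes from one FFT/IFFT pair instead of explicit matrix products. This is a simplification of the paper's setting, for intuition only.

# Coherence of a circular-shift (delay-only) sensing matrix via the FFT.
import numpy as np

rng = np.random.default_rng(0)
Ns = 64
s = np.exp(1j * 2 * np.pi * rng.random(Ns))           # unimodular waveform

# Circular autocorrelation via FFT: r = IFFT(|FFT(s)|^2).
r = np.fft.ifft(np.abs(np.fft.fft(s)) ** 2)
coherence = np.max(np.abs(r[1:])) / np.abs(r[0])      # worst off-peak / peak
print(f"mutual coherence of the shift matrix: {coherence:.3f}")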
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n How to localize ten microphones in one finger snap.\n \n \n \n \n\n\n \n Dokmanić, I.; Daudet, L.; and Vetterli, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2275-2279, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"HowPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952835,\n  author = {I. Dokmanić and L. Daudet and M. Vetterli},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {How to localize ten microphones in one finger snap},\n  year = {2014},\n  pages = {2275-2279},\n  abstract = {A compelling method to calibrate the positions of microphones in an array is with sources at unknown locations. Remarkably, it is possible to reconstruct the locations of both the sources and the receivers, if their number is larger than some prescribed minimum [1, 2]. Existing methods, based on times of arrival or time differences of arrival, only exploit the direct paths between the sources and the receivers. In this proof-of-concept paper, we observe that by placing the whole setup inside a room, we can reduce the number of sources required for calibration. Moreover, our technique allows us to compute the absolute position of the microphone array in the room, as opposed to knowing it up to a rigid transformation or reflection. The key observation is that echoes correspond to virtual sources that we get “for free”. This enables endeavors such as calibrating the array using only a single source.},\n  keywords = {calibration;microphone arrays;time-of-arrival estimation;times of arrival;time differences of arrival;microphone array;Microphones;Arrays;Calibration;Acoustics;Position measurement;Shape;Geometry;Localization;array calibration;indoor calibration;echo sorting;microphone array},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925541.pdf},\n}\n\n
\n
\n\n\n
\n A compelling method to calibrate the positions of microphones in an array is with sources at unknown locations. Remarkably, it is possible to reconstruct the locations of both the sources and the receivers, if their number is larger than some prescribed minimum [1, 2]. Existing methods, based on times of arrival or time differences of arrival, only exploit the direct paths between the sources and the receivers. In this proof-of-concept paper, we observe that by placing the whole setup inside a room, we can reduce the number of sources required for calibration. Moreover, our technique allows us to compute the absolute position of the microphone array in the room, as opposed to knowing it up to a rigid transformation or reflection. The key observation is that echoes correspond to virtual sources that we get “for free”. This enables endeavors such as calibrating the array using only a single source.\n
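The image-source model behind the "free" virtual sources is easy to write down for a shoebox room: each first-order echo behaves as the true source mirrored across one wall. Room and source coordinates below are invented.

# First-order image sources of a shoebox room: mirror the source across each wall.
import numpy as np

room = np.array([5.0, 4.0, 3.0])        # room dimensions (m)
src = np.array([1.2, 2.5, 1.0])         # true source position

images = []
for axis in range(3):
    for wall in (0.0, room[axis]):      # the two walls normal to this axis
        im = src.copy()
        im[axis] = 2 * wall - src[axis] # reflect across the wall plane
        images.append(im)
print(np.round(images, 2))              # six extra (virtual) calibration sources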
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Information-based pool size control of Boolean compressive sensing for adaptive group testing.\n \n \n \n \n\n\n \n Kawaguchi, Y.; Osa, T.; Barnwal, S.; Nagano, H.; and Togami, M.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2280-2284, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"Information-basedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952836,\n  author = {Y. Kawaguchi and T. Osa and S. Barnwal and H. Nagano and M. Togami},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Information-based pool size control of Boolean compressive sensing for adaptive group testing},\n  year = {2014},\n  pages = {2280-2284},\n  abstract = {A new method for solving the adaptive-group-testing problem is proposed. To solve the problem that the conventional method for non-adaptive group testing by Boolean compressive sensing needs a larger number of tests when the pool size is not optimized, the proposed method controls the pool size for each test. The control criterion is the expected information gain that can be calculated from the l0 norm of the estimated solution. Experimental simulation indicates that the proposed method outperforms the conventional method even when the number of defective items is varied and the number of defective items is unknown.},\n  keywords = {Boolean functions;compressed sensing;information-based pool size control;Boolean compressive sensing;adaptive group testing problem;defective items;Testing;Abstracts;Yttrium;Robustness;adaptive group testing;compressive sensing;information gain;entropy;sparse signal processing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925663.pdf},\n}\n\n
\n
\n\n\n
\n A new method for solving the adaptive-group-testing problem is proposed. To solve the problem that the conventional method for non-adaptive group testing by Boolean compressive sensing needs a larger number of tests when the pool size is not optimized, the proposed method controls the pool size for each test. The control criterion is the expected information gain that can be calculated from the l0 norm of the estimated solution. Experimental simulation indicates that the proposed method outperforms the conventional method even when the number of defective items is varied and the number of defective items is unknown.\n
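The pool-size rule can be sketched as follows: with prevalence p estimated from the current l0 norm, a pool of size s tests positive with probability 1-(1-p)^s, and the most informative binary test is the one whose outcome entropy is largest. This is a schematic reading of the criterion, not the authors' exact implementation.

# Pool size by expected information gain of a single Boolean (OR) test.
import numpy as np

def outcome_entropy(s, p):
    q = 1 - (1 - p) ** s                  # P(pool tests positive)
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

n_items, k_est = 256, 8                   # k_est plays the role of the l0 norm
p_est = k_est / n_items                   # estimated defect prevalence
sizes = np.arange(1, 65)
best = sizes[np.argmax([outcome_entropy(s, p_est) for s in sizes])]
print("chosen pool size:", best)          # ~ ln(2)/p for small p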
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Leak detection and localization in water distribution system using time frequency analysis.\n \n \n \n\n\n \n Thein Zan, T. T.; Wong, K.-J.; Lim, H.-B.; Whittle, A. J.; and Lee, B.-S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2285-2289, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952837,\n  author = {T. T. {Thein Zan} and K. -J. Wong and H. -B. Lim and A. J. Whittle and B. -S. Lee},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Leak detection and localization in water distribution system using time frequency analysis},\n  year = {2014},\n  pages = {2285-2289},\n  abstract = {Water loss through burst events or leaks is a significant problem affecting water utilities worldwide and is exacerbated by deterioration of the underground infrastructure. This paper shall report on our method to localize the source of a pipe burst by estimating the arrival time of the pressure transients at sensor nodes. Our proposed method uses Short Time Fourier Transform that has shown to overcome the limitation of Fourier Transform temporal deficiency. The paper will in addition report on the results obtained from a real leakage data obtained on the WaterWiSe@SG test-bed, which shows the superiority of our method compared to multi-level wavelet transform.},\n  keywords = {Fourier transforms;leak detection;nonelectric sensing devices;pipelines;time-frequency analysis;WaterWiSe SG test-bed;short time Fourier transform;sensor nodes;pressure transients;pipe burst;underground infrastructure;water utilities;burst events;water loss;time frequency analysis;leak localization;leak detection;water distribution system;Time-frequency analysis;Estimation;Transient analysis;Accuracy;Fourier transforms;Joints;Noise;event localization;transient detection;time frequency analysis;Short Time Fourier Transform},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Water loss through burst events or leaks is a significant problem affecting water utilities worldwide and is exacerbated by deterioration of the underground infrastructure. This paper reports on our method to localize the source of a pipe burst by estimating the arrival time of the pressure transients at sensor nodes. Our proposed method uses the Short Time Fourier Transform, which has been shown to overcome the temporal deficiency of the Fourier Transform. In addition, the paper reports results obtained from real leakage data recorded on the WaterWiSe@SG test-bed, which show the superiority of our method compared to the multi-level wavelet transform.\n
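A minimal STFT-based arrival-time estimate on synthetic data: the onset is taken as the first frame whose in-band energy rises well above a noise floor estimated from the leading frames. Sampling rate, band and threshold are invented for the demo.

# Transient arrival-time estimation from STFT band energy (SciPy).
import numpy as np
from scipy.signal import stft

fs = 2000.0
t = np.arange(0, 4, 1 / fs)
x = 0.05 * np.random.standard_normal(t.size)         # background noise
burst_start = int(1.7 * fs)                          # true arrival: 1.7 s
tb = t[: t.size - burst_start]
x[burst_start:] += 2.0 * np.exp(-3 * tb) * np.sin(2 * np.pi * 300 * tb)

f, frames, Z = stft(x, fs=fs, nperseg=128, noverlap=96)
band = (f > 200) & (f < 500)                         # band excited by the burst
energy = np.abs(Z[band]).sum(axis=0)
noise_floor = energy[:20].mean()                     # leading frames: noise only
onset = frames[np.argmax(energy > 5 * noise_floor)]
print(f"estimated arrival time: {onset:.3f} s")      # close to 1.7 s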
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Voice source modelling using deep neural networks for statistical parametric speech synthesis.\n \n \n \n \n\n\n \n Raitio, T.; Lu, H.; Kane, J.; Suni, A.; Vainio, M.; King, S.; and Alku, P.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2290-2294, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"VoicePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952838,\n  author = {T. Raitio and H. Lu and J. Kane and A. Suni and M. Vainio and S. King and P. Alku},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Voice source modelling using deep neural networks for statistical parametric speech synthesis},\n  year = {2014},\n  pages = {2290-2294},\n  abstract = {This paper presents a voice source modelling method employing a deep neural network (DNN) to map from acoustic features to the time-domain glottal flow waveform. First, acoustic features and the glottal flow signal are estimated from each frame of the speech database. Pitch-synchronous glottal flow time-domain waveforms are extracted, interpolated to a constant duration, and stored in a codebook. Then, a DNN is trained to map from acoustic features to these duration-normalised glottal waveforms. At synthesis time, acoustic features are generated from a statistical parametric model, and from these, the trained DNN predicts the glottal flow waveform. Illustrations are provided to demonstrate that the proposed method successfully synthesises the glottal flow waveform and enables easy modification of the waveform by adjusting the input values to the DNN. In a subjective listening test, the proposed method was rated as equal to a high-quality method employing a stored glottal flow waveform.},\n  keywords = {acoustic signal processing;neural nets;speech synthesis;statistical analysis;time-domain analysis;waveform analysis;voice source modelling;deep neural networks;statistical parametric speech synthesis;DNN;acoustic features;time-domain glottal flow waveform;glottal flow signal;speech database;Hidden Markov models;Speech;Speech synthesis;Acoustics;Feature extraction;Training;Neural networks;Deep neural network;DNN;voice source modelling;glottal flow;statistical parametric speech synthesis},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569908305.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a voice source modelling method employing a deep neural network (DNN) to map from acoustic features to the time-domain glottal flow waveform. First, acoustic features and the glottal flow signal are estimated from each frame of the speech database. Pitch-synchronous glottal flow time-domain waveforms are extracted, interpolated to a constant duration, and stored in a codebook. Then, a DNN is trained to map from acoustic features to these duration-normalised glottal waveforms. At synthesis time, acoustic features are generated from a statistical parametric model, and from these, the trained DNN predicts the glottal flow waveform. Illustrations are provided to demonstrate that the proposed method successfully synthesises the glottal flow waveform and enables easy modification of the waveform by adjusting the input values to the DNN. In a subjective listening test, the proposed method was rated as equal to a high-quality method employing a stored glottal flow waveform.\n
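A toy version of the feature-to-waveform regression, with scikit-learn's MLP standing in for the paper's DNN and synthetic pulse shapes standing in for duration-normalised glottal waveforms:

# Map acoustic feature vectors to fixed-length waveform templates with an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, d_feat, d_wave = 500, 12, 64
F = rng.standard_normal((n, d_feat))                  # "acoustic features"
t = np.linspace(0, 1, d_wave)
# Synthetic targets whose shape depends smoothly on the first feature.
W = np.sin(2 * np.pi * np.outer(1 + 0.1 * F[:, 0], t)) * (1 - t)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(F[:400], W[:400])
pred = net.predict(F[400:])                           # predicted pulse shapes
print("test MSE:", float(((pred - W[400:]) ** 2).mean()))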
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An improved chirp group delay based algorithm for estimating the vocal tract response.\n \n \n \n \n\n\n \n Jayesh, M. K.; and Ramalingam, C. S.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2295-2299, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952839,\n  author = {M. K. Jayesh and C. S. Ramalingam},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {An improved chirp group delay based algorithm for estimating the vocal tract response},\n  year = {2014},\n  pages = {2295-2299},\n  abstract = {We propose a method for vocal tract estimation that is better than Bozkurt's chirp group delay method [1] and its zero-phase variant [2]. The chirp group delay method works only for voiced speech, is critically dependent on finding the glottal closure instants (GCI), deteriorates in performance when more than two pitch cycles are included for analysis, and does not work for unvoiced speech. The zero-phase variant eliminates these drawbacks but works poorly for nasal sounds. In our proposed method all outside-unit-circle zeros are reflected inside before computing the chirp group delay. The advantages are: (a) GCI knowledge not required, (b) the vocal tract estimate is far less sensitive to the location and duration of the analysis window, (c) works for unvoiced sounds, and (d) captures the spectral valleys well for nasals, which in turn leads to better recognition accuracy.},\n  keywords = {feature extraction;Fourier transforms;speech processing;speech recognition;vectors;improved chirp group delay based algorithm;vocal tract response estimation;glottal closure instants;zero-phase variant;outside-unit-circle zeros;analysis window location;analysis window duration;speech processing methods;feature vector extraction;transform magnitude;Chirp;Delays;Speech;Noise;Speech processing;Natural languages;Estimation;vocal tract estimation;group delay},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926837.pdf},\n}\n\n
\n
\n\n\n
\n We propose a method for vocal tract estimation that is better than Bozkurt's chirp group delay method [1] and its zero-phase variant [2]. The chirp group delay method works only for voiced speech, is critically dependent on finding the glottal closure instants (GCI), deteriorates in performance when more than two pitch cycles are included for analysis, and does not work for unvoiced speech. The zero-phase variant eliminates these drawbacks but works poorly for nasal sounds. In our proposed method all outside-unit-circle zeros are reflected inside before computing the chirp group delay. The advantages are: (a) GCI knowledge not required, (b) the vocal tract estimate is far less sensitive to the location and duration of the analysis window, (c) works for unvoiced sounds, and (d) captures the spectral valleys well for nasals, which in turn leads to better recognition accuracy.\n
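The key reflection step is a few lines of NumPy: move every zero lying outside the unit circle to its conjugate-reciprocal position inside, then evaluate the group delay of the resulting minimum-phase polynomial with SciPy. The toy polynomial below is an invented stand-in for an estimated vocal tract numerator.

# Reflect outside-unit-circle zeros inside, then compute the group delay.
import numpy as np
from scipy.signal import group_delay

b = np.array([1.0, -2.5, 1.0])                 # toy FIR with a zero outside
z = np.roots(b)
z = np.where(np.abs(z) > 1, 1 / np.conj(z), z) # reflect: z -> 1/conj(z)
b_min = np.poly(z).real                        # minimum-phase numerator

w, gd = group_delay((b_min, [1.0]), w=512)
print("max |group delay| (samples):", float(np.abs(gd).max()))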
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Audiovisual to area and length functions inversion of human vocal tract.\n \n \n \n \n\n\n \n Elie, B.; and Laprie, Y.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2300-2304, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"AudiovisualPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952840,\n  author = {B. Elie and Y. Laprie},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Audiovisual to area and length functions inversion of human vocal tract},\n  year = {2014},\n  pages = {2300-2304},\n  abstract = {This paper proposes a multimodal approach to estimate the area function and the length of the vocal tract of oral vowels. The method is based on an iterative technique consisting in deforming an initial area function so that the output acoustic vector matches a specified target. The chosen acoustic vector is the formant frequency pattern. In order to regularize the ill-problem, several constraints are added to the algorithm. First, the lip termination area is estimated via a facial capture software. Then, the area function is constrained in such a way that it does not get too far from a neutral position, and it does not change too quickly from a temporal frame to the next, when dealing with dynamic inversion. The method proves to be efficient to approximate the area function and the length of the vocal tract for oral french vowels, both in static and dynamic configurations.},\n  keywords = {acoustic signal processing;audio-visual systems;iterative methods;speech processing;length function inversion;area function inversion;human vocal tract;multimodal approach;iterative technique;acoustic vector;formant frequency pattern;lip termination area;facial capture software;oral french vowels;dynamic configurations;static configurations;audiovisual inversion;Acoustics;Speech;Vectors;Estimation;Sensitivity;Frequency measurement;Apertures;Audiovisual inversion;Vocal tract length;Regularization;Dynamic inversion},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922791.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a multimodal approach to estimate the area function and the length of the vocal tract for oral vowels. The method is based on an iterative technique consisting in deforming an initial area function so that the output acoustic vector matches a specified target. The chosen acoustic vector is the formant frequency pattern. In order to regularize the ill-posed problem, several constraints are added to the algorithm. First, the lip termination area is estimated via facial capture software. Then, the area function is constrained in such a way that it does not get too far from a neutral position, and it does not change too quickly from one temporal frame to the next, when dealing with dynamic inversion. The method proves efficient at approximating the area function and the length of the vocal tract for oral French vowels, both in static and dynamic configurations.\n
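The structure of the regularised inversion (fit the acoustic target while staying near a neutral shape and close to the previous frame) can be caricatured with a linear toy forward map in place of the true vocal tract acoustics; all sizes and weights below are invented.

# Regularised inversion toy: match a target through a linear stand-in for the
# acoustic map, with neutral-shape and temporal-smoothness penalties.
import numpy as np

rng = np.random.default_rng(0)
M, F = 16, 3                          # area-function samples, "formants"
A_fwd = rng.standard_normal((F, M))   # toy stand-in for the acoustic map
neutral = np.ones(M)
prev = neutral.copy()                 # estimate from the previous frame
target = A_fwd @ (neutral + 0.3 * rng.standard_normal(M))

lam_n, lam_t, step = 0.1, 0.5, 0.01
area = neutral.copy()
for _ in range(2000):                 # gradient descent on the penalised cost
    grad = (A_fwd.T @ (A_fwd @ area - target)
            + lam_n * (area - neutral) + lam_t * (area - prev))
    area -= step * grad
print("residual:", np.linalg.norm(A_fwd @ area - target))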
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A speech presence probability estimator based on fixed priors and a heavy-tailed speech model.\n \n \n \n \n\n\n \n Fodor, B.; and Gerkmann, T.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2305-2309, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952841,\n  author = {B. Fodor and T. Gerkmann},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A speech presence probability estimator based on fixed priors and a heavy-tailed speech model},\n  year = {2014},\n  pages = {2305-2309},\n  abstract = {Speech enhancement approaches are often enhanced by speech presence probability (SPP) estimation. However, SPP estimators suffer from random fluctuations of the a posteriori signal-to-noise ratio (SNR). While there exist proposals that overcome the random fluctuations by basing the SPP framework on smoothed observations, these approaches do not take into account the super-Gaussian nature of speech signals. Thus, in this paper we define a framework that allows for modeling the likelihoods of speech presence for smoothed observations, while at the same time assuming super-Gaussian speech coefficients. The proposed approach is shown to outperform the reference approaches in terms of the amount of noise leakage and the amount of musical noise.},\n  keywords = {probability;speech enhancement;speech presence probability estimator;speech enhancement;signal-to-noise ratio;SNR;super-Gaussian nature;speech signals;super-Gaussian speech coefficients;noise leakage;musical noise;Speech;Signal to noise ratio;Speech enhancement;Estimation;Shape},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569914505.pdf},\n}\n\n
\n
\n\n\n
\n Speech enhancement approaches often benefit from speech presence probability (SPP) estimation. However, SPP estimators suffer from random fluctuations of the a posteriori signal-to-noise ratio (SNR). While there exist proposals that overcome the random fluctuations by basing the SPP framework on smoothed observations, these approaches do not take into account the super-Gaussian nature of speech signals. Thus, in this paper we define a framework that allows for modeling the likelihoods of speech presence for smoothed observations, while at the same time assuming super-Gaussian speech coefficients. The proposed approach is shown to outperform the reference approaches in terms of the amount of noise leakage and the amount of musical noise.\n
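For context, a classical fixed-prior SPP under a complex-Gaussian speech model (which the paper generalises to smoothed observations and a heavy-tailed speech prior) has a closed form; assuming equal priors and a fixed optimal SNR xi_opt, as sketched here:

# Fixed-prior SPP sketch: P(H1|y) = 1 / (1 + (1+xi) exp(-gamma xi/(1+xi))),
# with gamma the a posteriori SNR and priors P(H0) = P(H1) = 0.5.
import numpy as np

def spp(noisy_power, noise_power, xi_opt=10 ** (15 / 10)):
    gamma = noisy_power / noise_power            # a posteriori SNR
    return 1.0 / (1.0 + (1.0 + xi_opt) * np.exp(-gamma * xi_opt / (1.0 + xi_opt)))

print(spp(noisy_power=1.0, noise_power=1.0))     # noise-level bin -> low SPP
print(spp(noisy_power=30.0, noise_power=1.0))    # strong bin -> SPP near 1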
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Novel topic n-gram count LM incorporating document-based topic distributions and n-gram counts.\n \n \n \n \n\n\n \n Haidar, M. A.; and O'Shaughnessy, D.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2310-2314, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"NovelPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952842,\n  author = {M. A. Haidar and D. O'Shaughnessy},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Novel topic n-gram count LM incorporating document-based topic distributions and n-gram counts},\n  year = {2014},\n  pages = {2310-2314},\n  abstract = {In this paper, we introduce a novel topic n-gram count language model (NTNCLM) using topic probabilities of training documents and document-based n-gram counts. The topic probabilities for the documents are computed by averaging the topic probabilities of words seen in the documents. The topic probabilities of documents are multiplied by the document-based n-gram counts. The products are then summed-up for all the training documents. The results are used as the counts of the respective topics to create the NTNCLMs. The NTNCLMs are adapted by using the topic probabilities of a development test set that are computed as above. We compare our approach with a recently proposed TNCLM [1], where the long-range information outside of the n-gram events is not encountered. Our approach yields significant perplexity and word error rate (WER) reductions over the other approach using the Wall Street Journal (WSJ) corpus.},\n  keywords = {document handling;natural language processing;speech processing;topic n-gram count LM;document-based topic distributions;topic n-gram count language model;NTNCLM;topic probabilities;training documents;document-based n-gram counts;long-range information;word error rate;WER reductions;Wall Street Journal;WSJ corpus;Adaptation models;Mathematical model;Training;Computational modeling;Semantics;Interpolation;Speech recognition;Statistical n-gram language model;speech recognition;mixture models;topic models},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569909989.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we introduce a novel topic n-gram count language model (NTNCLM) using topic probabilities of training documents and document-based n-gram counts. The topic probabilities for the documents are computed by averaging the topic probabilities of words seen in the documents. The topic probabilities of documents are multiplied by the document-based n-gram counts. The products are then summed-up for all the training documents. The results are used as the counts of the respective topics to create the NTNCLMs. The NTNCLMs are adapted by using the topic probabilities of a development test set that are computed as above. We compare our approach with a recently proposed TNCLM [1], where the long-range information outside of the n-gram events is not encountered. Our approach yields significant perplexity and word error rate (WER) reductions over the other approach using the Wall Street Journal (WSJ) corpus.\n
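The counting scheme itself is a few lines: per-document n-gram counts are weighted by the document's topic probabilities and accumulated per topic. The toy documents and topic posteriors below are invented.

# Topic-dependent n-gram counts: document topic probabilities multiply
# document-level bigram counts, summed over documents per topic.
from collections import Counter, defaultdict

docs = [["the", "market", "fell"], ["the", "team", "won"], ["market", "rally"]]
# Hypothetical per-document topic posteriors (e.g., averaged word-topic probs).
doc_topics = [[0.9, 0.1], [0.1, 0.9], [0.8, 0.2]]

topic_bigram_counts = [defaultdict(float) for _ in range(2)]
for words, probs in zip(docs, doc_topics):
    counts = Counter(zip(words, words[1:]))          # document bigram counts
    for k, p in enumerate(probs):
        for bigram, c in counts.items():
            topic_bigram_counts[k][bigram] += p * c  # weight by topic prob

print(dict(topic_bigram_counts[0]))                  # topic-0 flavoured counts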
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Human motion detection in daily activity tasks using wearable sensors.\n \n \n \n \n\n\n \n Politi, O.; Mporas, I.; and Megalooikonomou, V.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2315-2319, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n \n \"HumanPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952843,\n  author = {O. Politi and I. Mporas and V. Megalooikonomou},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Human motion detection in daily activity tasks using wearable sensors},\n  year = {2014},\n  pages = {2315-2319},\n  abstract = {In this article we present a human motion detection framework, based on data derived from a single tri-axial accelerometer. The framework uses a set of different pre-processing methods that produce data representations which are respectively parameterized by statistical and physical features. These features are then concatenated and classified using well-known classification algorithms for the problem of motion recognition. Experimental evaluation was carried out according to a subject-dependent scenario, meaning that the classification is performed for each subject separately using their own data and the average accuracy for all individuals is computed. The best achieved detection performance for 14 everyday human motion activities, using the USC-HAD database, was approximately 95%. The results compare favorably with the best reported performance of 93.1% for the same database.},\n  keywords = {accelerometers;data structures;image classification;image motion analysis;object detection;object recognition;sensors;statistical analysis;human motion detection framework;wearable sensors;single triaxial accelerometer;data representations;statistical features;physical features;classification algorithms;motion recognition problem;subject-dependent scenario;USC-HAD database;Classification algorithms;Feature extraction;Motion detection;Accuracy;Sensors;Support vector machine classification;Accelerometers;wearable sensors;movement classification;human motion recognition;daily activity},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925557.pdf},\n}\n\n
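As a rough illustration of the pipeline sketched in this abstract (windowed accelerometer data, statistical features, off-the-shelf classifier), here is a Python sketch; the feature set, window size, and random stand-in data are assumptions, not the paper's configuration:

import numpy as np
from sklearn.svm import SVC

def window_features(acc):  # acc: (n_samples, 3) tri-axial accelerometer window
    return np.concatenate([
        acc.mean(axis=0), acc.std(axis=0),                 # simple statistical features
        np.abs(np.diff(acc, axis=0)).mean(axis=0),         # mean absolute first difference
        np.linalg.norm(acc, axis=1).mean(keepdims=True)])  # mean acceleration magnitude

# Hypothetical subject-dependent evaluation: train and test on one subject's windows.
rng = np.random.default_rng(0)
X = np.stack([window_features(rng.normal(size=(128, 3))) for _ in range(60)])
y = rng.integers(0, 3, size=60)                # placeholder activity labels
clf = SVC().fit(X[:40], y[:40])
print((clf.predict(X[40:]) == y[40:]).mean())  # accuracy on held-out windows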
A new approach to wavelet entropy: Application to postural signals. Franco, C.; Guméry, P.; Fleury, A.; and Vuillerme, N. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2320-2324, Sep. 2014.
@InProceedings{6952844,
  author = {C. Franco and P. Guméry and A. Fleury and N. Vuillerme},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {A new approach to wavelet entropy: Application to postural signals},
  year = {2014},
  pages = {2320-2324},
  abstract = {This study proposes a new approach for quantifying the complexity of physiological signals characterized by a spectral distribution in modes. Our approach is inspired by wavelet entropy but based on a modal representation: the Synchrosqueezing transform. The proposed index is computed for each time sample within the cone of influence of the decomposition. It is first validated and discussed on simulated multicomponent signals. Finally, it is applied to assess postural control and the ability to use all the sensory resources available. Results show significant differences in our index following an induced change in sensory conditions, whereas a conventional approach fails. This index may constitute a promising tool for the detection of postural disorders.},
  keywords = {medical signal processing;wavelet transforms;wavelet entropy;postural signals;physiological signals;spectral distribution;synchrosqueezing transform;simulated multicomponent signals;sensory resources;postural troubles detection;Complexity theory;Indexes;Entropy;Time-frequency analysis;Continuous wavelet transforms;Visualization;Physiology;Synchrosqueezing transform;Wavelet entropy;Complexity;Postural signals;CoP data},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924579.pdf},
}
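The per-time-sample entropy computation described here is easy to state generically: given any modal time-frequency representation (the synchrosqueezing step itself is assumed precomputed elsewhere), normalize the energy across modes at each time instant and take the Shannon entropy. A minimal numpy sketch, not the authors' code:

import numpy as np

def timewise_entropy(T):
    # T: (modes x time) modal representation, assumed given.
    energy = np.abs(T) ** 2
    p = energy / (energy.sum(axis=0, keepdims=True) + 1e-12)  # energy distribution per time sample
    return -(p * np.log(p + 1e-12)).sum(axis=0)               # one entropy value per time sample

# Toy two-mode example.
t = np.linspace(0, 1, 500)
T = np.vstack([np.sin(2 * np.pi * 5 * t), 0.3 * np.sin(2 * np.pi * 40 * t)])
print(timewise_entropy(T)[:5])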
Improved modeling and bounds for NQR spectroscopy signals. Kyriakidou, G.; Jakobsson, A.; Gudmundson, E.; Gregorovič, A.; Barras, J.; and Althoefer, K. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2325-2329, Sep. 2014.
@InProceedings{6952845,
  author = {G. Kyriakidou and A. Jakobsson and E. Gudmundson and A. Gregorovič and J. Barras and K. Althoefer},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Improved modeling and bounds for NQR spectroscopy signals},
  year = {2014},
  pages = {2325-2329},
  abstract = {Nuclear Quadrupole Resonance (NQR) is a method of detection and unique characterization of compounds containing quadrupolar nuclei, commonly found in many forms of explosives, narcotics, and medicines. Typically, multi-pulse sequences are used to acquire the NQR signal, allowing the resulting signal to be well modeled as a sum of exponentially damped sinusoidal echoes. In this paper, we improve upon the previously used NQR signal model, introducing an observed amplitude modulation of the spectral lines as a function of the sample temperature. This dependency noticeably affects the achievable identification performance in the typical case when the substance temperature is not perfectly known. We further extend the recently presented Cramér-Rao lower bound to the more detailed model, allowing one to determine suitable experimental conditions to optimize the detection and identifiability of the resulting signal. The theoretical results are carefully motivated using extensive NQR measurements.},
  keywords = {explosives;nuclear quadrupole resonance;signal detection;NQR spectroscopy signals;nuclear quadrupole resonance;compounds detection;quadrupolar nuclei;explosives;narcotics;medicines;sinusoidal echoes;spectral lines;Cramér-Rao lower bound;Temperature measurement;Data models;Radio frequency;Frequency measurement;Temperature dependence;Resonant frequency;Uncertainty;Nuclear Quadrupole Resonance;temperature dependence;off-resonance effects;Cramér-Rao lower bound},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924619.pdf},
}
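A toy rendering of the signal model named in this abstract, a sum of exponentially damped sinusoidal echoes with a scalar amplitude factor standing in for the temperature dependence; every parameter value here is illustrative only, not a measured NQR constant:

import numpy as np

def nqr_echo_train(t, freqs, amps, damping, echo_times, temp_factor=1.0):
    # Sum of damped sinusoidal echoes; temp_factor models the (assumed)
    # temperature-dependent amplitude modulation of the spectral lines.
    s = np.zeros_like(t, dtype=complex)
    for te in echo_times:                      # one echo per refocusing pulse
        for f, a, d in zip(freqs, amps, damping):
            s += temp_factor * a * np.exp(-d * np.abs(t - te)) \
                 * np.exp(2j * np.pi * f * (t - te))
    return s

t = np.linspace(0, 0.1, 2000)
s = nqr_echo_train(t, freqs=[3.4e3], amps=[1.0], damping=[200.0],
                   echo_times=[0.02, 0.04, 0.06], temp_factor=0.9)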
Recursive total least-squares estimation of frequency in three-phase power systems. Arablouei, R.; Dogançay, K.; and Werner, S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2330-2334, Sep. 2014.
@InProceedings{6952846,
  author = {R. Arablouei and K. Dogançay and S. Werner},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Recursive total least-squares estimation of frequency in three-phase power systems},
  year = {2014},
  pages = {2330-2334},
  abstract = {We propose an adaptive algorithm for estimating the frequency of a three-phase power system from its noisy voltage readings. We consider a second-order autoregressive linear predictive model for the noiseless complex-valued αβ signal of the system to relate the system frequency to the phase voltages. We use this model and the noisy voltage data to calculate a total least-squares (TLS) estimate of the system frequency by employing the inverse power method in a recursive manner. Simulation results show that the proposed algorithm, called recursive TLS (RTLS), outperforms the recursive least-squares (RLS) and the bias-compensated RLS (BCRLS) algorithms in estimating the frequency of both balanced and unbalanced three-phase power systems. Unlike BCRLS, RTLS does not require prior knowledge of the noise variance.},
  keywords = {adaptive signal processing;autoregressive processes;frequency estimation;inverse problems;power grids;regression analysis;recursive total least-squares frequency estimation;noisy voltage readings;adaptive algorithm;second-order autoregressive linear predictive model;noiseless complex-valued αβ signal;phase voltages;inverse power method;recursive TLS;unbalanced three-phase power systems;balanced three-phase power systems;adaptive signal processing;electric power grids;Frequency estimation;Power systems;Steady-state;Noise measurement;Signal to noise ratio;Estimation;Adaptive signal processing;frequency estimation;inverse power method;linear predictive modeling;total least-squares},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569909319.pdf},
}
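The underlying model is compact enough to sketch in batch form: for the complex αβ signal, the second-order linear prediction v(n+1) + v(n-1) = 2cos(ω)v(n) holds for both balanced and unbalanced systems, so 2cos(ω) can be estimated by total least-squares. The paper's recursive inverse-power-method update is replaced here by a plain SVD, so this illustrates the model, not the RTLS algorithm itself:

import numpy as np

def tls_frequency(v, fs):
    A = v[1:-1]                      # regressor: v(n)
    b = v[2:] + v[:-2]               # target: v(n+1) + v(n-1)
    C = np.column_stack([A, b])
    _, _, Vh = np.linalg.svd(C)      # TLS solution from the smallest right singular vector
    a = -Vh[-1, 0] / Vh[-1, 1]       # estimate of 2*cos(w)
    w = np.arccos(np.clip(a.real / 2, -1.0, 1.0))
    return w * fs / (2 * np.pi)

fs, f0, N = 1000.0, 50.0, 500
n = np.arange(N)
v = np.exp(2j * np.pi * f0 * n / fs) \
    + 0.01 * (np.random.randn(N) + 1j * np.random.randn(N))
print(tls_frequency(v, fs))          # approximately 50 Hz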
Advances in bacteria motility modelling via diffusion adaptation. Monajemi, S.; Sanei, S.; and Ong, S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2335-2339, Sep. 2014.
@InProceedings{6952847,
  author = {S. Monajemi and S. Sanei and S. Ong},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Advances in bacteria motility modelling via diffusion adaptation},
  year = {2014},
  pages = {2335-2339},
  abstract = {In this paper we model the biological networks of bacteria and antibacterial agents and investigate the effects of cooperation in the corresponding self-organized networks. The cooperative foraging of the bacteria has been used to solve non-gradient optimization problems. In order to obtain a more realistic model of the process, we extend the previously introduced strategies for bacteria motility to incorporate the effects of antibacterial agents and bacterial replication as two key aspects of this network. The proposed model provides a more accurate understanding of bacterial networks. Moreover, it has applications for various regenerative networks where the agents cooperate to solve an optimization problem. The model is examined and the effects of bacterial growth, diffusion of information and interaction of antibacterial agents with bacteria are evaluated.},
  keywords = {cellular biophysics;diffusion;microorganisms;optimisation;bacteria motility;diffusion adaptation;biological networks;antibacterial agents;self-organized networks;cooperative foraging;nongradient optimization problems;bacterial replication;regenerative networks;bacterial growth;Microorganisms;Antibacterial activity;Adaptation models;Chemicals;Biological system modeling;Optimization;Sociology;Biological networks;bacterial motility;diffusion adaptation;optimization},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923085.pdf},
}
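Diffusion adaptation, the cooperative mechanism this model builds on, is in its generic adapt-then-combine form a two-step update per agent. The sketch below is plain diffusion LMS with placeholder data and uniform combination weights, not the paper's bacterial network model:

import numpy as np

rng = np.random.default_rng(1)
N, M, mu = 10, 4, 0.05                     # agents, parameter size, step size
w_true = rng.normal(size=M)                # common target the agents estimate
A = np.ones((N, N)) / N                    # doubly stochastic combination weights (placeholder)
w = np.zeros((N, M))                       # each agent's current estimate

for _ in range(200):
    psi = np.empty_like(w)
    for k in range(N):                     # adaptation step: local gradient move
        u = rng.normal(size=M)
        d = u @ w_true + 0.1 * rng.normal()
        psi[k] = w[k] + mu * u * (d - u @ w[k])
    w = A @ psi                            # combination step: diffusion of information

print(np.linalg.norm(w.mean(axis=0) - w_true))  # small after convergence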
An alive electroencephalogram analysis system to assist the diagnosis of epilepsy. Ahmad, M. A.; Majeed, W.; and Khan, N. A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2340-2344, Sep. 2014.
@InProceedings{6952848,
  author = {M. A. Ahmad and W. Majeed and N. A. Khan},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {An alive electroencephalogram analysis system to assist the diagnosis of epilepsy},
  year = {2014},
  pages = {2340-2344},
  abstract = {Computer assisted electroencephalograph analysis tools are trained to classify the data based upon the “ground truth” provided by the clinicians. After development and delivery of these systems, there is no simple mechanism for these clinicians to improve the system's classification when they encounter a false classification by the system. So the improvement process of the system's classification after initial training (during development) can be termed `dead'. We consider the neurologist as the best available benchmark for the system's learning. In this article, we propose an `alive' system, capable of improving its performance by taking the clinician's feedback into consideration. The system is based on the discrete wavelet transform (DWT), which has been shown to be very effective for EEG signal analysis. PCA is applied on the statistical features which are extracted from DWT coefficients before classification by an SVM classifier. After corrective marking of a few epochs, the initial average accuracy of 94.8% rose to 95.12%.},
  keywords = {discrete wavelet transforms;electroencephalography;medical signal processing;patient diagnosis;principal component analysis;signal classification;support vector machines;alive electroencephalogram analysis system;epilepsy diagnosis;computer assisted electroencephalograph analysis tools;ground truth;false classification;neurologist;alive system;clinician feedback;DWT transform;EEG signal analysis;PCA;statistical features;DWT coefficients;SVM classifier;Electroencephalography;Feature extraction;Training;Epilepsy;Discrete wavelet transforms;Support vector machines;Accuracy;Electroencephalography (EEG);Epilepsy;Computer Assisted Analysis;Machine Learning;Biomedical Signal Processing},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926707.pdf},
}
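The classification chain named in this abstract (DWT, statistical features, PCA, SVM) can be sketched with standard libraries; the wavelet choice, feature statistics, and random stand-in epochs below are assumptions, and the final comment indicates how the `alive' feedback loop could close (the relabeling helper is hypothetical):

import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def dwt_features(epoch, wavelet="db4", level=4):
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    # Simple statistics of each sub-band's coefficients.
    return np.concatenate([[c.mean(), c.std(), np.abs(c).max()] for c in coeffs])

rng = np.random.default_rng(0)
X = np.stack([dwt_features(rng.normal(size=512)) for _ in range(40)])
y = rng.integers(0, 2, size=40)            # placeholder epileptic / non-epileptic labels
model = make_pipeline(PCA(n_components=5), SVC()).fit(X, y)
# An "alive" system would refit whenever a clinician corrects some epochs:
# X, y = append_corrected_epochs(X, y); model.fit(X, y)   # hypothetical helper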
Network observability and localization of sources of diffusion in tree networks with missing edges. Zejnilović, S.; Gomes, J.; and Sinopoli, B. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2345-2349, Sep. 2014.
@InProceedings{6952849,
  author = {S. Zejnilović and J. Gomes and B. Sinopoli},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Network observability and localization of sources of diffusion in tree networks with missing edges},
  year = {2014},
  pages = {2345-2349},
  abstract = {In order to quickly curb infections or prevent spreading of rumors, first the source of diffusion needs to be localized. We analyze the problem of source localization, based on infection times of a subset of nodes in incompletely observed tree networks, under a simple propagation model. Our scenario reflects the assumption that having access to all the nodes and full network topology is often not feasible. We evaluate the number of possible topologies that are consistent with the observed incomplete tree. We provide a sufficient condition for the selection of observed nodes, such that correct localization is possible, i.e., the network is observable. Finally, we formulate the source localization problem under these assumptions as a binary linear integer program. We then provide a small simulation example to illustrate the effect of the number of observed nodes on the problem complexity and on the number of possible solutions for the source.},
  keywords = {integer programming;linear programming;network theory (graphs);trees (mathematics);diffusion source localization;network observability;tree network;simple propagation model;network topology;observed node selection;binary linear integer program;Vegetation;Observers;Topology;Network topology;Observability;Optimization;Servers;Network observability;source localization;tree graphs;missing edges},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925611.pdf},
}
Graph Empirical Mode Decomposition. Tremblay, N.; Borgnat, P.; and Flandrin, P. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2350-2354, Sep. 2014.
@InProceedings{6952850,
  author = {N. Tremblay and P. Borgnat and P. Flandrin},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Graph Empirical Mode Decomposition},
  year = {2014},
  pages = {2350-2354},
  abstract = {An extension of Empirical Mode Decomposition (EMD) is defined for graph signals. EMD is an algorithm that decomposes a signal into a sum of modes, in a local and data-driven manner. The proposed Graph EMD (GEMD) for graph signals is based on careful consideration of the key points of EMD: defining the extrema, the interpolation procedure, and the sifting process stopping criterion. Examples of GEMD are shown on the 2D grid and on two examples of sensor networks. Finally, the effect of the graph's connectivity on the algorithm's performance is discussed.},
  keywords = {graph theory;interpolation;signal processing;graph empirical mode decomposition;graph signals;signal decomposition;graph EMD;GEMD;interpolation procedure;sifting process;sensor networks;Interpolation;Three-dimensional displays;Empirical mode decomposition;Chirp;Manifolds;Signal processing algorithms;Graph signal processing;Empirical Mode Decomposition;Graph interpolation},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922141.pdf},
}
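Of the three EMD ingredients listed (extrema, interpolation, stopping criterion), the extrema definition is the simplest to make concrete on a graph: a vertex is a local maximum (minimum) if its signal value is larger (smaller) than at all of its neighbors. A small sketch with a placeholder adjacency matrix, not the authors' implementation:

import numpy as np

def graph_extrema(A, x):
    # A: (n x n) adjacency matrix; x: graph signal, one value per vertex.
    maxima, minima = [], []
    for v in range(len(x)):
        nbrs = np.nonzero(A[v])[0]
        if len(nbrs) == 0:
            continue
        if np.all(x[v] > x[nbrs]):
            maxima.append(v)
        elif np.all(x[v] < x[nbrs]):
            minima.append(v)
    return maxima, minima

A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])  # path graph
x = np.array([0.0, 2.0, -1.0, 0.5])
print(graph_extrema(A, x))    # ([1, 3], [0, 2])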
Consensus for continuous belief functions. Weng, Z.; and Djurić, P. M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2355-2359, Sep. 2014.
@InProceedings{6952851,
  author = {Z. Weng and P. M. Djurić},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Consensus for continuous belief functions},
  year = {2014},
  pages = {2355-2359},
  abstract = {We study the belief consensus problem in networks of agents. Unlike previous work in the literature, where agents try to reach consensus on a scalar or vector, here we investigate how agents can reach a consensus on a continuous probability distribution. In our setting, the agents fuse functions instead of point estimates. The objective is that every agent ends up with the belief being the global Bayesian posterior. We show that to achieve the objective, the agents need to know the total number of agents in the network. In many scenarios, this number is not available and therefore the global Bayesian posterior is not achievable. In such cases, we have to resort to approximation methods. We confine ourselves to Gaussian cases and formulate the optimization problem for them. Then we propose two methods for the selection of weighting coefficients used for combining information from neighbors in the fusion process. We also provide simulation results that demonstrate the performance of the methods.},
  keywords = {approximation theory;Bayes methods;multi-agent systems;probability;continuous belief functions;consensus problem;agent networks;continuous probability distribution;global Bayesian posterior;approximation methods;weighting coefficients;fusion process;Measurement;Probability distribution;Optimization;Network topology;Topology;Covariance matrices;Signal processing algorithms;Agent networks;belief consensus;Covariance Intersection;fusion of probability distributions},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924835.pdf},
}
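In the Gaussian case, the fusion the paper studies amounts to a weighted combination of information matrices (covariance-intersection style). The sketch below uses uniform weights as a placeholder for the optimized weighting coefficients the paper proposes:

import numpy as np

def fuse_gaussians(means, covs, weights):
    # Weighted information-matrix fusion; weights are assumed to sum to one.
    info = sum(w * np.linalg.inv(P) for w, P in zip(weights, covs))
    P_f = np.linalg.inv(info)
    m_f = P_f @ sum(w * np.linalg.inv(P) @ m
                    for w, m, P in zip(weights, means, covs))
    return m_f, P_f

means = [np.array([1.0, 0.0]), np.array([1.2, -0.1])]
covs = [np.eye(2), 2 * np.eye(2)]
print(fuse_gaussians(means, covs, [0.5, 0.5]))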
Distributed reduced-rank estimation based on joint iterative optimization in sensor networks. Xu, S.; de Lamare, R. C.; and Poor, H. V. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2360-2364, Sep. 2014.
@InProceedings{6952852,
  author = {S. Xu and R. C. {de Lamare} and H. V. Poor},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Distributed reduced-rank estimation based on joint iterative optimization in sensor networks},
  year = {2014},
  pages = {2360-2364},
  abstract = {This paper proposes a novel distributed reduced-rank scheme and an adaptive algorithm for distributed estimation in wireless sensor networks. The proposed distributed scheme is based on a transformation that performs dimensionality reduction at each agent of the network followed by a reduced-dimension parameter vector. A distributed reduced-rank joint iterative estimation algorithm is developed, which has the ability to achieve significantly reduced communication overhead and improved performance when compared with existing techniques. Simulation results illustrate the advantages of the proposed strategy in terms of convergence rate and mean square error performance.},
  keywords = {adaptive signal processing;iterative methods;wireless sensor networks;distributed reduced rank joint iterative estimation algorithm;reduced dimension parameter vector;dimensionality reduction;wireless sensor network;adaptive algorithm;joint iterative optimization;distributed reduced rank estimation;Estimation;Optimization;Signal processing algorithms;Vectors;Convergence;Joints;Wireless sensor networks;Dimensionality reduction;distributed estimation;reduced-rank methods;wireless sensor networks},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569920951.pdf},
}
On sequential estimation of linear models from data with correlated noise. Wang, Y.; and Djurić, P. M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2365-2369, Sep. 2014.
@InProceedings{6952873,
  author = {Y. Wang and P. M. Djurić},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {On sequential estimation of linear models from data with correlated noise},
  year = {2014},
  pages = {2365-2369},
  abstract = {In this paper, we consider the problem of Bayesian sequential estimation of a set of time-invariant parameters. At every time instant, a new observation through a linear model is obtained where the observations are distorted by spatially correlated noise with unknown covariance, whereas in time, the noise samples are independent and identically distributed. We derive the joint posterior of the parameters of interest and the covariance, and we propose several approximations to make the Bayesian estimation tractable. Then we propose a method for forming a pseudo posterior, which is suitable for settings where estimation over networks is applied. By computer simulations, we demonstrate that the Kullback-Leibler divergence between the pseudo posterior and a posterior obtained from a known covariance decreases as the acquisition of new observations continues. We also provide computer simulations that compare the proposed method with the least squares method.},
  keywords = {approximation theory;belief networks;signal processing;Bayesian sequential estimation;linear model;unknown covariance;pseudo posterior;Kullback-Leibler divergence;correlated noise;Estimation;Noise;Bayes methods;Least squares approximations;Sensors;Computational modeling;Bayesian inference;distributed estimation;unknown covariance;pseudo posterior},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926831.pdf},
}
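The evaluation criterion used here, the Kullback-Leibler divergence between two Gaussian posteriors, has a closed form worth recalling; the sketch below implements the standard expression for multivariate Gaussians (the test values are arbitrary):

import numpy as np

def kl_gauss(m0, S0, m1, S1):
    # KL( N(m0,S0) || N(m1,S1) ) for k-dimensional Gaussians.
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

print(kl_gauss(np.zeros(2), np.eye(2), np.ones(2), 2 * np.eye(2)))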
Model-based processing for acoustic scene analysis. Nadeu, C.; Chakraborty, R.; and Wolf, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2370-2374, Sep. 2014.
@InProceedings{6952874,
  author = {C. Nadeu and R. Chakraborty and M. Wolf},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Model-based processing for acoustic scene analysis},
  year = {2014},
  pages = {2370-2374},
  abstract = {The analysis of acoustic scenes requires several functionalities, of which recognition (of speech, speakers, and other acoustic events) and spatial localization are perhaps the two most relevant. For reduced invasiveness, the microphones are far away from the sound sources, and possibly grouped in arrays, which may be distributed, rather than arranged, in the room. Aiming at increased performance, the usual model-based approach employed for sound recognition or detection can be extended to other co-occurrent tasks like source localization, so both tasks can be carried out jointly, using the same formulation and processing. In this paper, we intend to illustrate that point by presenting together a few new model-based techniques that deal with the problems of overlapped-sound recognition, multi-source localization, and channel selection. They are briefly described, and tested in a smart-room environment with a multiple microphone-array setup.},
  keywords = {acoustic signal detection;acoustic signal processing;microphone arrays;signal classification;microphone-array setup;smart-room environment;channel selection;multisource localization;overlapped-sounds recognition;model-based techniques;sound detection;sound sources;spatial localization;acoustic scenes analysis;Acoustics;Microphone arrays;Speech recognition;Speech;Array signal processing;Computational modeling;Acoustic scene analysis;audio recognition;acoustic source localization;channel selection;multi-microphone processing},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925565.pdf},
}
Multi-microphone fusion for detection of speech and acoustic events in smart spaces. Giannoulis, P.; Potamianos, G.; Katsamanis, A.; and Maragos, P. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2375-2379, Sep. 2014.
@InProceedings{6952875,
  author = {P. Giannoulis and G. Potamianos and A. Katsamanis and P. Maragos},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Multi-microphone fusion for detection of speech and acoustic events in smart spaces},
  year = {2014},
  pages = {2375-2379},
  abstract = {In this paper, we examine the challenging problem of detecting acoustic events and voice activity in smart indoor environments, equipped with multiple microphones. In particular, we focus on channel combination strategies, aiming to take advantage of the multiple microphones installed in the smart space, capturing the potentially noisy acoustic scene from the far-field. We propose various such approaches that can be formulated as fusion at the signal, feature, or at the decision level, as well as combinations of the above, also including multi-channel training. We apply our methods on two multi-microphone databases: (a) one recorded inside a small meeting room, containing twelve classes of isolated acoustic events; and (b) a speech corpus containing interfering noise sources, simulated inside a smart home with multiple rooms. Our multi-channel approaches demonstrate significant improvements, reaching relative error reductions over a single-channel baseline of 9.3% and 44.8% in the two datasets, respectively.},
  keywords = {acoustic signal detection;home computing;microphones;speech processing;relative error reductions;smart home;multimicrophone databases;noisy acoustic scene;smart indoors environments;acoustic event detection;speech detection;multimicrophone fusion;Hidden Markov models;Acoustics;Channel estimation;Speech;Microphones;Training;Event detection;acoustic event detection and classification;voice activity detection;multi-channel fusion},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925285.pdf},
}
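Among the fusion levels listed in this abstract, decision-level combination is the easiest to illustrate: each channel emits a per-segment label and the channels vote. The labels below are placeholders, not the paper's classifier outputs:

from collections import Counter

def decision_fusion(channel_labels):
    # channel_labels: list of per-channel label sequences of equal length;
    # returns the majority label for each segment.
    return [Counter(frame).most_common(1)[0][0] for frame in zip(*channel_labels)]

ch1 = ["speech", "door", "speech"]
ch2 = ["speech", "door", "silence"]
ch3 = ["silence", "door", "speech"]
print(decision_fusion([ch1, ch2, ch3]))   # ['speech', 'door', 'speech']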
Distant speech recognition in reverberant noisy conditions employing a microphone array. Morales-Cordovilla, J. A.; Hagmüller, M.; Pessentheiner, H.; and Kubin, G. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2380-2384, Sep. 2014.
@InProceedings{6952876,
  author = {J. A. Morales-Cordovilla and M. Hagmüller and H. Pessentheiner and G. Kubin},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Distant speech recognition in reverberant noisy conditions employing a microphone array},
  year = {2014},
  pages = {2380-2384},
  abstract = {This paper addresses the problem of distant speech recognition in reverberant noisy conditions employing a microphone array. We present a prototype system that can segment the utterances in real-time and generate robust ASR results off-line. The segmentation is carried out by a voice activity detector based on deep belief networks, the speaker localization by a position-pitch plane, and the enhancement by a novel combination of convex optimized beamforming and vector Taylor series compensation. All of the components are compared with other similar ones and justified in terms of word accuracy on a proposed database which simulates distant speech recognition in a home environment.},
  keywords = {array signal processing;belief networks;convex programming;microphone arrays;signal detection;speaker recognition;speech enhancement;home environment;speech enhancement;vector Taylor series compensation;convex optimized beamforming;position-pitch plane;speaker localization;deep belief networks;voice activity detector;robust ASR;microphone array;reverberant noisy conditions;distant speech recognition;Speech recognition;Noise;Microphones;Speech;Arrays;Vectors;Accuracy;distant speech recognition;deep belief network voice activity detection;PoPi speaker localization;convexoptimized beamforming;vector Taylor series compensation;reverberant and noisy environment;natural mixing;German database},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925567.pdf},
}
Exploiting inter-microphone agreement for hypothesis combination in distant speech recognition. Guerrero, C.; and Omologo, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2385-2389, Sep. 2014.
@InProceedings{6952877,
  author = {C. Guerrero and M. Omologo},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Exploiting inter-microphone agreement for hypothesis combination in distant speech recognition},
  year = {2014},
  pages = {2385-2389},
  abstract = {A multi-microphone hypothesis combination approach, suitable for the distant-talking scenario, is presented in this paper. The method is based on the inter-microphone agreement of information, extracted at the speech recognition level. In particular, temporal information is exploited to organize the clusters that shape the resulting confusion network, and to reduce the global hypothesis search space. As a result, a single combined confusion network is generated from multiple lattices. The approach offers a novel perspective to solutions based on confusion network combination. The method was evaluated in a simulated domestic environment equipped with widely spaced microphones. The experimental evidence suggests that results, comparable or, in some cases, better than the state of the art, can be achieved under optimal configurations with the proposed method.},
  keywords = {microphones;speech recognition;intermicrophone agreement;distant speech recognition;multimicrophone hypothesis combination approach;Lattices;Microphones;Speech recognition;Microwave integrated circuits;Computer numerical control;Speech;Decoding;Distant speech recognition;hypothesis combination;multi-microphone;confusion networks},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925201.pdf},
}
Experiments in acoustic source localization using sparse arrays in adverse indoors environments. Tsiami, A.; Katsamanis, A.; Maragos, P.; and Potamianos, G. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2390-2394, Sep. 2014.
@InProceedings{6952878,
  author = {A. Tsiami and A. Katsamanis and P. Maragos and G. Potamianos},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Experiments in acoustic source localization using sparse arrays in adverse indoors environments},
  year = {2014},
  pages = {2390-2394},
  abstract = {In this paper we experiment with 2-D source localization in smart homes under adverse conditions using sparse distributed microphone arrays. We propose some improvements to deal with problems due to high reverberation, noise and the use of a limited number of microphones. These consist of a pre-filtering stage for dereverberation and an iterative procedure that aims to increase accuracy. Experiments carried out on relatively large databases with both simulated and real recordings of sources in various positions indicate that the proposed method exhibits better performance compared to others under challenging conditions, while also being computationally efficient. It is demonstrated that although reverberation degrades localization performance, this degradation can be compensated by identifying the reliable microphone pairs and discarding the outliers.},
  keywords = {acoustic signal processing;array signal processing;filtering theory;iterative methods;microphone arrays;reverberation;source separation;acoustic source localization;adverse indoors environments;2D source localization;smart homes;sparse distributed microphone arrays;high reverberation;pre-filtering stage;dereverberation;iterative procedure;reliable microphone pairs;Estimation;Reverberation;Speech;Direction-of-arrival estimation;Microphone arrays;Databases;source localization;reverberation;outlier elimination;sparse arrays},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925293.pdf},
}
Exploring deep Markov models in genomic data compression using sequence pre-analysis. Pratas, D.; and Pinho, A. J. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2395-2399, Sep. 2014.
@InProceedings{6952879,
  author = {D. Pratas and A. J. Pinho},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Exploring deep Markov models in genomic data compression using sequence pre-analysis},
  year = {2014},
  pages = {2395-2399},
  abstract = {The pressure to find efficient genomic compression algorithms is being felt worldwide, as evidenced by several prizes and competitions. In this paper, we propose a compression algorithm that relies on a pre-analysis of the data before compression, with the aim of identifying regions of low complexity. This strategy enables us to use deeper context models, supported by hash-tables, without requiring huge amounts of memory. As an example, context depths as large as 32 are attainable for alphabets of four symbols, as is the case of genomic sequences. These deeper context models show very high compression capabilities in very repetitive genomic sequences, yielding improvements over previous algorithms. Furthermore, this method is universal, in the sense that it can be used in any type of textual data (such as quality-scores).},
  keywords = {biology computing;data analysis;data compression;genomics;Markov processes;deep Markov models;genomic data compression algorithm;data sequence pre-analysis;low complexity regions;hash-tables;repetitive genomic sequences;textual data;Bioinformatics;Genomics;DNA;Context;Data compression;Context modeling;Data models;Genomic data compression;hash-tables;finite-context models},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926675.pdf},
}
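The key implementation point here, hash-table-backed deep context models, can be sketched in a few lines: order-k contexts become dictionary keys, so memory grows with the number of contexts actually observed rather than with 4^k. A minimal Python sketch (the Laplace smoothing is an assumption; the paper's exact estimator may differ):

from collections import defaultdict

class ContextModel:
    def __init__(self, k, alphabet="ACGT"):
        self.k, self.alphabet = k, alphabet
        self.counts = defaultdict(lambda: defaultdict(int))   # context -> symbol -> count

    def update(self, seq):
        # Count each symbol given its k preceding symbols.
        for i in range(self.k, len(seq)):
            self.counts[seq[i - self.k:i]][seq[i]] += 1

    def prob(self, context, symbol, alpha=1.0):
        # Laplace-smoothed conditional probability estimate.
        c = self.counts[context]
        total = sum(c.values()) + alpha * len(self.alphabet)
        return (c[symbol] + alpha) / total

m = ContextModel(k=8)
m.update("ACGTACGTACGTACGT")
print(m.prob("GTACGTAC", "G"))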
Perfect periodic sequences for Legendre nonlinear filters. Carini, A.; Cecchi, S.; Romoli, L.; and Sicuranza, G. L. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2400-2404, Sep. 2014.
@InProceedings{6952880,
  author = {A. Carini and S. Cecchi and L. Romoli and G. L. Sicuranza},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Perfect periodic sequences for Legendre nonlinear filters},
  year = {2014},
  pages = {2400-2404},
  abstract = {The paper shows that perfect periodic sequences can be developed and used for the identification of Legendre nonlinear filters, a sub-class of linear-in-the-parameters nonlinear filters recently introduced in the literature. A periodic sequence is perfect for the identification of a nonlinear filter if all cross-correlations between two different basis functions, estimated over a period, are zero. Using perfect periodic sequences as input signals, the unknown nonlinear system and its most relevant basis functions can be identified with the cross-correlation method. The effectiveness and efficiency of this approach are illustrated with experimental results involving a real nonlinear system.},
  keywords = {nonlinear filters;perfect periodic sequences;nonlinear system;Legendre nonlinear filters;Nonlinear systems;Polynomials;Mathematical model;Newton method;Indexes;Computational modeling;Nonlinear system identification;linear-in-the-parameters nonlinear filters;Legendre nonlinear filters;perfect periodic sequences;cross-correlation method},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917457.pdf},
}
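The defining property stated in this abstract is directly checkable numerically: an input is perfect for a filter class when all cross-correlations between distinct basis-function outputs, averaged over one period, vanish. The toy basis below (pure delays, driven by a CAZAC chirp with ideal periodic autocorrelation) stands in for the Legendre basis functions of the paper:

import numpy as np

def is_perfect(basis_outputs, tol=1e-9):
    B = np.stack(basis_outputs)        # each row: one basis-function output over a period
    G = B @ B.conj().T / B.shape[1]    # period-averaged cross-correlation matrix
    off = G - np.diag(np.diag(G))
    return np.max(np.abs(off)) < tol   # perfect if all off-diagonal terms vanish

P = 64
n = np.arange(P)
x = np.exp(1j * np.pi * n**2 / P)             # CAZAC chirp: ideal periodic autocorrelation
outputs = [np.roll(x, d) for d in range(4)]   # toy basis: pure delays of the input
print(is_perfect(outputs))                    # True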
A multidimensional approach to Wave Digital Filters with multiple nonlinearities. Schwerdtfeger, T.; and Kummert, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2405-2409, Sep. 2014.
@InProceedings{6952881,
  author = {T. Schwerdtfeger and A. Kummert},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {A multidimensional approach to Wave Digital Filters with multiple nonlinearities},
  year = {2014},
  pages = {2405-2409},
  abstract = {The implementation of nonlinear elements in Wave Digital Filters (WDFs) is usually restricted to just one nonlinear one-port per structure. Existing approaches that aim to circumvent this restriction have in common that they neglect the notion of modularity and thus the reusability of the original Wave Digital concept. In this paper, a new modular approach to implement an arbitrary number of nonlinearities based on Multidimensional Wave Digital Filters (MDWDFs) is presented. For this, the contractivity property of WDFs is shown. On that basis, the new approach is studied with respect to possible side-effects and an appropriate modification is proposed that counteracts these effects and significantly improves the convergence behaviour.},
  keywords = {wave digital filters;multidimensional wave digital filters;MDWDF;multiple nonlinearities;nonlinear elements;modular approach;arbitrary number;contractivity property;Delays;Digital filters;Convergence;Ports (Computers);Prototypes;Mathematical model;Integrated circuit modeling;Wave Digital Filters;Multidimensional;Contractivity;Multiple Nonlinearities;Analog Modeling},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923213.pdf},
}
\n \n\n \n \n \n \n \n Fast filter in non-linear systems with application to stochastic volatility model.\n \n \n \n\n\n \n Derrode, S.; and Pieczynski, W.\n\n\n \n\n\n\n In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2410-2414, Sep. 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{6952882,\n  author = {S. Derrode and W. Pieczynski},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Fast filter in non-linear systems with application to stochastic volatility model},\n  year = {2014},\n  pages = {2410-2414},\n  abstract = {We consider the problem of optimal statistical filtering in nonlinear and non-Gaussian systems. The novelty consists of approximating the non-linear system by a recent switching system, in which exact fast optimal filtering is workable. The new method is applied to filter stochastic volatility model and some experiments show its efficiency.},\n  keywords = {filtering theory;statistical analysis;stochastic processes;nonlinear system;optimal statistical filtering;nonGaussian system;switching system;fast optimal filtering;filter stochastic volatility model;Markov processes;Switches;Abstracts;Zinc;Non-linear systems;Stochastic volatility model;Optimal statistical filter;Conditionally Gaussian linear state-space model;Conditionally Markov switching hidden linear model;Filtering in switching systems},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
We consider the problem of optimal statistical filtering in nonlinear and non-Gaussian systems. The novelty consists of approximating the non-linear system by a recently proposed switching system, in which exact fast optimal filtering is workable. The new method is applied to the filtering of a stochastic volatility model, and experiments show its efficiency.
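For orientation, the canonical discrete-time stochastic volatility model that such filters target can be simulated in a few lines; the parameter names below are the standard ones from the SV literature, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(1)
T, mu, phi, sigma, beta = 1000, -1.0, 0.95, 0.25, 0.6   # illustrative values

x = np.empty(T)                                   # log-volatility (hidden state)
x[0] = mu + sigma / np.sqrt(1 - phi ** 2) * rng.standard_normal()
for t in range(1, T):
    x[t] = mu + phi * (x[t - 1] - mu) + sigma * rng.standard_normal()

y = beta * np.exp(x / 2) * rng.standard_normal(T)  # observed returns
# A filter (particle, EKF, or the paper's switching approximation) would estimate
# x[t] from y[1..t]; the model is nonlinear and non-Gaussian, hence the difficulty.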
Generalization of Campbell's theorem to nonstationary noise. Cohen, L. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2415-2419, Sep. 2014.
@InProceedings{6952883,\n  author = {L. Cohen},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Generalization of Campbell's theorem to nonstationary noise},\n  year = {2014},\n  pages = {2415-2419},\n  abstract = {Campbell's theorem is a fundamental result in noise theory and is applied in many fields of science and engineering. It gives a simple but very powerful expression for the mean and standard deviation of a stationary random pulse train. We generalize Campbell's theorem to the non-stationary case where the random process is space and time dependent. We also generalize it to a pulse train of waves, acoustic and electromagnetic, where the intensity is defined as the absolute square of the pulse train.},\n  keywords = {noise;signal processing;Campbell theorem;nonstationary noise;noise theory;science;engineering;random pulse train;Noise;Standards;Reverberation;Random processes;Time-frequency analysis;System-on-chip;Correlation;nonstationary noise;Campbell's theorem;random pulse train;reverberation;time-frequency},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925969.pdf},\n}\n\n
Campbell's theorem is a fundamental result in noise theory and is applied in many fields of science and engineering. It gives a simple but very powerful expression for the mean and standard deviation of a stationary random pulse train. We generalize Campbell's theorem to the non-stationary case where the random process is space and time dependent. We also generalize it to a pulse train of waves, acoustic and electromagnetic, where the intensity is defined as the absolute square of the pulse train.
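The stationary form of Campbell's theorem, mean = lambda times the integral of h(t) and variance = lambda times the integral of h(t)^2, is easy to check numerically. The sketch below simulates a Poisson pulse train and compares empirical moments with the formulas; all constants are illustrative.

import numpy as np

rng = np.random.default_rng(2)
lam, T, dt = 50.0, 200.0, 1e-3                  # event rate, duration, time step
t = np.arange(0, T, dt)
h = np.exp(-t / 0.05) * (t < 0.5)               # causal pulse shape h(t)

arrivals = rng.uniform(0, T, rng.poisson(lam * T))
train = np.zeros_like(t)
np.add.at(train, (arrivals / dt).astype(int), 1.0 / dt)  # Dirac comb approximation
s = np.convolve(train, h)[: t.size] * dt        # shot noise s(t) = sum_k h(t - t_k)

core = s[int(1 / dt):]                          # discard the start-up transient
print(core.mean(), lam * h.sum() * dt)          # Campbell: mean = lam * int h
print(core.var(), lam * (h ** 2).sum() * dt)    # Campbell: var  = lam * int h^2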
Multi temporal distance images for shot detection in soccer games. Hoernig, M.; Herrmann, M.; and Radig, B. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2420-2424, Sep. 2014.
@InProceedings{6952884,\n  author = {M. Hoernig and M. Herrmann and B. Radig},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Multi temporal distance images for shot detection in soccer games},\n  year = {2014},\n  pages = {2420-2424},\n  abstract = {We present a new approach for video shot detection and introduce multi temporal distance images (MTDIs), formed by chisquare based similarity measures that are calculated pairwise within a floating window of video frames. By using MTDI-based boundary detectors, various cuts and transitions in various shapes (dissolves, overlayed effects, fades, and others) can be determined. The algorithm has been developed within the special context of soccer game TV broadcasts, where a particular interest in long view shots is intrinsic. With a correct shot detection rate in camera 1 shots of 98.2% within our representative test data set, our system outperforms competing state-of-the-art systems.},\n  keywords = {object detection;sport;video signal processing;multitemporal distance images;MTDI;chisquare based similarity measures;video shot detection;floating window;video frames;boundary detectors;soccer game TV broadcasts;Cameras;Detectors;Shape;Histograms;Games;Symmetric matrices;Robustness;soccer video analysis;video indexing;multi temporal distance image (MTDI);video segmentation;video shot boundary detection},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923481.pdf},\n}\n\n
We present a new approach for video shot detection and introduce multi temporal distance images (MTDIs), formed by chi-square-based similarity measures calculated pairwise within a floating window of video frames. By using MTDI-based boundary detectors, various cuts and transitions of various shapes (dissolves, overlaid effects, fades, and others) can be determined. The algorithm has been developed within the special context of soccer game TV broadcasts, where a particular interest in long view shots is intrinsic. With a correct shot detection rate of 98.2% on camera 1 shots within our representative test data set, our system outperforms competing state-of-the-art systems.
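A minimal sketch of the MTDI construction as the abstract describes it: pairwise chi-square distances between frame histograms, computed only within a floating window. The toy histograms, bin count and window size are assumptions; a hard cut shows up as a block boundary in the resulting matrix, which is what the boundary detectors look for.

import numpy as np

def chi2(p, q, eps=1e-12):
    # Chi-square distance between two normalized histograms
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

def mtdi(hists, W):
    # Pairwise chi-square distances within a floating window of W frames
    n = len(hists)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, min(i + W, n)):
            D[i, j] = D[j, i] = chi2(hists[i], hists[j])
    return D

# Toy frame histograms with a hard cut at frame 50
rng = np.random.default_rng(3)
a, b = rng.dirichlet(np.ones(32)), rng.dirichlet(np.ones(32))
hists = [a + 0.01 * rng.random(32) for _ in range(50)] + \
        [b + 0.01 * rng.random(32) for _ in range(50)]
hists = [h / h.sum() for h in hists]
D = mtdi(hists, W=10)    # block boundary visible around index 50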
Steganalysis with cover-source mismatch and a small learning database. Pasquet, J.; Bringay, S.; and Chaumont, M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2425-2429, Sep. 2014.
@InProceedings{6952885,\n  author = {J. Pasquet and S. Bringay and M. Chaumont},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Steganalysis with cover-source mismatch and a small learning database},\n  year = {2014},\n  pages = {2425-2429},\n  abstract = {Many different hypotheses may be chosen for modeling a steganography/steganalysis problem. In this paper, we look closer into the case in which Eve, the steganalyst, has partial or erroneous knowledge of the cover distribution. More precisely we suppose that Eve knows the algorithms and the payload size that has been used by Alice, the steganographer, but she ignores the images distribution. In this source-cover mismatch scenario, we demonstrate that an Ensemble Classifier with Features Selection (EC-FS) allows the steganalyst to obtain the best state-of-the-art performances, while requiring 100 times smaller training database compared to the previous state-of-the art approach. Moreover, we propose the islet approach in order to increase the classification performances.},\n  keywords = {database management systems;learning (artificial intelligence);pattern classification;steganography;steganalysis;cover-source mismatch;small learning database;steganography;Eve;cover distribution;images distribution;ensemble classifier with features selection;EC-FS;Databases;Vectors;Training;Support vector machine classification;Security;Forensics;Complexity theory;Steganalysis;Cover-Source Mismatch;Ensemble Classifiers with Post-Selection of Features;Ensemble Average Perceptron;Clustering},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569913723.pdf},\n}\n\n
Many different hypotheses may be chosen for modeling a steganography/steganalysis problem. In this paper, we look closer at the case in which Eve, the steganalyst, has partial or erroneous knowledge of the cover distribution. More precisely, we suppose that Eve knows the algorithms and the payload size used by Alice, the steganographer, but does not know the image distribution. In this cover-source mismatch scenario, we demonstrate that an Ensemble Classifier with Features Selection (EC-FS) allows the steganalyst to obtain the best state-of-the-art performance, while requiring a training database 100 times smaller than the previous state-of-the-art approach. Moreover, we propose the islet approach in order to increase the classification performance.
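The ensemble-classifier idea can be sketched as a bank of Fisher linear discriminants, each trained on a random feature subset and combined by majority vote. This captures the spirit of an ensemble classifier with feature selection, not the paper's exact post-selection procedure; data, dimensions and the embedding shift are toy assumptions.

import numpy as np

rng = np.random.default_rng(4)

def fld(X0, X1):
    # Fisher linear discriminant: weight vector and decision threshold
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) + np.cov(X1.T) + 1e-6 * np.eye(X0.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)
    return w, w @ (m0 + m1) / 2

d, n = 200, 500                                  # feature dim, samples per class
X0 = rng.standard_normal((n, d))                 # cover features (toy)
X1 = rng.standard_normal((n, d)) + 0.15          # stego features: small shift

members = []
for _ in range(51):                              # odd count gives a clean majority
    idx = rng.choice(d, size=30, replace=False)  # random feature subset
    w, b = fld(X0[:, idx], X1[:, idx])
    members.append((idx, w, b))

def predict(X):
    votes = np.stack([(X[:, idx] @ w > b).astype(int) for idx, w, b in members])
    return (votes.mean(0) > 0.5).astype(int)

print(predict(X0).mean(), predict(X1).mean())    # ideally near 0 and near 1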
Universal image steganalysis based on GARCH model. Akhavan, S.; Akhaee, M. A.; and Sarreshtedari, S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2430-2434, Sep. 2014.
@InProceedings{6952886,\n  author = {S. Akhavan and M. A. Akhaee and S. Sarreshtedari},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Universal image steganalysis based on GARCH model},\n  year = {2014},\n  pages = {2430-2434},\n  abstract = {This paper introduces a new universal steganalysis framework. The required image features are extracted based on the generalized autoregressive conditional heteroskedasticity (GARCH) model and higher-order statistics of the images. The GARCH features are extracted from non-approximate wavelet coefficients. Besides, the second and third order statistics are exploited to develop features very sensitive to minor changes in natural images. The experimental results demonstrate that the proposed feature-based steganalysis framework outperforms state of the art methods while running on the same order of features.},\n  keywords = {autoregressive processes;feature extraction;higher order statistics;image processing;steganography;feature extraction;higher order statistics;generalized autoregressive conditional heteroskedasticity model;image features;GARCH model;universal image steganalysis;Feature extraction;Discrete cosine transforms;Correlation;Wavelet transforms;Higher order statistics;Training;GARCH Model;Steganalysis;Higher Order Statistics},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925101.pdf},\n}\n\n
This paper introduces a new universal steganalysis framework. The required image features are extracted based on the generalized autoregressive conditional heteroskedasticity (GARCH) model and higher-order statistics of the images. The GARCH features are extracted from non-approximate wavelet coefficients. In addition, second- and third-order statistics are exploited to develop features that are very sensitive to minor changes in natural images. The experimental results demonstrate that the proposed feature-based steganalysis framework outperforms state-of-the-art methods while running on the same order of features.
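A GARCH(1,1) feature extractor in the spirit of the abstract: the conditional-variance recursion s2[t] = omega + alpha * e[t-1]^2 + beta * s2[t-1] is fitted to a coefficient sequence by Gaussian quasi-maximum likelihood. The simulated input and the optimizer are assumptions; the paper fits such models to non-approximate wavelet coefficients.

import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, e):
    # Gaussian negative log-likelihood of a GARCH(1,1) model
    w, a, b = params
    if w <= 0 or a < 0 or b < 0 or a + b >= 1:
        return np.inf                    # positivity / stationarity constraints
    s2 = np.empty_like(e)
    s2[0] = e.var()
    for t in range(1, e.size):
        s2[t] = w + a * e[t - 1] ** 2 + b * s2[t - 1]
    return 0.5 * np.sum(np.log(s2) + e ** 2 / s2)

# Toy 'wavelet coefficient' sequence with volatility clustering
rng = np.random.default_rng(5)
T, w0, a0, b0 = 2000, 0.1, 0.1, 0.8
e, s2 = np.zeros(T), w0 / (1 - a0 - b0)
for t in range(T):
    e[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = w0 + a0 * e[t] ** 2 + b0 * s2

res = minimize(garch11_nll, x0=[0.05, 0.05, 0.9], args=(e,), method="Nelder-Mead")
print(res.x)    # fitted (omega, alpha, beta): the GARCH features for this subband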
A frontal view gait recognition based on 3D imaging using a time of flight camera. Afendi, T.; Kurugollu, F.; Crookes, D.; and Bouridane, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2435-2439, Sep. 2014.
@InProceedings{6952887,\n  author = {T. Afendi and F. Kurugollu and D. Crookes and A. Bouridane},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {A frontal view gait recognition based on 3D imaging using a time of flight camera},\n  year = {2014},\n  pages = {2435-2439},\n  abstract = {Studies have been carried out to recognize individuals from a frontal view using their gait patterns. In previous work, gait sequences were captured using either single or stereo RGB camera systems or the Kinect 1.0 camera system. In this research, we used a new frontal view gait recognition method using a laser based Time of Flight (ToF) camera. In addition to the new gait data set, other contributions include enhancement of the silhouette segmentation, gait cycle estimation and gait image representations. We propose four new gait image representations namely Gait Depth Energy Image (GDE), Partial GDE (PGDE), Discrete Cosine Transform GDE (DGDE) and Partial DGDE (PDGDE). The experimental results show that all the proposed gait image representations produce better accuracy than the previous methods. In addition, we have also developed Fusion GDEs (FGDEs) which achieve better overall accuracy and outperform the previous methods.},\n  keywords = {discrete cosine transforms;gait analysis;image colour analysis;image enhancement;image representation;image segmentation;stereo image processing;frontal view gait recognition method;3D imaging;time of flight camera;stereo RGB camera systems;ToF camera;silhouette segmentation;gait cycle estimation;gait image representations;Gait Depth Energy Image;GDE;gait image enhancement;partial GDE;discrete cosine transform GDE;partial DGDE;PDGDE;Cameras;Gait recognition;Accuracy;Image representation;Three-dimensional displays;Legged locomotion;Gait recognition;Gait data set;Time of Flight;Biometrics},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925403.pdf},\n}\n\n
Studies have been carried out to recognize individuals from a frontal view using their gait patterns. In previous work, gait sequences were captured using either single or stereo RGB camera systems or the Kinect 1.0 camera system. In this research, we propose a new frontal view gait recognition method using a laser-based Time of Flight (ToF) camera. In addition to the new gait data set, other contributions include enhancements to silhouette segmentation, gait cycle estimation and gait image representation. We propose four new gait image representations, namely the Gait Depth Energy Image (GDE), Partial GDE (PGDE), Discrete Cosine Transform GDE (DGDE) and Partial DGDE (PDGDE). The experimental results show that all the proposed gait image representations produce better accuracy than previous methods. In addition, we have also developed Fusion GDEs (FGDEs), which achieve better overall accuracy and outperform the previous methods.
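Two of the proposed representations are simple enough to sketch under stated assumptions: the GDE is taken here as the temporal mean of aligned depth silhouettes over one gait cycle, and the DGDE keeps a low-frequency DCT block of it. Segmentation, alignment and the block size are assumed, not taken from the paper.

import numpy as np
from scipy.fft import dctn

def gde(depth_silhouettes):
    # Gait Depth Energy Image: mean of aligned depth silhouettes over one cycle
    return np.mean(depth_silhouettes, axis=0)

def dgde(g, keep=16):
    # DCT-domain GDE: a low-frequency block as a compact descriptor
    return dctn(g, norm="ortho")[:keep, :keep].ravel()

rng = np.random.default_rng(6)
cycle = rng.random((30, 64, 48))   # 30 toy depth frames, already segmented/aligned
g = gde(cycle)
feat = dgde(g)
print(g.shape, feat.shape)         # (64, 48) (256,)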
Video steganalysis of multiplicative spread spectrum steganography. Zarmehi, N.; and Akhaee, M. A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2440-2444, Sep. 2014.
@InProceedings{6952888,\n  author = {N. Zarmehi and M. A. Akhaee},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Video steganalysis of multiplicative spread spectrum steganography},\n  year = {2014},\n  pages = {2440-2444},\n  abstract = {In this paper we propose a video steganalysis method toward multiplicative spread spectrum embedding. We use the redundancies of the video frames to estimate the cover frame and after extracting some features from the video frames and the estimated ones, the received video is classified as suspicious or not suspicious. In the case that the video declared suspicious, we estimate the hidden message and the gain factor used in the embedder. We also propose a new method for estimating the gain factor in multiplicative spread spectrum embedding. Using the estimated hidden message and gain factor, we are able to reconstruct the original video. Simulation results verify the success of our steganalysis method.},\n  keywords = {feature extraction;image classification;spread spectrum communication;steganography;video signal processing;video steganalysis method;multiplicative spread spectrum steganography;multiplicative spread spectrum embedding;video frames;cover frame estimation;feature extraction;video classification;hidden message estimation;gain factor;Feature extraction;Estimation;Video sequences;Multimedia communication;Conferences;Support vector machines;Watermarking;Video steganalysis;spread spectrum steganography;frame estimation},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925437.pdf},\n}\n\n
In this paper we propose a video steganalysis method targeting multiplicative spread spectrum embedding. We use the redundancies of the video frames to estimate the cover frame; after extracting features from the received frames and the estimated ones, the received video is classified as suspicious or not suspicious. In the case that the video is declared suspicious, we estimate the hidden message and the gain factor used in the embedder. We also propose a new method for estimating the gain factor in multiplicative spread spectrum embedding. Using the estimated hidden message and gain factor, we are able to reconstruct the original video. Simulation results verify the success of our steganalysis method.
On the application of AAM-based systems in face recognition. Khan, M. A.; Xydeas, C.; and Ahmed, H. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2445-2449, Sep. 2014.
@InProceedings{6952889,\n  author = {M. A. Khan and C. Xydeas and H. Ahmed},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {On the application of AAM-based systems in face recognition},\n  year = {2014},\n  pages = {2445-2449},\n  abstract = {The presence of significant levels of signal variability in face-portrait type of images, due to differences in illumination, pose and expression, is generally been accepted as having an adverse effect on the overall performance of i) face modeling and synthesis (FM/S) and also on ii) face recognition (FR) systems. Furthermore, the dependency on such input data variability and thus the sensitivity, with respect to face synthesis performance, of Active Appearance Modeling (AAM), is also well understood. As a result, the Multi-Model Active Appearance Model (MM-AAM) technique [1] has been developed and shown to possess a superior face synthesis performance than AAM. This paper considers the applicability in FR applications of both AAM and MM-AAM face modeling and synthesis approaches. Thus, a MM-AAM methodology has been devised that is tailored to operate successfully within the context of face recognition. Experimental results show FR-MM-AAM to be significantly superior to conventional FR-AAM.},\n  keywords = {face recognition;AAM-based systems;face recognition;multimodel active appearance model;MM-AAM face modeling;MM-AAM face synthesis;FR-MM-AAM;Face;Active appearance model;Shape;Training;Face recognition;Principal component analysis;System performance;Face Recognition;Multi-Model Active Appearance Model},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925851.pdf},\n}\n\n
The presence of significant levels of signal variability in face-portrait images, due to differences in illumination, pose and expression, is generally accepted as having an adverse effect on the overall performance of i) face modeling and synthesis (FM/S) and ii) face recognition (FR) systems. Furthermore, the dependency of Active Appearance Modeling (AAM) on such input data variability, and thus its sensitivity with respect to face synthesis performance, is also well understood. As a result, the Multi-Model Active Appearance Model (MM-AAM) technique [1] has been developed and shown to achieve better face synthesis performance than AAM. This paper considers the applicability of both AAM and MM-AAM face modeling and synthesis approaches in FR applications. An MM-AAM methodology is devised that is tailored to operate successfully within the context of face recognition. Experimental results show FR-MM-AAM to be significantly superior to conventional FR-AAM.
Textures and reversible watermarking. Dragoi, I.; and Coltuc, D. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2450-2454, Sep. 2014.
@InProceedings{6952890,\n  author = {I. Dragoi and D. Coltuc},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Textures and reversible watermarking},\n  year = {2014},\n  pages = {2450-2454},\n  abstract = {This paper investigates the effectiveness of prediction-error expansion reversible watermarking on textured images. Five well performing reversible watermarking schemes are considered, namely the schemes based on the rhombus average, the adaptive rhombus predictor, the full context predictor as a weighted average between the rhombus and the four diagonal neighbors, the global least-squares predictor and its recently proposed local counterpart. The textured images are analyzed and the optimal prediction scheme for each texture type is determined. The local least-squares prediction based scheme provides the best overall results.},\n  keywords = {image texture;image watermarking;least squares approximations;prediction-error expansion reversible watermarking;textured images;adaptive rhombus predictor;global least-squares predictor;optimal prediction scheme;local least-squares prediction based scheme;Context;Watermarking;PSNR;Gain;Fabrics;Plastics;Correlation;reversible watermarking;textures;adaptive prediction;least square predictors},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924633.pdf},\n}\n\n
This paper investigates the effectiveness of prediction-error expansion reversible watermarking on textured images. Five well-performing reversible watermarking schemes are considered, namely the schemes based on the rhombus average, the adaptive rhombus predictor, the full-context predictor (a weighted average between the rhombus and the four diagonal neighbors), the global least-squares predictor, and its recently proposed local counterpart. The textured images are analyzed and the optimal prediction scheme for each texture type is determined. The local least-squares prediction based scheme provides the best overall results.
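The first of the five compared schemes, rhombus-average prediction-error expansion, fits in a short sketch: each pixel of one checkerboard set is predicted from its four horizontal/vertical neighbours, and its expanded prediction error carries one payload bit. Overflow handling and the location map are omitted, so this is illustrative only; on textured images the errors grow, which is exactly the effect the paper measures.

import numpy as np

def rhombus_pred(img, i, j):
    # Average of the four horizontal/vertical neighbours
    return (int(img[i-1, j]) + int(img[i+1, j]) + int(img[i, j-1]) + int(img[i, j+1])) // 4

def pee_embed(img, bits):
    # Prediction-error expansion on one checkerboard set (no overflow map: a sketch)
    out, k = img.astype(np.int32), 0
    for i in range(1, img.shape[0] - 1):
        for j in range(1 + i % 2, img.shape[1] - 1, 2):   # predictor pixels untouched
            if k >= len(bits):
                return out, k
            p = rhombus_pred(out, i, j)
            e = int(out[i, j]) - p
            out[i, j] = p + 2 * e + bits[k]               # expanded error + one bit
            k += 1
    return out, k

rng = np.random.default_rng(7)
img = rng.integers(60, 190, (64, 64))
marked, n_embedded = pee_embed(img, rng.integers(0, 2, 500).tolist())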
Three stages prediction-error expansion reversible watermarking. Nedelcu, T.; Iordache, R.; and Coltuc, D. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2455-2459, Sep. 2014.
@InProceedings{6952891,\n  author = {T. Nedelcu and R. Iordache and D. Coltuc},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Three stages prediction-error expansion reversible watermarking},\n  year = {2014},\n  pages = {2455-2459},\n  abstract = {This paper proposes a three-stages difference expansion reversible watermarking scheme. In the first stage, a quarter of the pixels are estimated by using the median of the eight original neighbors of the 3×3 window. In the second stage, a quarter of the pixels are estimated as the average on the rhombus of the four horizontal and vertical original pixels. Finally, the remaining pixels are estimated on the rhombus context, using the modified pixels computed in the two previous stages. The experimental results show that the proposed scheme can provide slightly improved results than the classical two-stages reversible watermarking based on the rhombus context.},\n  keywords = {image watermarking;three stage prediction-error expansion reversible watermarking;three-stage difference expansion reversible watermarking scheme;image reversible watermarking;Watermarking;Context;Estimation;Equations;Educational institutions;Image edge detection;Europe;reversible watermarking;prediction-error expansion;three-stages embedding},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924977.pdf},\n}\n\n
This paper proposes a three-stage difference expansion reversible watermarking scheme. In the first stage, a quarter of the pixels are estimated using the median of the eight original neighbors in the 3×3 window. In the second stage, a quarter of the pixels are estimated as the average on the rhombus of the four horizontal and vertical original pixels. Finally, the remaining pixels are estimated on the rhombus context, using the modified pixels computed in the two previous stages. The experimental results show that the proposed scheme provides slightly better results than the classical two-stage reversible watermarking based on the rhombus context.
Modified fuzzy c-means clustering for automatic tongue base tumour extraction from MRI data. Doshi, T.; Soraghan, J.; Grose, D.; MacKenzie, K.; and Petropoulakis, L. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2460-2464, Sep. 2014.
@InProceedings{6952892,\n  author = {T. Doshi and J. Soraghan and D. Grose and K. MacKenzie and L. Petropoulakis},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Modified fuzzy c-means clustering for automatic tongue base tumour extraction from MRI data},\n  year = {2014},\n  pages = {2460-2464},\n  abstract = {Magnetic resonance imaging (MRI) is a widely used imaging modality to extract tumour regions to assist in radiotherapy and surgery planning. Extraction of a tongue base tumour from MRI is challenging due to variability in its shape, size, intensities and fuzzy boundaries. This paper presents a new automatic algorithm that is shown to be able to extract tongue base tumour from gadolinium-enhanced T1-weighted (T1+Gd) MRI slices. In this algorithm, knowledge of tumour location is added to the objective function of standard fuzzy c-means (FCM) to extract the tumour region. Experimental results on 9 real MRI slices demonstrate that there is good agreement between manual and automatic extraction results with dice similarity coefficient (DSC) of 0.77±0.08.},\n  keywords = {biological organs;biomedical MRI;medical image processing;pattern clustering;radiation therapy;tumours;modified fuzzy C-means clustering;automatic tongue base tumour extraction;MRI data;magnetic resonance imaging;imaging modality;radiotherapy;surgery planning;fuzzy boundaries;gadolinium-enhanced T1-weighted MRI slices;T1-Gd MRI slices;tumour location;standard fuzzy c-means function;dice similarity coefficient;Tumors;Magnetic resonance imaging;Tongue;Manuals;Standards;Image segmentation;Clustering algorithms;automatic tumour extraction;fuzzy c-means;Hessian analysis;MRI;throat detection},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925577.pdf},\n}\n\n
Magnetic resonance imaging (MRI) is a widely used imaging modality to extract tumour regions to assist in radiotherapy and surgery planning. Extraction of a tongue base tumour from MRI is challenging due to variability in its shape, size, intensities and fuzzy boundaries. This paper presents a new automatic algorithm that is shown to be able to extract tongue base tumour from gadolinium-enhanced T1-weighted (T1+Gd) MRI slices. In this algorithm, knowledge of tumour location is added to the objective function of standard fuzzy c-means (FCM) to extract the tumour region. Experimental results on 9 real MRI slices demonstrate that there is good agreement between manual and automatic extraction results with dice similarity coefficient (DSC) of 0.77±0.08.
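One plausible way location knowledge can enter the FCM objective (a guess at the mechanism for illustration, not the paper's exact formulation) is to scale the distances by a per-pixel, per-cluster prior while keeping the standard membership and centroid updates:

import numpy as np

def fcm_spatial(x, prior, C=2, m=2.0, iters=50):
    # Fuzzy c-means on intensities x; distances divided by a location prior,
    # so pixels near the expected tumour site favour the tumour cluster.
    rng = np.random.default_rng(8)
    v = rng.choice(x, C)                                   # initial centroids
    for _ in range(iters):
        d2 = (x[None, :] - v[:, None]) ** 2 / prior + 1e-9
        ratio = (d2[:, None, :] / d2[None, :, :]) ** (1.0 / (m - 1))
        u = 1.0 / ratio.sum(axis=1)                        # membership update
        um = u ** m
        v = (um @ x) / um.sum(axis=1)                      # centroid update
    return u, v

rng = np.random.default_rng(9)
x = np.concatenate([rng.normal(0.3, 0.05, 400), rng.normal(0.7, 0.05, 100)])
prior = np.ones((2, x.size))
prior[1, 400:] *= 4.0        # cluster 1 favoured where the tumour is expected
u, v = fcm_spatial(x, prior)
print(np.round(v, 2))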
Accelerated unsupervised filtering for the smoothing of road pavement surface imagery. Oliveira, H.; Caeiro, J.; and Correia, P. L. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2465-2469, Sep. 2014.
@InProceedings{6952933,\n  author = {H. Oliveira and J. Caeiro and P. L. Correia},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Accelerated unsupervised filtering for the smoothing of road pavement surface imagery},\n  year = {2014},\n  pages = {2465-2469},\n  abstract = {An accelerated formulation of the Unsupervised Information-theoretic Adaptive Image Filtering (UINTA) method is presented. It is based on a parallel implementation of the algorithm, using the Open Computing Language (OpenCL), while maintaining the precision and efficiency of the original method, which are briefly discussed focusing on the respective computational complexities. The experimental computational efficiency is compared with the one obtained using the standard implementation, highlighting the significant improvement of computational times achieved with the proposed one. This new implementation is tested for the smoothing of road pavement surface images, for which the original method had been previously applied, showing the clear advantage of its use.},\n  keywords = {filtering theory;image processing;traffic engineering computing;unsupervised learning;accelerated unsupervised filtering;road pavement surface imagery;unsupervised information theoretic adaptive image filtering;UINTA method;parallel implementation;open computing language;OpenCL;road pavement surface images;Roads;Estimation;Entropy;Smoothing methods;Kernel;Filtering;Graphics processing units;Road crack detection;image filtering;density estimation;computational complexity;entropy reduction},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926753.pdf},\n}\n\n
An accelerated formulation of the Unsupervised Information-theoretic Adaptive Image Filtering (UINTA) method is presented. It is based on a parallel implementation of the algorithm using the Open Computing Language (OpenCL), while maintaining the precision and efficiency of the original method, which are briefly discussed with a focus on the respective computational complexities. The experimental computational efficiency is compared with that of the standard implementation, highlighting the significant reduction in computation time achieved by the proposed one. The new implementation is tested on the smoothing of road pavement surface images, to which the original method had previously been applied, showing the clear advantage of its use.
Mapping sounds onto images using binaural spectrograms. Deleforge, A.; Drouard, V.; Girin, L.; and Horaud, R. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2470-2474, Sep. 2014.
@InProceedings{6952934,\n  author = {A. Deleforge and V. Drouard and L. Girin and R. Horaud},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Mapping sounds onto images using binaural spectrograms},\n  year = {2014},\n  pages = {2470-2474},\n  abstract = {We propose a novel method for mapping sound spectrograms onto images and thus enabling alignment between auditory and visual features for subsequent multimodal processing. We suggest a supervised learning approach to this audio-visual fusion problem, on the following grounds. Firstly, we use a Gaussian mixture of locally-linear regressions to learn a mapping from image locations to binaural spectrograms. Secondly, we derive a closed-form expression for the conditional posterior probability of an image location, given both an observed spectrogram, emitted from an unknown source direction, and the mapping parameters that were previously learnt. Prominently, the proposed method is able to deal with completely different spectrograms for training and for alignment. While fixed-length wide-spectrum sounds are used for learning, thus fully and robustly estimating the regression, variable-length sparse-spectrum sounds, e.g., speech, are used for alignment. The proposed method successfully extracts the image location of speech utterances in realistic reverberant-room scenarios.},\n  keywords = {Gaussian processes;image processing;learning (artificial intelligence);mixture models;probability;speech processing;binaural spectrograms;sound mapping;auditory features;visual features;subsequent multimodal processing;supervised learning approach;audio-visual fusion problem;Gaussian mixture;locally-linear regressions;image locations;closed-form expression;conditional posterior probability;fixed-length wide-spectrum sounds;variable-length sparse-spectrum sounds;speech utterances;realistic reverberant-room scenarios;Spectrogram;Speech;Acoustics;Training;Visualization;Vectors;Speech processing},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923293.pdf},\n}\n\n
We propose a novel method for mapping sound spectrograms onto images and thus enabling alignment between auditory and visual features for subsequent multimodal processing. We suggest a supervised learning approach to this audio-visual fusion problem, on the following grounds. Firstly, we use a Gaussian mixture of locally-linear regressions to learn a mapping from image locations to binaural spectrograms. Secondly, we derive a closed-form expression for the conditional posterior probability of an image location, given both an observed spectrogram, emitted from an unknown source direction, and the mapping parameters that were previously learnt. Prominently, the proposed method is able to deal with completely different spectrograms for training and for alignment. While fixed-length wide-spectrum sounds are used for learning, thus fully and robustly estimating the regression, variable-length sparse-spectrum sounds, e.g., speech, are used for alignment. The proposed method successfully extracts the image location of speech utterances in realistic reverberant-room scenarios.
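A toy version of the inverse-mapping step: given locally-affine maps from (here one-dimensional) image locations to spectrogram vectors, the posterior over a grid of candidate locations is evaluated for an observed spectrogram. Isotropic noise, uniform priors and the hard region partition are simplifications of the paper's Gaussian-mixture formulation.

import numpy as np

rng = np.random.default_rng(10)
K, dy, sig = 4, 8, 0.1                       # regions, spectrogram dim, noise level
A = rng.standard_normal((K, dy, 1))          # local affine maps (assumed learnt)
b = rng.standard_normal((K, dy))
edges = np.linspace(-1, 1, K + 1)            # each region owns a slice of locations

def forward(x):
    k = min(int(np.searchsorted(edges, x, side="right")) - 1, K - 1)
    return A[k] @ np.atleast_1d(x) + b[k]

x_true = 0.35                                # unknown source location
y_obs = forward(x_true) + sig * rng.standard_normal(dy)

grid = np.linspace(-1, 1, 401)
loglik = np.array([-np.sum((y_obs - forward(x)) ** 2) / (2 * sig ** 2) for x in grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()
print(grid[post.argmax()])                   # MAP location estimate, near x_true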
Bio-mechanical characterization of voice for smoking detection. Ben Jebara, S. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2475-2479, Sep. 2014.
@InProceedings{6952935,\n  author = {S. {Ben Jebara}},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Bio-mechanical characterization of voice for smoking detection},\n  year = {2014},\n  pages = {2475-2479},\n  abstract = {The purpose of this work is to discriminate between smoker and non-smoker speakers by analyzing their voice. In fact, the vocal folds, the main organ responsible of producing voice, is damaged by smoke so that its structure and its vibration are altered. Some bio-mechanical features, describing vocals folds behavior and status are used. They are based on the two-mass model which characterizes vocal folds by the mass, the stiffness and the losses of their cover and body parts. Bio-mechanical features of smokers and non-smokers are analyzed and compared to select relevant features permitting to discriminate between the two categories of speakers. The Quadratic Discriminant Analysis is used as a tool of classification and shows a relatively good rate of detection of smokers.},\n  keywords = {biomechanics;elasticity;feature selection;physiological models;speech;voice;smoking detection;vocal folds;organ;vibration;biomechanical feature characterization;two-mass model;stiffness;quadratic discriminant analysis;Correlation;Feature extraction;Biomechanics;Error analysis;Biological system modeling;Vibrations;Atmospheric modeling;Smoking detection;voice analysis;bio-mechanical features},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569921959.pdf},\n}\n\n
The purpose of this work is to discriminate between smoker and non-smoker speakers by analyzing their voice. The vocal folds, the main organs responsible for producing voice, are damaged by smoke, so that their structure and vibration are altered. Bio-mechanical features describing vocal fold behavior and status are used. They are based on the two-mass model, which characterizes the vocal folds by the mass, the stiffness and the losses of their cover and body parts. Bio-mechanical features of smokers and non-smokers are analyzed and compared to select relevant features that permit discrimination between the two categories of speakers. Quadratic Discriminant Analysis is used as the classification tool and shows a relatively good smoker detection rate.
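Quadratic Discriminant Analysis itself is compact enough to sketch from scratch: fit one Gaussian per class and score by log-density plus log-prior. The toy feature vectors below are assumptions standing in for the paper's two-mass-model features.

import numpy as np

def qda_fit(X, y):
    # Per-class Gaussian fit: mean, inverse covariance, log-det, log-prior
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        S = np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])
        params[c] = (Xc.mean(0), np.linalg.inv(S),
                     np.linalg.slogdet(S)[1], np.log(len(Xc) / len(X)))
    return params

def qda_predict(params, X):
    def score(x, p):
        mu, Sinv, logdet, logpi = p
        d = x - mu
        return -0.5 * (d @ Sinv @ d + logdet) + logpi
    classes = list(params)
    return np.array([classes[np.argmax([score(x, params[c]) for c in classes])]
                     for x in X])

rng = np.random.default_rng(11)
X = np.vstack([rng.normal(0.0, 1.0, (80, 3)),      # class 0: non-smokers (toy)
               rng.normal(0.8, 1.4, (80, 3))])     # class 1: smokers (toy)
y = np.repeat([0, 1], 80)
print((qda_predict(qda_fit(X, y), X) == y).mean()) # training accuracy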
Detection of faulty glucose measurements using texture analysis. Demitri, N.; and Zoubir, A. M. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2480-2484, Sep. 2014.
@InProceedings{6952936,\n  author = {N. Demitri and A. M. Zoubir},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Detection of faulty glucose measurements using texture analysis},\n  year = {2014},\n  pages = {2480-2484},\n  abstract = {Faults occurring in hand-held blood glucose measurements can be critical to patient self-monitoring, as they can lead to unnecessary changes of treatment. We propose a method to detect faulty glucose measurement frames in devices that use a camera to estimate the glucose concentration. We assert that texture, as opposed to intensity, is able to differentiate between correct and false glucose measurements, regardless of the given blood sample. The co-occurrence based textural features energy, maximum probability and correlation prove to be suitable for our detection application. We calculate kinetic feature curves and use a hypothesis testing approach to detect faulty measurements. Our method is able to detect a faulty measurement after less than one third of the time, which would usually be needed. The validation of our method is done using a real data set of blood glucose measurements obtained using different glucose concentrations and containing both correct and faulty measurements.},\n  keywords = {biomedical measurement;blood;diseases;image sensors;patient diagnosis;patient monitoring;sugar;faulty glucose measurement detection;texture analysis;hand-held blood glucose measurements;patient self-monitoring;camera;glucose concentration;blood sample;cooccurrence based textural features energy;maximum probability;correlation prove;detection application;kinetic feature curves;blood glucose measurements;Sugar;Chemicals;Blood;Convergence;Biomedical measurement;Feature extraction;Strips;GLCM-based features;texture analysis;anomaly detection;blood glucose measurement},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923091.pdf},\n}\n\n
Faults occurring in hand-held blood glucose measurements can be critical for patient self-monitoring, as they can lead to unnecessary changes of treatment. We propose a method to detect faulty glucose measurement frames in devices that use a camera to estimate the glucose concentration. We assert that texture, as opposed to intensity, is able to differentiate between correct and false glucose measurements, regardless of the given blood sample. The co-occurrence-based textural features energy, maximum probability and correlation prove to be suitable for our detection application. We calculate kinetic feature curves and use a hypothesis testing approach to detect faulty measurements. Our method is able to detect a faulty measurement in less than one third of the time that would usually be needed. The validation of our method is done using a real data set of blood glucose measurements obtained with different glucose concentrations and containing both correct and faulty measurements.
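The three co-occurrence features named in the abstract (energy, maximum probability, correlation) can be computed from a single-offset grey-level co-occurrence matrix in plain numpy; the quantization depth and the (0, 1) offset are assumptions.

import numpy as np

def glcm_features(img, levels=16):
    # Energy, max probability and correlation from a horizontal co-occurrence matrix
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)   # offset (0, 1)
    P /= P.sum()
    i, j = np.indices(P.shape)
    mi, mj = (i * P).sum(), (j * P).sum()
    si = np.sqrt(((i - mi) ** 2 * P).sum())
    sj = np.sqrt(((j - mj) ** 2 * P).sum())
    corr = ((i - mi) * (j - mj) * P).sum() / (si * sj + 1e-12)
    return {"energy": (P ** 2).sum(), "max_prob": P.max(), "correlation": corr}

rng = np.random.default_rng(12)
frame = rng.integers(0, 255, (60, 60))   # stand-in for one measurement frame
print(glcm_features(frame))
# Tracking these features frame by frame gives the kinetic curves on which the
# hypothesis test for faulty measurements operates.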
Bi-CoPaM ensemble clustering application to five Escherichia coli bacterial datasets. Abu-Jamous, B.; Fa, R.; Roberts, D. J.; and Nandi, A. K. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2485-2489, Sep. 2014.
@InProceedings{6952937,\n  author = {B. Abu-Jamous and R. Fa and D. J. Roberts and A. K. Nandi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Bi-CoPaM ensemble clustering application to five Escherichia coli bacterial datasets},\n  year = {2014},\n  pages = {2485-2489},\n  abstract = {Bi-CoPaM ensemble clustering has the ability to mine a set of microarray datasets collectively to identify the subsets of genes consistently co-expressed in all of them. It also has the capability of considering the entire gene set without pre-filtering as it implicitly filters out less interesting genes. While it showed success in revealing new insights into the biology of yeast, it has never been applied to bacteria. In this study, we apply Bi-CoPaM to five bacterial datasets, identifying two clusters of genes as the most consistently co-expressed. Strikingly, their average profiles are consistently negatively correlated in most of the datasets. Thus, we hypothesise that they are regulated by a common biological machinery, and that their genes with unknown biological processes may be participating in the same processes in which most of their genes known to participate. Additionally, our results demonstrate the applicability of Bi-CoPaM to a wide range of species.},\n  keywords = {genomics;lab-on-a-chip;biological processes;microarray datasets;Escherichia coli bacterial datasets;Microorganisms;Genomics;Bioinformatics;Biological processes;Gene expression;Filtering;Bi-CoPaM;microarray data analysis;gene clustering;Escherichia coli bacteria},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569923243.pdf},\n}\n\n
Bi-CoPaM ensemble clustering has the ability to mine a set of microarray datasets collectively to identify the subsets of genes consistently co-expressed in all of them. It also has the capability of considering the entire gene set without pre-filtering, as it implicitly filters out less interesting genes. While it has shown success in revealing new insights into the biology of yeast, it has never been applied to bacteria. In this study, we apply Bi-CoPaM to five bacterial datasets, identifying two clusters of genes as the most consistently co-expressed. Strikingly, their average profiles are consistently negatively correlated in most of the datasets. Thus, we hypothesise that they are regulated by a common biological machinery, and that their genes with unknown biological processes may be participating in the same processes in which most of their genes are known to participate. Additionally, our results demonstrate the applicability of Bi-CoPaM to a wide range of species.
Generation of stimulus features for analysis of FMRI during natural auditory experiences. Tsatsishvili, V.; Cong, F.; Ristaniemi, T.; Toiviainen, P.; Alluri, V.; Brattico, E.; and Nandi, A. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2490-2494, Sep. 2014.
@InProceedings{6952938,\n  author = {V. Tsatsishvili and F. Cong and T. Ristaniemi and P. Toiviainen and V. Alluri and E. Brattico and A. Nandi},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Generation of stimulus features for analysis of FMRI during natural auditory experiences},\n  year = {2014},\n  pages = {2490-2494},\n  abstract = {In contrast to block and event-related designs for fMRI experiments, it becomes much more difficult to extract events of interest in the complex continuous stimulus for finding corresponding blood-oxygen-level dependent (BOLD) responses. Recently, in a free music listening fMRI experiment, acoustic features of the naturalistic music stimulus were first extracted, and then principal component analysis (PCA) was applied to select the features of interest acting as the stimulus sequences. For feature generation, kernel PCA has shown its superiority over PCA in various applications, since it can implicitly exploit nonlinear relationship among features and such relationship seems to exist generally. Here, we applied kernel PCA to select the musical features and obtained an interesting new musical feature in contrast to PCA features. With the new feature, we found similar fMRI results compared with those by PCA features, indicating that kernel PCA assists to capture more properties of the naturalistic music stimulus.},\n  keywords = {biomedical MRI;blood;medical image processing;principal component analysis;stimulus features;natural auditory experiences;blood-oxygen-level dependent response;BOLD response;free music listening fMRI experiment;acoustic features;principal component analysis;PCA;Principal component analysis;Kernel;Feature extraction;Correlation;Decoding;Brightness;Music;kernel PCA;ICA;Polynomial kernel;naturalistic music;fMRI},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924563.pdf},\n}\n\n
In contrast to block and event-related designs for fMRI experiments, it is much more difficult to extract events of interest from a complex continuous stimulus for finding the corresponding blood-oxygen-level dependent (BOLD) responses. Recently, in a free music listening fMRI experiment, acoustic features of the naturalistic music stimulus were first extracted, and then principal component analysis (PCA) was applied to select the features of interest acting as the stimulus sequences. For feature generation, kernel PCA has shown its superiority over PCA in various applications, since it can implicitly exploit nonlinear relationships among features, and such relationships seem to exist generally. Here, we applied kernel PCA to select the musical features and obtained an interesting new musical feature in contrast to the PCA features. With the new feature, we found similar fMRI results compared with those obtained with PCA features, indicating that kernel PCA helps capture more properties of the naturalistic music stimulus.
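Kernel PCA with a polynomial kernel (the kernel named in this entry's keywords) reduces to an eigendecomposition of the double-centred Gram matrix. A self-contained sketch on toy frame-wise acoustic features:

import numpy as np

def kernel_pca(X, n_comp=2, degree=2):
    # Polynomial-kernel PCA: project training points on leading kernel components
    K = (X @ X.T + 1.0) ** degree
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                   # double-centre the Gram matrix
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_comp]               # leading eigenpairs
    alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    return Kc @ alphas                               # projected training points

rng = np.random.default_rng(13)
F = rng.standard_normal((100, 6))   # toy acoustic feature matrix (frames x features)
Z = kernel_pca(F)                   # candidate nonlinear stimulus components
print(Z.shape)                      # (100, 2)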
DSP in heterogeneous multicore embedded systems - A laboratory experiment. Lifshits, P.; Eilam, A.; Moshe, Y.; and Peleg, N. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2495-2499, Sep. 2014.
@InProceedings{6952939,\n  author = {P. Lifshits and A. Eilam and Y. Moshe and N. Peleg},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {DSp in heterogeneous multicore embedded systems - A laboratory experiment},\n  year = {2014},\n  pages = {2495-2499},\n  abstract = {Undergraduate engineering students who are learning Digital Signal Processing (DSP) are expected to have the ability to implement their theoretical knowledge in various applications soon after graduation. In this paper, we present a laboratory experiment developed for undergraduate students that addresses the challenge of getting them familiar with implementing DSP algorithms in heterogeneous multicore systems. In a top-down approach, the students first gain control of the development environment, and then implement DSP algorithms on a general purpose and on a digital signal processor core. Through the experiment, they get to appreciate the advantages of DSP core architecture in performing signal processing algorithms, and learn methods for timing and data transfer between cores while meeting real-time constraints. In a limited time frame, this hands-on laboratory experiment exposes the students to state-of-the-art multicore development practices and increases their knowledge and interest in DSP and in embedded programming.},\n  keywords = {electrical engineering education;multiprocessing systems;signal processing;digital signal processing algorithm;heterogeneous multicore embedded systems;top-down approach;DSP algorithms;real-time constraints;embedded programming;Digital signal processing;Multicore processing;Real-time systems;Signal processing algorithms;Adaptive filters;Laboratories;Electrical engineering education;heterogeneous multicore processing;digital signal processing;fixed-point arithmetic;BeagleBoard},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569924717.pdf},\n}\n\n
Undergraduate engineering students who are learning Digital Signal Processing (DSP) are expected to have the ability to implement their theoretical knowledge in various applications soon after graduation. In this paper, we present a laboratory experiment developed for undergraduate students that addresses the challenge of getting them familiar with implementing DSP algorithms in heterogeneous multicore systems. In a top-down approach, the students first gain control of the development environment, and then implement DSP algorithms on a general purpose and on a digital signal processor core. Through the experiment, they get to appreciate the advantages of DSP core architecture in performing signal processing algorithms, and learn methods for timing and data transfer between cores while meeting real-time constraints. In a limited time frame, this hands-on laboratory experiment exposes the students to state-of-the-art multicore development practices and increases their knowledge and interest in DSP and in embedded programming.
T-wave alternans detection in ECG using Extended Kalman Filter and dual-rate EKF. Akhbari, M.; Shamsollahi, M. B.; and Jutten, C. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2500-2504, Sep. 2014.
@InProceedings{6952940,\n  author = {M. Akhbari and M. B. Shamsollahi and C. Jutten},\n  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},\n  title = {Twave alternans detection in ecg using Extended Kalman Filter and dualrate EKF},\n  year = {2014},\n  pages = {2500-2504},\n  abstract = {T Wave Alternans (TWA) is considered as an indicator of Sudden Cardiac Death (SCD). In this paper for TWA detection, a method based on a nonlinear dynamic model is presented. For estimating the model parameters, we use an Extended Kalman Filter (EKF). We propose EKF6 and dualrate EKF6 approaches. Dualrate EKF is suitable for modeling the states which are not updated in all time instances. Quantitative and qualitative evaluations of the proposed method have been done on TWA challenge database. We compare our method with that proposed by Sieed et al. in TWA challenge 2008. We also compare our method with our previous proposed approach (EKF25-4obs). Results show that the proposed method can detect peak position and amplitude of T waves in ECG precisely. Mean and standard deviation of estimation error of our method for finding position of T waves do not exceed four samples (8 msec).},\n  keywords = {electrocardiography;Kalman filters;medical signal processing;nonlinear filters;T wave alternans detection;ECG;extended KALMAN filter;dualrate EKF;TWA detection;nonlinear dynamic model;dualrate EKF6 approach;quantitative evaluation;qualitative evaluation;position detection;electrocardiogram;sudden cardiac death;SCD;Electrocardiography;Kalman filters;Mathematical model;Databases;Noise reduction;Equations;Time measurement;Electrocardiogram (ECG);TWave Alternans (TWA);Extended Kalman Filter (EKF);Dualrate EKF},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
T Wave Alternans (TWA) is considered an indicator of Sudden Cardiac Death (SCD). In this paper, a method based on a nonlinear dynamic model is presented for TWA detection. For estimating the model parameters, we use an Extended Kalman Filter (EKF). We propose EKF6 and dual-rate EKF6 approaches. The dual-rate EKF is suitable for modeling states which are not updated at all time instants. Quantitative and qualitative evaluations of the proposed method have been performed on the TWA challenge database. We compare our method with that proposed by Sieed et al. in the 2008 TWA challenge, and also with our previously proposed approach (EKF25-4obs). Results show that the proposed method can precisely detect the peak position and amplitude of T waves in the ECG. The mean and standard deviation of the estimation error of our method for finding the position of T waves do not exceed four samples (8 msec).
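For orientation, a generic EKF predict/update cycle is given below; the paper's EKF6 and dual-rate variants build on a specific ECG dynamic model that is not reproduced here, so the toy scalar system is purely illustrative.

import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    # One predict/update cycle of an Extended Kalman Filter
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy scalar system (not the ECG model): x' = 0.95 x, observed through sin(x)
f = lambda x: 0.95 * x
F_jac = lambda x: np.array([[0.95]])
h = lambda x: np.sin(x)
H_jac = lambda x: np.array([[np.cos(x[0])]])
x, P = np.array([0.5]), np.eye(1)
x, P = ekf_step(x, P, np.array([0.4]), f, F_jac, h, H_jac,
                0.01 * np.eye(1), 0.1 * np.eye(1))
print(x, P)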
A new spontaneous expression database and a study of classification-based expression analysis methods. Aina, S.; Zhou, M.; Chambers, J. A.; and Phan, R. C. -. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 2505-2509, Sep. 2014.
@InProceedings{6952941,
  author = {S. Aina and M. Zhou and J. A. Chambers and R. C. -. Phan},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {A new spontaneous expression database and a study of classification-based expression analysis methods},
  year = {2014},
  pages = {2505-2509},
  abstract = {In this paper we introduce a new spontaneous expression database, under development as an open resource for researchers working in expression analysis. It is particularly targeted at providing a wider range of expression classes than the small number of natural expression databases currently available, so that it can serve as a benchmark for comparative studies. We also present the first comparison between kernel-based Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA), each combined with a Sparse Representation Classifier (SRC), for expression analysis. We highlight the trade-off between performance and computation time, which are critical parameters in emerging systems that must capture the expression of a human, such as a consumer responding to promotional material.},
  keywords = {face recognition;image classification;image representation;principal component analysis;spontaneous expression database;classification-based expression analysis;open resource;expression classes;natural expression databases;kernel-based principal component analysis;PCA;Fisher linear discriminant analysis;FLDA;sparse representation classifier;SRC;Databases;Principal component analysis;Feature extraction;Kernel;Face recognition;Error analysis;Training;Fisher's Discriminant Analysis;Kernel;Principal Component;Sparsity;Spontaneous Expression Classification},
  issn = {2076-1465},
  month = {Sep.},
}
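
The compared pipelines reduce face features with kernel PCA or FLDA before classification. As a rough illustration only, here is a kernel-PCA feature extractor followed by an LDA classifier on random stand-in data, assuming scikit-learn is available; the paper's Sparse Representation Classifier stage and its database are not reproduced.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy stand-in data: rows are vectorized face images, labels are expression classes.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(120, 400)), rng.integers(0, 6, size=120)
X_test = rng.normal(size=(10, 400))

# Kernel PCA feature extraction, then FLDA, loosely echoing the compared components.
feats = KernelPCA(n_components=40, kernel="rbf").fit(X_train)
lda = LinearDiscriminantAnalysis().fit(feats.transform(X_train), y_train)
pred = lda.predict(feats.transform(X_test))
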
@InProceedings{6952942,
  author = {M. Boloursaz and R. Kazemi and F. Behnia and M. A. Akhaee},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Performance improvement of spread spectrum additive data hiding over codec-distorted voice channels},
  year = {2014},
  pages = {2510-2514},
  abstract = {This paper considers the problem of covert communication through dedicated voice channels by embedding secure data in the cover speech signal using spread spectrum additive data hiding. The cover speech signal is modeled by a Generalized Gaussian Distributed (GGD) random variable, and the Maximum A Posteriori (MAP) detector for extraction of the covert message is designed; its reliable performance is verified both analytically and by simulations. Adaptive estimation of the detector parameters is proposed to improve detector performance and overcome voice non-stationarity. The detector's bit error rate (BER) is investigated for both blind and semi-blind cases, in which the GGD shape parameter needed for optimum detection is estimated from either the stego or the cover signal, respectively. The simulation results also show that the proposed method achieves acceptable robustness against lossy compression attacks at the different compression rates of the Adaptive Multi-Rate (AMR) voice codec.},
  keywords = {adaptive estimation;data compression;error statistics;Gaussian distribution;maximum likelihood detection;spread spectrum communication;steganography;vocoders;generalized Gaussian distribution;AMR voice codec;adaptive multirate voice codec;compression rates;lossy compression attack;GGD shape parameter;BER;bit error rate;voice non-stationarity;adaptive estimation;MAP detector;maximum a posteriori detector;GGD random variable;speech signal;dedicated voice channels;covert communication;codec-distorted voice channels;spread spectrum additive data hiding;Speech;Detectors;Codecs;Bit error rate;Speech coding;GSM;Vocoders;Data Hiding;Generalized Gaussian Distribution (GGD);Maximum A Posteriori (MAP) detector;Adaptive Multi Rate (AMR) compression attack},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925355.pdf},
}
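
Spread-spectrum additive embedding adds a low-power, key-driven pseudo-random sequence to the host signal, and the receiver correlates the received signal against the same sequence to recover each bit. A minimal sketch, with a plain correlation detector standing in for the paper's GGD-based MAP detector and a Laplacian toy signal standing in for real speech:

import numpy as np

rng = np.random.default_rng(1)
N = 512                                   # samples per message bit (spreading factor)
cover = rng.laplace(scale=0.1, size=N)    # stand-in for one frame of cover speech
p = rng.choice([-1.0, 1.0], size=N)       # pseudo-random spreading sequence (shared key)
bit, alpha = 1, 0.01                      # hidden bit (+/-1) and embedding strength

stego = cover + alpha * bit * p           # additive spread-spectrum embedding

# Correlation receiver: the inner product with the key concentrates the hidden
# bit's energy while the (zero-mean) cover signal averages out.
decision = 1 if np.dot(stego, p) > 0 else -1
print(decision)

The paper's MAP detector replaces this correlator with a likelihood-ratio test matched to the GGD model of speech, which is what makes estimating the GGD shape parameter (blind vs. semi-blind) matter.
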
@InProceedings{6952943,
  author = {Y. Wu and D. J. Holland and M. D. Mantle and A. G. Wilson and S. Nowozin and A. Blake and L. F. Gladden},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {A Bayesian method to quantifying chemical composition using NMR: Application to porous media systems},
  year = {2014},
  pages = {2515-2519},
  abstract = {This paper describes a Bayesian approach for inferring the chemical composition of liquids in porous media obtained using nuclear magnetic resonance (NMR). The model analyzes NMR data automatically in the time domain, eliminating the operator dependence of a conventional spectroscopy approach. The technique is demonstrated and validated experimentally on both pure liquids and liquids imbibed in porous media systems, which are of significant interest in heterogeneous catalysis research. We discuss the challenges and practical solutions of parameter estimation in both systems. The proposed Bayesian NMR approach is shown to be more accurate and robust than a conventional spectroscopy approach, particularly for signals with a low signal-to-noise ratio (SNR) and a short lifetime.},
  keywords = {Bayes methods;chemical analysis;nuclear magnetic resonance;porous materials;Bayesian method;chemical composition;NMR;porous media systems;nuclear magnetic resonance;heterogeneous catalysis;parameter estimation;Nuclear magnetic resonance;Bayes methods;Chemicals;Liquids;Media;Signal to noise ratio;Magnetic liquids;NMR spectroscopy;Bayesian inference;porous media;chemical quantification},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569925479.pdf},
}
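
Time-domain NMR analysis typically models the free-induction decay as a sum of decaying complex exponentials whose amplitudes encode the composition. The toy sketch below assumes Gaussian noise and a flat prior, so the posterior mean of the amplitudes reduces to a least-squares solution; it also assumes the resonance frequencies and decay rates are known, unlike the paper's full Bayesian treatment, which infers them too.

import numpy as np

# Stand-in FID: two chemical species with assumed-known resonance frequencies (Hz)
# and decay rates (1/s); the unknowns are their amplitudes, i.e. the composition.
t = np.arange(1024) / 1000.0
freqs, decays = np.array([50.0, 120.0]), np.array([30.0, 45.0])
true_amps = np.array([1.0, 0.4])
basis = np.exp((2j * np.pi * freqs - decays)[None, :] * t[:, None])
fid = basis @ true_amps + 0.05 * (np.random.randn(1024) + 1j * np.random.randn(1024))

# Gaussian noise + flat prior => posterior mean of amplitudes = least squares.
amps, *_ = np.linalg.lstsq(basis, fid, rcond=None)
print(np.abs(amps))  # estimated composition, close to [1.0, 0.4]
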
@InProceedings{6952944,
  author = {M. Karlekar and A. Gupta},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Stochastic modeling of EEG rhythms with fractional Gaussian Noise},
  year = {2014},
  pages = {2520-2524},
  abstract = {This paper presents a novel approach to signal modeling for EEG rhythms. A new 3-stage DCT-based multirate filterbank is proposed for the decomposition of EEG signals into the brain rhythms: delta, theta, alpha, beta, and gamma. It is shown that the theta, alpha, and gamma rhythms can be modeled as 1st-order fractional Gaussian noise (fGn), while the beta rhythms can be modeled as 2nd-order fGn processes. These fGn processes are stationary random processes. Further, it is shown that the delta subband carries all the nonstationarity of the EEG signal and can be modeled as a 1st-order fractional Brownian motion (fBm) process. Each subband model is characterized by its Hurst exponent, estimated using the maximum likelihood (ML) method. The modeling approach has been tested on two public databases.},
  keywords = {bioelectric potentials;brain;Brownian motion;discrete cosine transforms;electroencephalography;Gaussian noise;maximum likelihood estimation;medical signal processing;stochastic modeling;EEG rhythms;signal modeling;EEG signal rhythms;3-stage DCT based multirate filterbank;EEG signal decomposition;brain rhythm;delta rhythm;theta rhythm;alpha rhythm;beta rhythm;gamma rhythm;1st-order fractional Gaussian Noise;1st order fGn;2nd-order fGn processes;EEG signal nonstationarity;1st-order fractional Brownian motion;Hurst exponent;maximum likelihood estimation method;ML estimation method;public databases;discrete cosine transform;Electroencephalography;Discrete cosine transforms;Brain models;Brownian motion;Maximum likelihood estimation;Fractional Gaussian noise;EEG;DCT},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569926449.pdf},
}
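
The Hurst exponent of an fGn process can be ML-estimated from the exact Gaussian likelihood built from the fGn autocovariance gamma(k) = (sigma^2/2)(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}). Below is a small grid-search sketch over plain fGn; the paper's higher-order fGn variants and its DCT filterbank are not reproduced, and the white-noise input is a stand-in for one EEG subband.

import numpy as np

def fgn_autocov(H, n, sigma2=1.0):
    """Autocovariance of fractional Gaussian noise at lags 0..n-1."""
    k = np.arange(n)
    return 0.5 * sigma2 * (np.abs(k + 1)**(2*H) - 2*np.abs(k)**(2*H) + np.abs(k - 1)**(2*H))

def fgn_loglik(x, H):
    """Exact Gaussian log-likelihood of a sample path under an fGn model."""
    n = len(x)
    gamma = fgn_autocov(H, n, sigma2=np.var(x))   # plug-in variance estimate
    # Toeplitz covariance matrix C[i, j] = gamma(|i - j|)
    C = gamma[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + x @ np.linalg.solve(C, x))

x = np.random.randn(256)          # white noise, i.e. fGn with H = 0.5
grid = np.linspace(0.05, 0.95, 19)
H_hat = grid[np.argmax([fgn_loglik(x, H) for H in grid])]
print(H_hat)                      # should land near 0.5
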
@InProceedings{6952945,
  author = {P. S. Raj and D. Hatzinakos},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Feasibility of single-arm single-lead ECG biometrics},
  year = {2014},
  pages = {2525-2529},
  abstract = {This work analyses the feasibility of electrocardiogram (ECG) biometrics using signals from a novel single-arm, single-lead acquisition methodology. These signals are used and analysed in a biometric recognition system operating in verification mode, validating the identity of a person enrolled in a system database. The recognition algorithm is Autocorrelation/Linear Discriminant Analysis (AC/LDA), combined with preprocessing stages tuned to the characteristics of single-arm ECG. Signals were collected from 23 subjects in three scenarios, and the performance of the proposed scheme is evaluated. A considerably low Equal Error Rate of 4.34% is obtained using the described method, establishing these signals as viable candidates for ECG biometrics.},
  keywords = {biometrics (access control);database management systems;electrocardiography;medical signal processing;single arm single-lead ECG biometrics;electrocardiogram;biometric recognition system;verification mode;database system;autocorrelation/linear discriminant analysis;AC/LDA;equal error rate;Electrocardiography;Biometrics (access control);Electrodes;Databases;Correlation;Performance analysis;Measurement;ECG;single arm;single lead;feasibility;AC/LDA;biometrics;equal error rate;verification},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569927349.pdf},
}
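
AC/LDA first summarizes each ECG window by its normalized autocorrelation (which sidesteps exact heartbeat alignment), then applies linear discriminant analysis to the resulting feature vectors. A stand-in sketch with synthetic "subjects", assuming scikit-learn; the paper's preprocessing tuned to single-arm signals and its verification-mode thresholding are omitted.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ac_features(ecg, n_lags=64):
    """Normalized autocorrelation of an ECG window, as in the AC stage of AC/LDA."""
    x = ecg - ecg.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    return ac[:n_lags] / ac[0]

rng = np.random.default_rng(2)
# Stand-in enrollment data: 5 "subjects" x 20 windows; each subject is noise plus
# a periodic component of a subject-specific period, mimicking distinct ECG shapes.
X = np.array([ac_features(rng.normal(size=500) + np.sin(np.arange(500) / (3 + s)))
              for s in range(5) for _ in range(20)])
y = np.repeat(np.arange(5), 20)
lda = LinearDiscriminantAnalysis().fit(X, y)   # LDA stage projects AC features
print(lda.score(X, y))
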
@InProceedings{6952946,
  author = {G. Costante and L. Porzi and O. Lanz and P. Valigi and E. Ricci},
  booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
  title = {Personalizing a smartwatch-based gesture interface with transfer learning},
  year = {2014},
  pages = {2530-2534},
  abstract = {The widespread adoption of mobile devices has led to increased interest in smartphone-based solutions for supporting visually impaired users. Unfortunately, the touch-based interaction paradigm commonly adopted on most devices is not convenient for these users, motivating the study of different interaction technologies. In this paper, following up on our previous work, we consider a system where a smartwatch is exploited to provide hands-free interaction, through arm gestures, with an assistive application running on a smartphone. In particular, we focus on the task of effortlessly customizing the gesture recognition system with new gestures specified by the user. To address this problem, we propose an approach based on a novel transfer metric learning algorithm, which exploits prior knowledge about a predefined set of gestures to improve the recognition of user-defined ones, while requiring only a few new training samples. The effectiveness of the proposed method is demonstrated through an extensive experimental evaluation.},
  keywords = {gesture recognition;Haar transforms;learning (artificial intelligence);smart phones;user interfaces;smartwatch-based gesture interface;smartphone-based solutions;touch-based interaction paradigm;hands-free interaction;arm gestures;gesture recognition system;novel transfer metric learning algorithm;Haar coefficients;Abstracts;Computers;Gesture recognition;smartwatch;transfer learning;Haar features;visual impairments},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569922319.pdf},
}
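
Transfer metric learning reuses a distance metric learned on abundant data from predefined gestures to recognize user-defined gestures from only a few enrollment samples. The sketch below substitutes a simple Mahalanobis whitening transform, learned from the "predefined" data, for the paper's discriminatively learned transfer metric; all data and dimensions are illustrative stand-ins.

import numpy as np

rng = np.random.default_rng(3)

# Prior knowledge: many samples of predefined gestures (feature vectors).
X_pre = rng.normal(size=(300, 20))
# Learn a whitening transform from the predefined data; this plays the role of
# the transferred metric (the paper learns it discriminatively instead).
mu = X_pre.mean(0)
C = np.cov(X_pre, rowvar=False) + 1e-6 * np.eye(20)   # regularized covariance
L = np.linalg.cholesky(np.linalg.inv(C))              # C^-1 = L @ L.T
project = lambda X: (X - mu) @ L                      # Mahalanobis whitening

# User-defined gestures: only a handful of enrollment samples per new class.
templates = {g: project(rng.normal(loc=g, size=(3, 20))).mean(0) for g in range(3)}

def recognize(x):
    """Nearest template under the transferred metric."""
    z = project(x[None, :])[0]
    return min(templates, key=lambda g: np.linalg.norm(z - templates[g]))

print(recognize(rng.normal(loc=1, size=20)))   # likely classified as gesture 1
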
\n"}; document.write(bibbase_data.data);