A learning-based visual saliency fusion model for High Dynamic Range video (LBVS-HDR). Banitalebi-Dehkordi, A., Dong, Y., Pourazad, M. T., & Nasiopoulos, P. In 2015 23rd European Signal Processing Conference (EUSIPCO), pages 1541-1545, August 2015.
Saliency prediction for Standard Dynamic Range (SDR) videos has been well explored in the last decade. However, limited studies are available on High Dynamic Range (HDR) Visual Attention Models (VAMs). Considering that the characteristics of HDR content in terms of dynamic range and color gamut are quite different from those of SDR content, it is essential to identify the importance of different saliency attributes of HDR videos for designing a VAM and to understand how to combine these features. To this end, we propose a learning-based visual saliency fusion method for HDR content (LBVS-HDR) to combine various visual saliency features. In our approach, various conspicuity maps are extracted from HDR data, and a Random Forests algorithm is then used to fuse these conspicuity maps, trained on data collected from an eye-tracking experiment. Performance evaluations demonstrate the superiority of the proposed fusion method over other existing fusion methods.
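The fusion step described in the abstract can be sketched as follows: stack the per-pixel values of each conspicuity map into a feature vector and regress against eye-tracking fixation density with a Random Forests model. This is a minimal illustration, not the authors' implementation; the map sizes, number of features, and the random data standing in for conspicuity maps and fixation ground truth are all hypothetical, and scikit-learn's `RandomForestRegressor` is assumed as the Random Forests implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fuse_saliency_maps(conspicuity_maps, model):
    """Fuse several conspicuity maps into one saliency map.

    Each pixel becomes one sample whose features are that pixel's
    values across all conspicuity maps; the trained model predicts
    the fused saliency value for that pixel.
    """
    H, W = conspicuity_maps[0].shape
    X = np.stack([m.ravel() for m in conspicuity_maps], axis=1)  # (H*W, n_maps)
    return model.predict(X).reshape(H, W)

# Hypothetical training data: random arrays stand in for the
# conspicuity maps and the eye-tracking fixation density map.
rng = np.random.default_rng(0)
H, W, n_maps = 32, 32, 3
train_maps = [rng.random((H, W)) for _ in range(n_maps)]
fixation_density = rng.random((H, W))  # stand-in for eye-tracking ground truth

X_train = np.stack([m.ravel() for m in train_maps], axis=1)
y_train = fixation_density.ravel()

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Apply the learned fusion to a new set of conspicuity maps.
test_maps = [rng.random((H, W)) for _ in range(n_maps)]
fused = fuse_saliency_maps(test_maps, model)
```

A learned fusion of this kind lets the relative weight of each saliency attribute be driven by actual gaze data rather than fixed by hand, which is the motivation the abstract gives for training on an eye-tracking experiment.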
