A Viewport-Driven Multi-Metric Fusion Approach for 360-Degree Video Quality Assessment. de Albuquerque Azevedo, R. G., Birkbeck, N., Janatra, I., Adsumilli, B., & Frossard, P. In 2020 IEEE International Conference on Multimedia and Expo (ICME), pages 1-6, July, 2020.
We propose a new viewport-based multi-metric fusion (MMF) approach for visual quality assessment of 360-degree (omnidirectional) videos. Our method computes multiple spatio-temporal objective quality metrics (features) on viewports extracted from 360-degree videos, and learns a model that combines these features into a metric that closely matches subjective quality scores. The main motivations for the proposed method are that: 1) quality metrics computed on viewports capture the user experience better than metrics computed on the projection domain; 2) no individual objective image quality metric always performs best for all types of visual distortions, while a learned combination of them is able to adapt to different conditions and produce better results overall. Experimental results, based on the largest available 360-degree video quality dataset, demonstrate that the proposed metric outperforms state-of-the-art 360-degree and 2D video quality metrics.
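As a rough illustration only, the sketch below shows the fusion idea under simplifying assumptions: viewports are taken as already extracted, PSNR and SSIM stand in for the paper's spatio-temporal feature set, a support vector regressor stands in for the learned combination, and the data are synthetic placeholders rather than real videos and subjective scores.

# Minimal sketch of viewport-based multi-metric fusion; the feature set
# (PSNR, SSIM), the SVR fusion model, and all data below are illustrative
# stand-ins, not the paper's exact pipeline.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from sklearn.svm import SVR

def viewport_features(ref_viewports, dist_viewports):
    """Average PSNR and SSIM across one video's viewports (hypothetical feature set)."""
    psnrs, ssims = [], []
    for ref, dist in zip(ref_viewports, dist_viewports):
        psnrs.append(peak_signal_noise_ratio(ref, dist, data_range=255))
        ssims.append(structural_similarity(ref, dist, data_range=255))
    return np.array([np.mean(psnrs), np.mean(ssims)])

# Synthetic stand-in data: 8 videos, 4 grayscale 64x64 viewports each, with
# additive noise as the "distortion" and random placeholder MOS values.
rng = np.random.default_rng(0)
refs = rng.integers(0, 256, size=(8, 4, 64, 64)).astype(np.uint8)
dists = np.clip(refs + rng.normal(0, 10, refs.shape), 0, 255).astype(np.uint8)
mos = rng.uniform(1, 5, size=8)

# Learned fusion: regress the per-video feature vectors onto the scores.
X = np.stack([viewport_features(r, d) for r, d in zip(refs, dists)])
model = SVR(kernel="rbf").fit(X, mos)
print(model.predict(X[:2]))  # fused quality estimates for the first two videos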
@inproceedings{2020_07_azevedo,
author={de Albuquerque Azevedo, Roberto Gerson and Birkbeck, Neil and
Janatra, Ivan and Adsumilli, Balu and Frossard, Pascal},
booktitle={2020 IEEE International Conference on Multimedia and Expo (ICME)},
title={A Viewport-Driven Multi-Metric Fusion Approach for 360-Degree Video
Quality Assessment},
year={2020},
pages={1-6},
abstract={We propose a new viewport-based multi-metric fusion (MMF) approach
for visual quality assessment of 360-degree (omnidirectional) videos. Our
method computes multiple spatio-temporal objective quality metrics
(features) on viewports extracted from 360-degree videos, and learns a model
that combines these features into a metric that closely matches subjective
quality scores. The main motivations for the proposed method are that: 1)
quality metrics computed on viewports capture the user experience better
than metrics computed on the projection domain; 2) no individual objective
image quality metric always performs best for all types of visual
distortions, while a learned combination of them is able to adapt to
different conditions and produce better results overall. Experimental
results, based on the largest available 360-degree video quality dataset,
demonstrate that the proposed metric outperforms state-of-the-art 360-degree
and 2D video quality metrics.},
keywords={Measurement;Quality assessment;Visualization;Video
recording;Feature extraction;Two dimensional displays;Distortion;visual
quality assessment;omnidirectional video;360-degree video;multi-metric
fusion},
doi={10.1109/ICME46284.2020.9102936},
issn={1945-788X},
month={July},
}
