Optical Flow Prediction for Blind and Non-Blind Video Error Concealment Using Deep Neural Networks. Sankisa, A., Punjabi, A., & Katsaggelos, A. K. International Journal of Multimedia Data Engineering and Management, 10(3):27–46, IGI Global, July 2019.
A novel optical flow prediction model using an adaptable deep neural network architecture for blind and non-blind error concealment of videos degraded by transmission loss is presented. The two-stream network model is trained by separating the horizontal and vertical motion fields which are passed through two similar parallel pipelines that include traditional convolutional (Conv) and convolutional long short-term memory (ConvLSTM) layers. The ConvLSTM layers extract temporally correlated motion information while the Conv layers correlate motion spatially. The optical flows used as input to the two-pipeline prediction network are obtained through a flow generation network that can be easily interchanged, increasing the adaptability of the overall end-to-end architecture. The performance of the proposed model is evaluated using real-world packet loss scenarios. Standard video quality metrics are used to compare frames reconstructed using predicted optical flows with those reconstructed using “ground-truth” flows obtained directly from the generator.
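The two-stream design described in the abstract (horizontal and vertical motion fields processed by separate, parallel Conv/ConvLSTM pipelines) can be sketched as follows. This is an illustrative single-channel NumPy toy under assumed shapes and random weights, not the authors' implementation; all function and class names here are hypothetical:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 2-D convolution with zero padding ('same' output size)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros(x.shape, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Single-channel ConvLSTM cell: the four gates are computed by
    convolving the input frame and the hidden state with learned kernels."""
    def __init__(self, ksize=3, seed=0):
        rng = np.random.default_rng(seed)
        # one (input, hidden) kernel pair per gate: input, forget, output, candidate
        self.Wx = rng.normal(0.0, 0.1, (4, ksize, ksize))
        self.Wh = rng.normal(0.0, 0.1, (4, ksize, ksize))

    def step(self, x, h, c):
        gates = [conv2d_same(x, self.Wx[g]) + conv2d_same(h, self.Wh[g])
                 for g in range(4)]
        i, f, o = sigmoid(gates[0]), sigmoid(gates[1]), sigmoid(gates[2])
        g = np.tanh(gates[3])
        c_new = f * c + i * g          # temporal memory update
        h_new = o * np.tanh(c_new)     # spatially structured hidden state
        return h_new, c_new

def predict_flow(flow_seq, cell_u, cell_v):
    """Two-stream pass: run the u (horizontal) and v (vertical) fields through
    separate ConvLSTM pipelines; the final hidden states serve as the
    predicted next flow field."""
    T, H, W, _ = flow_seq.shape
    hu = cu = hv = cv = np.zeros((H, W))
    for t in range(T):
        hu, cu = cell_u.step(flow_seq[t, :, :, 0], hu, cu)
        hv, cv = cell_v.step(flow_seq[t, :, :, 1], hv, cv)
    return np.stack([hu, hv], axis=-1)

# toy input: 4 flow frames, 8x8 pixels, 2 channels (u, v)
flows = np.random.default_rng(1).normal(size=(4, 8, 8, 2))
pred = predict_flow(flows, ConvLSTMCell(seed=2), ConvLSTMCell(seed=3))
print(pred.shape)  # (8, 8, 2)
```

In the paper's full architecture the input flows come from an interchangeable flow-generation network and the pipelines also contain plain convolutional layers; the sketch above only shows the core idea of per-component recurrent-convolutional processing.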
@article{Arun2019,
abstract = {A novel optical flow prediction model using an adaptable deep neural network architecture for blind and non-blind error concealment of videos degraded by transmission loss is presented. The two-stream network model is trained by separating the horizontal and vertical motion fields which are passed through two similar parallel pipelines that include traditional convolutional (Conv) and convolutional long short-term memory (ConvLSTM) layers. The ConvLSTM layers extract temporally correlated motion information while the Conv layers correlate motion spatially. The optical flows used as input to the two-pipeline prediction network are obtained through a flow generation network that can be easily interchanged, increasing the adaptability of the overall end-to-end architecture. The performance of the proposed model is evaluated using real-world packet loss scenarios. Standard video quality metrics are used to compare frames reconstructed using predicted optical flows with those reconstructed using “ground-truth” flows obtained directly from the generator.},
author = {Sankisa, Arun and Punjabi, Arjun and Katsaggelos, Aggelos K.},
doi = {10.4018/IJMDEM.2019070102},
issn = {1947-8534},
journal = {International Journal of Multimedia Data Engineering and Management},
keywords = {CNN,ConvLSTM,Deep Neural Networks,Optical flow,Video Error Concealment},
month = {jul},
number = {3},
pages = {27--46},
publisher = {IGI Global},
title = {{Optical Flow Prediction for Blind and Non-Blind Video Error Concealment Using Deep Neural Networks}},
url = {http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/IJMDEM.2019070102},
volume = {10},
year = {2019}
}
