PassFrame: Generating Image-based Passwords from Egocentric Videos. Nguyen, L. N. & Sigg, S. In 2017 IEEE International Conference on Pervasive Computing and Communications (WiP), March 2017.
@INPROCEEDINGS{LeWiP_2017_PerCom,
author={Le Ngu Nguyen and Stephan Sigg},
booktitle={2017 IEEE International Conference on Pervasive Computing and Communications (WiP)},
title={PassFrame: Generating Image-based Passwords from Egocentric Videos},
year={2017},
abstract={Wearable cameras have been widely used not only in sport and public safety but also as life-logging gadgets. They record diverse visual information that is meaningful to the users. In this paper, we analyse first-person-view videos to develop a personalized user authentication mechanism. Our proposed algorithm generates provisional passwords which serve a variety of purposes, such as unlocking a mobile device or fallback authentication. First, representative frames are extracted from the egocentric videos. Then, they are split into distinguishable segments before a clustering procedure is applied to discard repetitive scenes. The whole process aims to retain memorable images to form the authentication challenges. We integrate eye-tracking data to select informative sequences of video frames and suggest an alternative method if an eye-facing camera is not available. To evaluate our system, we perform experiments in different settings, including object-interaction activities and traveling contexts. Even though our mechanism produces variable passwords, the login effort is comparable with approaches based on static challenges.},
month={March},
project={passframe},
group={ambience}}
