VRank: Voting System on Ranking Model for Human Age Estimation. Lim, T., Hua, K., Wang, H., Zhao, K., Hu, M., & Cheng, W. 2015.
Ranking algorithms have shown strong potential for human age estimation. A common paradigm is to compare the input face with reference faces of known age to generate a ranking relation, whereby the first-ranked reference is used to label the input face. In this paper, we propose a framework, called Voting System on Ranking Model (VRank), that improves upon the typical ranking model by leveraging relational information (comparative relations, i.e., whether the input face is younger or older than each reference) to make a more robust estimation. Our approach has several advantages: first, comparative relations are explicitly exploited to benefit the estimation task; second, a few incorrect comparisons have little effect on the accuracy of the result, making the approach more robust than the conventional one; finally, we incorporate a deep learning architecture for training, which extracts robust facial features and increases the effectiveness of classification. Compared with the best results of state-of-the-art methods, VRank performs significantly better on all benchmarks, with relative improvements of 5.74% to 69.45% (FG-NET), 19.09% to 68.71% (MORPH), and 0.55% to 17.73% (IoG).
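To illustrate the voting idea described in the abstract, the sketch below shows, in Python, how pairwise younger/older comparisons against references of known age could be aggregated into a single age estimate by majority vote. This is a minimal, hypothetical reading of the description above, not the paper's implementation: the function name `vrank_estimate`, the candidate `age_range`, and the oracle comparator are assumptions; the paper itself trains a deep architecture to produce the comparisons.

```python
from collections import Counter


def vrank_estimate(input_face, references, is_older, age_range=range(0, 70)):
    """Aggregate pairwise younger/older comparisons into one age estimate.

    references : iterable of (face, known_age) pairs
    is_older   : callable(face_a, face_b) -> True if face_a looks older than face_b
                 (a stand-in for a learned pairwise comparator)
    Returns the candidate age supported by the most comparisons.
    """
    votes = Counter()
    for ref_face, ref_age in references:
        if is_older(input_face, ref_face):
            # Input judged older: every candidate age above ref_age gets one vote.
            votes.update(a for a in age_range if a > ref_age)
        else:
            # Input judged younger (or equal): ages up to ref_age get one vote.
            votes.update(a for a in age_range if a <= ref_age)
    # A few wrong comparisons only shift a few votes, so the winner is robust.
    return max(age_range, key=lambda a: votes[a])


# Toy usage: "faces" are just true ages and the comparator is a perfect oracle.
refs = [(age, age) for age in (5, 18, 25, 33, 41, 60)]
print(vrank_estimate(30, refs, is_older=lambda a, b: a > b))  # -> 26
```

In this toy run, the ages bracketed by the nearest younger and older references receive the most votes, so even if a comparison or two were flipped the estimate would move only slightly; that is the robustness the abstract claims over picking the single first-ranked reference.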
@inproceedings{lim_vrank_2015,
	title = {{VRank}: {Voting} {System} on {Ranking} {Model} for {Human} {Age} {Estimation}},
	issn = {2163-3517},
	abstract = {Ranking algorithms have shown strong potential for human age estimation. A common paradigm is to compare the input face with reference faces of known age to generate a ranking relation, whereby the first-ranked reference is used to label the input face. In this paper, we propose a framework, called Voting System on Ranking Model (VRank), that improves upon the typical ranking model by leveraging relational information (comparative relations, i.e., whether the input face is younger or older than each reference) to make a more robust estimation. Our approach has several advantages: first, comparative relations are explicitly exploited to benefit the estimation task; second, a few incorrect comparisons have little effect on the accuracy of the result, making the approach more robust than the conventional one; finally, we incorporate a deep learning architecture for training, which extracts robust facial features and increases the effectiveness of classification. Compared with the best results of state-of-the-art methods, VRank performs significantly better on all benchmarks, with relative improvements of 5.74\% to 69.45\% (FG-NET), 19.09\% to 68.71\% (MORPH), and 0.55\% to 17.73\% (IoG).},
	author = {Lim, Tekoing and Hua, Kai-Lung and Wang, Hong-Cyuan and Zhao, Kai-Wen and Hu, Min-Chun and Cheng, Wen-Huang},
	publisher = {IEEE},
	year = {2015},
}
