Weighted Task Regularization for Multitask Learning. Liu, Y., Wu, A., Guo, D., Yao, K., & Raghavendra, C. S. In 2013 IEEE 13th International Conference on Data Mining Workshops, pages 399–406, December 2013. ISSN: 2375-9259
Multitask learning has proven more effective than traditional single-task learning on many real-world problems, because it simultaneously transfers knowledge among different tasks that may each have limited labeled data. However, building a reliable multitask learning model requires nontrivial effort to establish the relatedness between tasks. When the number of tasks is small, the learning outcome may suffer if outlier tasks inappropriately bias the majority. Rather than identifying and discarding such outlier tasks, we present a weighted regularized multitask learning framework, built on regularized multitask learning, that uses statistical metrics such as the Kullback-Leibler divergence to assign task weights before regularization; this robustly reduces the impact of outlier tasks and yields better learned models for all tasks. We then show that this formulation can be solved in its dual form, like optimizing a standard support vector machine with a modified kernel. Experiments on both a synthetic dataset and a real-world dataset from the petroleum industry show that our method outperforms existing approaches.
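The paper itself ships no code, so the following is only a rough illustration of the two ingredients the abstract names: KL-divergence-based task weights, and a weighted coupling penalty in regularized multitask learning. It is a minimal Python/NumPy sketch under stated assumptions: the per-task Gaussian fit, the exponential divergence-to-weight map, and all function names are hypothetical, and a squared-loss alternating solver stands in for the paper's SVM dual formulation.

import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    # KL( N(mu_p, var_p) || N(mu_q, var_q) ) for univariate Gaussians.
    return 0.5 * np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / (2.0 * var_q) - 0.5

def task_weights(task_targets):
    # Assumption: fit a Gaussian to each task's targets, score each task by its
    # mean KL divergence to the other tasks, and map high divergence (a likely
    # outlier task) to a low weight.
    stats = [(y.mean(), y.var() + 1e-8) for y in task_targets]
    weights = []
    for i, (mu_i, var_i) in enumerate(stats):
        kl = [gaussian_kl(mu_i, var_i, mu_j, var_j)
              for j, (mu_j, var_j) in enumerate(stats) if j != i]
        weights.append(np.exp(-np.mean(kl)))  # outliers get exponentially smaller weight
    w = np.asarray(weights)
    return w / w.sum()

def weighted_mtl_ridge(Xs, ys, w, lam1=1.0, lam2=1e-3, iters=100):
    # Squared-loss stand-in for the paper's SVM formulation. Each task model is
    # w0 + v_t; the coupling penalty lam1 * w[t] * ||v_t||^2 is scaled by the
    # task weight, so a low-weight (outlier) task is allowed a large private
    # offset v_t and stops dragging the shared model w0.
    d = Xs[0].shape[1]
    w0 = np.zeros(d)
    vs = [np.zeros(d) for _ in Xs]
    for _ in range(iters):
        for t, (X, y) in enumerate(zip(Xs, ys)):
            A = X.T @ X + lam1 * w[t] * np.eye(d)
            vs[t] = np.linalg.solve(A, X.T @ (y - X @ w0))
        A = sum(X.T @ X for X in Xs) + lam2 * np.eye(d)
        b = sum(X.T @ (y - X @ v) for X, y, v in zip(Xs, ys, vs))
        w0 = np.linalg.solve(A, b)
    return w0, vs

# Toy usage: three related tasks plus one mean-shifted outlier task.
rng = np.random.default_rng(0)
beta = rng.normal(size=5)
Xs = [rng.normal(size=(40, 5)) for _ in range(4)]
ys = [X @ beta + 0.1 * rng.normal(size=40) for X in Xs[:3]]
ys.append(Xs[3] @ beta + 5.0 + 0.1 * rng.normal(size=40))  # outlier task
w = task_weights(ys)                  # the outlier task receives a much smaller weight
w0, vs = weighted_mtl_ridge(Xs, ys, w)

The alternating updates are plain coordinate descent on a convex objective: the per-task solve decouples given the shared model, and weighting only the coupling term (not the data fit) is one way to read "assign weights prior to the regularization process"; the paper's actual dual-form SVM solution may differ.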
@inproceedings{liu_weighted_2013,
	title = {Weighted {Task} {Regularization} for {Multitask} {Learning}},
	doi = {10.1109/ICDMW.2013.158},
	abstract = {Multitask learning has proven more effective than traditional single-task learning on many real-world problems, because it simultaneously transfers knowledge among different tasks that may each have limited labeled data. However, building a reliable multitask learning model requires nontrivial effort to establish the relatedness between tasks. When the number of tasks is small, the learning outcome may suffer if outlier tasks inappropriately bias the majority. Rather than identifying and discarding such outlier tasks, we present a weighted regularized multitask learning framework, built on regularized multitask learning, that uses statistical metrics such as the Kullback-Leibler divergence to assign task weights before regularization; this robustly reduces the impact of outlier tasks and yields better learned models for all tasks. We then show that this formulation can be solved in its dual form, like optimizing a standard support vector machine with a modified kernel. Experiments on both a synthetic dataset and a real-world dataset from the petroleum industry show that our method outperforms existing approaches.},
	booktitle = {2013 {IEEE} 13th {International} {Conference} on {Data} {Mining} {Workshops}},
	author = {Liu, Yintao and Wu, Anqi and Guo, Dong and Yao, Ke-Thia and Raghavendra, Cauligi S.},
	month = dec,
	year = {2013},
	note = {ISSN: 2375-9259},
	keywords = {Kernel, support vector machines, Training, Accuracy, Optimization, anomaly detection, Educational institutions, Equations, learning (artificial intelligence), data handling, knowledge transfer, Kullback-Leibler divergence, multitask learning, outlier task, regularization process, statistical metrics, svm, weighted regularization, weighted regularized multitask learning framework, weighted task regularization},
	pages = {399--406},
}
