A survey on multi-view learning. Xu, C., Tao, D., & Xu, C. April, 2013. arXiv:1304.5634 [cs]
@misc{xu_survey_2013,
title = {A survey on multi-view learning},
url = {http://arxiv.org/abs/1304.5634},
abstract = {In recent years, a great many methods of learning from multi-view data by considering the diversity of different views have been proposed. These views may be obtained from multiple sources or different feature subsets. For example, a person can be identified by face, fingerprint, signature or iris with information obtained from multiple sources, while an image can be represented by its color or texture features, which can be seen as different feature subsets of the image. In trying to organize and highlight similarities and differences between the variety of multi-view learning approaches, we review a number of representative multi-view learning algorithms in different areas and classify them into three groups: 1) co-training, 2) multiple kernel learning, and 3) subspace learning. Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this latent subspace. Though there is significant variance in the approaches to integrating multiple views to improve learning performance, they mainly exploit either the consensus principle or the complementary principle to ensure the success of multi-view learning. Since access to multiple views is the foundation of multi-view learning, beyond studying how to learn a model from multiple views, it is also valuable to study how to construct multiple views and how to evaluate them. Overall, by exploiting the consistency and complementary properties of different views, multi-view learning becomes more effective and more promising, and achieves better generalization than single-view learning.},
language = {en},
urldate = {2024-01-07},
publisher = {arXiv},
author = {Xu, Chang and Tao, Dacheng and Xu, Chao},
month = apr,
year = {2013},
note = {arXiv:1304.5634 [cs]},
keywords = {Computer Science - Machine Learning},
}
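The abstract's second category, multiple kernel learning, combines per-view kernels either linearly or non-linearly. As a rough illustration of the linear case (not code from the survey itself — the views, features, and weights below are made up), each view gets its own kernel matrix and the combined kernel is a convex combination:

```python
import numpy as np

def linear_kernel(X):
    """Linear kernel: inner products between all pairs of samples."""
    return X @ X.T

def rbf_kernel(X, gamma=1.0):
    """RBF (Gaussian) kernel from pairwise squared distances."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def combine_kernels(kernels, weights):
    """Convex (linear) combination of kernel matrices.

    Non-negative weights summing to 1 keep the result a valid
    (positive semi-definite) kernel, since PSD matrices are closed
    under non-negative linear combination.
    """
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return sum(w * K for w, K in zip(weights, kernels))

# Two hypothetical views of the same 5 samples, e.g. color vs. texture
# features of an image (dimensions chosen arbitrarily for the sketch).
rng = np.random.default_rng(0)
X_color = rng.normal(size=(5, 3))
X_texture = rng.normal(size=(5, 4))

K = combine_kernels(
    [linear_kernel(X_color), rbf_kernel(X_texture)],
    weights=[0.6, 0.4],
)
```

The combined matrix `K` can then be handed to any kernel method (an SVM, kernel ridge regression, etc.); in full MKL the weights are learned jointly with the predictor rather than fixed as here.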