Learning to Rank Scientific Documents from the Crowd. Lingeman, J. M. & Yu, H. arXiv:1611.01400, November 2016. Paper: https://arxiv.org/pdf/1611.01400v1.pdf
Abstract: Finding related published articles is an important task in any science, but with the explosion of new work in the biomedical domain it has become especially challenging. Most existing methodologies use text similarity metrics to identify whether two articles are related. However, biomedical knowledge discovery is hypothesis-driven: the most related articles may not be the ones with the highest text similarity. In this study, we first develop an innovative crowd-sourcing approach to build an expert-annotated document-ranking corpus. Using this corpus as the gold standard, we then evaluate approaches that use text similarity to rank the relatedness of articles. Finally, we develop and evaluate a new supervised model to automatically rank related scientific articles. Our results show that the authors' rankings differ significantly from rankings by text-similarity-based models. By training a learning-to-rank model on a subset of the annotated corpus, we found that the best supervised learning-to-rank model (SVM-Rank) significantly surpassed state-of-the-art baseline systems.
@article{lingeman_learning_2016,
  title = {Learning to {Rank} {Scientific} {Documents} from the {Crowd}},
  url = {https://arxiv.org/pdf/1611.01400v1.pdf},
  abstract = {Finding related published articles is an important task in any science, but with the explosion of new work in the biomedical domain it has become especially challenging. Most existing methodologies use text similarity metrics to identify whether two articles are related. However, biomedical knowledge discovery is hypothesis-driven: the most related articles may not be the ones with the highest text similarity. In this study, we first develop an innovative crowd-sourcing approach to build an expert-annotated document-ranking corpus. Using this corpus as the gold standard, we then evaluate approaches that use text similarity to rank the relatedness of articles. Finally, we develop and evaluate a new supervised model to automatically rank related scientific articles. Our results show that the authors' rankings differ significantly from rankings by text-similarity-based models. By training a learning-to-rank model on a subset of the annotated corpus, we found that the best supervised learning-to-rank model (SVM-Rank) significantly surpassed state-of-the-art baseline systems.},
  journal = {arXiv:1611.01400},
  author = {Lingeman, Jesse M and Yu, Hong},
  month = nov,
  year = {2016},
}
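The abstract contrasts two ranking strategies: unsupervised text-similarity baselines and a supervised pairwise learning-to-rank model (SVM-Rank). The sketch below is a generic illustration of both ideas, assuming TF-IDF cosine similarity for the baseline and a linear-SVM reduction of pairwise ranking; the example documents, features, labels, and library choices are hypothetical and do not reproduce the paper's actual features or SVM-Rank configuration.

# Illustrative sketch only: a TF-IDF cosine-similarity baseline and a
# pairwise (RankSVM-style) reduction to binary classification. All data,
# features, and labels below are hypothetical placeholders.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import LinearSVC

# Baseline: rank candidate articles by cosine similarity to a query article.
query = "Ranking related biomedical articles with learning to rank"
candidates = [
    "Crowd-sourced annotation of document relatedness",
    "Hypothesis-driven knowledge discovery in biomedicine",
    "A survey of text similarity metrics",
]
vec = TfidfVectorizer().fit([query] + candidates)
sims = cosine_similarity(vec.transform([query]), vec.transform(candidates))[0]
baseline_order = np.argsort(-sims)  # candidate indices, most similar first

# Pairwise learning-to-rank: turn graded relevance labels into preference
# pairs and train a linear SVM on feature differences (the RankSVM idea).
X = np.array([[0.9, 0.2], [0.4, 0.7], [0.1, 0.1]])  # hypothetical features per (query, candidate) pair
y = np.array([2, 1, 0])                             # hypothetical annotator relevance grades
diffs, prefs = [], []
for i in range(len(y)):
    for j in range(len(y)):
        if y[i] > y[j]:  # candidate i should rank above candidate j
            diffs += [X[i] - X[j], X[j] - X[i]]
            prefs += [1, -1]
ranker = LinearSVC().fit(np.array(diffs), np.array(prefs))
learned_order = np.argsort(-(X @ ranker.coef_.ravel()))  # rank by learned scoring function

The learned weight vector defines a scoring function over per-candidate feature vectors, and sorting candidates by that score is what turns the pairwise formulation back into a ranking.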
{"_id":"bdwhvqH6ZKHnpLTsS","bibbaseid":"lingeman-yu-learningtorankscientificdocumentsfromthecrowd-2016","author_short":["Lingeman, J. M","Yu, H."],"bibdata":{"bibtype":"article","type":"article","title":"Learning to Rank Scientific Documents from the Crowd","url":"https://arxiv.org/pdf/1611.01400v1.pdf","abstract":"Finding related published articles is an important task in any science, but with the explosion of new work in the biomedical domain it has become especially challenging. Most existing methodologies use text similarity metrics to identify whether two articles are related or not. However biomedical knowledge discovery is hypothesis-driven. The most related articles may not be ones with the highest text similarities. In this study, we first develop an innovative crowd-sourcing approach to build an expert-annotated document-ranking corpus. Using this corpus as the gold standard, we then evaluate the approaches of using text similarity to rank the relatedness of articles. Finally, we develop and evaluate a new supervised model to automatically rank related scientific articles. Our results show that authors' ranking differ significantly from rankings by text-similarity-based models. By training a learning-to-rank model on a subset of the annotated corpus, we found the best supervised learning-to-rank model (SVM-Rank) significantly surpassed state-of-the-art baseline systems.","journal":"arXiv:1611.01400","author":[{"propositions":[],"lastnames":["Lingeman"],"firstnames":["Jesse","M"],"suffixes":[]},{"propositions":[],"lastnames":["Yu"],"firstnames":["Hong"],"suffixes":[]}],"month":"November","year":"2016","bibtex":"@article{lingeman_learning_2016,\n\ttitle = {Learning to {Rank} {Scientific} {Documents} from the {Crowd}},\n\turl = {https://arxiv.org/pdf/1611.01400v1.pdf},\n\tabstract = {Finding related published articles is an important task in any science, but with the explosion of new work in the biomedical domain it has become especially challenging. Most existing methodologies use text similarity metrics to identify whether two articles are related or not. However biomedical knowledge discovery is hypothesis-driven. The most related articles may not be ones with the highest text similarities. In this study, we first develop an innovative crowd-sourcing approach to build an expert-annotated document-ranking corpus. Using this corpus as the gold standard, we then evaluate the approaches of using text similarity to rank the relatedness of articles. Finally, we develop and evaluate a new supervised model to automatically rank related scientific articles. Our results show that authors' ranking differ significantly from rankings by text-similarity-based models. By training a learning-to-rank model on a subset of the annotated corpus, we found the best supervised learning-to-rank model (SVM-Rank) significantly surpassed state-of-the-art baseline systems.},\n\tjournal = {arXiv:1611.01400},\n\tauthor = {Lingeman, Jesse M and Yu, Hong},\n\tmonth = nov,\n\tyear = {2016},\n}\n\n","author_short":["Lingeman, J. 
M","Yu, H."],"key":"lingeman_learning_2016","id":"lingeman_learning_2016","bibbaseid":"lingeman-yu-learningtorankscientificdocumentsfromthecrowd-2016","role":"author","urls":{"Paper":"https://arxiv.org/pdf/1611.01400v1.pdf"},"metadata":{"authorlinks":{}},"html":""},"bibtype":"article","biburl":"http://fenway.cs.uml.edu/papers/pubs-all.bib","dataSources":["TqaA9miSB65nRfS5H"],"keywords":[],"search_terms":["learning","rank","scientific","documents","crowd","lingeman","yu"],"title":"Learning to Rank Scientific Documents from the Crowd","year":2016}