Higher-order Lexical Semantic Models for Non-factoid Answer Reranking. Fried, D., Jansen, P., Hahn-Powell, G., Surdeanu, M., & Clark, P. Transactions of the Association for Computational Linguistics, 3:197-210, 2015.

Abstract: Lexical semantic models provide robust performance for question answering, but, in general, can only capitalize on direct evidence seen during training. For example, monolingual alignment models acquire term alignment probabilities from semi-structured data such as question-answer pairs; neural network language models learn term embeddings from unstructured text. All this knowledge is then used to estimate the semantic similarity between question and answer candidates. We introduce a higher-order formalism that allows all these lexical semantic models to chain direct evidence to construct indirect associations between question and answer texts, by casting the task as the traversal of graphs that encode direct term associations. Using a corpus of 10,000 questions from Yahoo! Answers, we experimentally demonstrate that higher-order methods are broadly applicable to alignment and language models, across both word and syntactic representations. We show that an important criterion for success is controlling for the semantic drift that accumulates during graph traversal. All in all, the proposed higher-order approach improves five out of the six lexical semantic models investigated, with relative gains of up to +13% over their first-order variants.
@article{Fried:2015,
author = {Daniel Fried and Peter Jansen and Gustave Hahn-Powell and Mihai
Surdeanu and Peter Clark},
title = {Higher-order Lexical Semantic Models for Non-factoid Answer
Reranking},
journal = {Transactions of the Association for Computational Linguistics},
volume = {3},
year = {2015},
keywords = {},
abstract = {Lexical semantic models provide robust performance for question
answering, but, in general, can only capitalize on direct evidence seen
during training. For example, monolingual alignment models acquire term
alignment probabilities from semi-structured data such as question-answer
pairs; neural network language models learn term embeddings from
unstructured text. All this knowledge is then used to estimate the semantic
similarity between question and answer candidates. We introduce a
higher-order formalism that allows all these lexical semantic models to
chain direct evidence to construct indirect associations between question
and answer texts, by casting the task as the traversal of graphs that encode
direct term associations. Using a corpus of 10,000 questions from Yahoo!
Answers, we experimentally demonstrate that higher-order methods are broadly
applicable to alignment and language models, across both word and syntactic
representations. We show that an important criterion for success is
controlling for the semantic drift that accumulates during graph traversal.
All in all, the proposed higher-order approach improves five out of the six
lexical semantic models investigated, with relative gains of up to +13\%
over their first-order variants. },
issn = {2307-387X},
url = {https://tacl2013.cs.columbia.edu/ojs/index.php/tacl/article/view/550},
pages = {197-210}
}
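The core idea in the abstract, chaining direct term associations over a graph while damping longer paths to limit semantic drift, can be sketched as follows. This is an illustrative toy, not the paper's actual model: the terms, association values, and the `higher_order` function are made up for demonstration, and the damping scheme is just one simple way to down-weight longer traversal paths.

```python
import numpy as np

# Toy first-order term-association matrix (rows/cols = terms).
# In the paper's setting these probabilities would come from, e.g.,
# monolingual alignment models or embedding similarities; the
# values here are invented for illustration only.
terms = ["rain", "cloud", "water", "umbrella"]
A = np.array([
    [0.0, 0.6, 0.3, 0.1],
    [0.5, 0.0, 0.4, 0.1],
    [0.3, 0.4, 0.0, 0.3],
    [0.2, 0.1, 0.3, 0.0],
])
A = A / A.sum(axis=1, keepdims=True)  # row-normalize to probabilities

def higher_order(A, max_order=3, damping=0.5):
    """Chain direct associations into indirect ones by summing damped
    powers of the association matrix. The damping factor down-weights
    longer paths, a simple way to control the semantic drift that
    accumulates during graph traversal."""
    H = np.zeros_like(A)
    P = np.eye(A.shape[0])
    for k in range(1, max_order + 1):
        P = P @ A                       # k-step association probabilities
        H += (damping ** (k - 1)) * P   # longer chains count for less
    return H / H.sum(axis=1, keepdims=True)

H = higher_order(A)
# The indirect association rain -> umbrella is strengthened via
# intermediate terms (cloud, water), even though the direct
# first-order evidence for that pair is weak.
```

With `damping = 0` this reduces to the first-order model (only direct edges); larger values let more indirect evidence through at the cost of more drift, which mirrors the trade-off the abstract highlights.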