Support Vector Learning for Semantic Argument Classification. Pradhan, S., Hacioglu, K., Krugler, V., Ward, W., Martin, J. H., & Jurafsky, D. Machine Learning, 60(1-3):11-39, Kluwer Academic Publishers, 2005.
The natural language processing community has recently experienced a growth of interest in domain-independent shallow semantic parsing: the process of assigning a WHO did WHAT to WHOM, WHEN, WHERE, WHY, HOW, etc. structure to plain text. This process entails identifying groups of words in a sentence that represent these semantic arguments and assigning specific labels to them. It could play a key role in NLP tasks like Information Extraction, Question Answering and Summarization. We propose a machine learning algorithm for semantic role parsing, extending the work of Gildea and Jurafsky (2002), Surdeanu et al. (2003) and others. Our algorithm is based on Support Vector Machines, which we show give a large improvement in performance over earlier classifiers. We show performance improvements through a number of new features designed to improve generalization to unseen data, such as automatic clustering of verbs. We also report on various analytic studies examining which features are most important, comparing our classifier to other machine learning algorithms in the literature, and testing its generalization to a new test set from a different genre. On the task of assigning semantic labels to the PropBank (Kingsbury, Palmer, & Marcus, 2002) corpus, our final system has a precision of 84% and a recall of 75%, which are the best results currently reported for this task. Finally, we explore a completely different architecture which does not require a deep syntactic parse. We reformulate the task as a combined chunking and classification problem, thus allowing our algorithm to be applied to new languages or genres of text for which statistical syntactic parsers may not be available.
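As a rough illustration of the classification step the abstract describes (not the authors' implementation), the sketch below trains a polynomial-kernel SVM over a few of the categorical features the paper inherits from Gildea and Jurafsky (2002), such as the predicate, voice, phrase type, head word, position, and parse-tree path. The scikit-learn calls and the toy training examples are assumptions made purely for illustration.

# Minimal sketch of SVM-based semantic argument classification in the spirit of
# Pradhan et al. (2005). The feature names follow the Gildea & Jurafsky (2002)
# feature set described in the paper; the scikit-learn setup and toy data are
# illustrative assumptions, not the authors' system.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC

# Each candidate constituent is described by categorical features: the predicate
# lemma, its voice, the constituent's phrase type and head word, its position
# relative to the predicate, and the syntactic path to the predicate.
train_features = [
    {"predicate": "give", "voice": "active", "phrase_type": "NP",
     "head_word": "board", "position": "before", "path": "NP^S_VP_VBD"},
    {"predicate": "give", "voice": "active", "phrase_type": "NP",
     "head_word": "seat", "position": "after", "path": "NP^VP_VBD"},
    {"predicate": "give", "voice": "active", "phrase_type": "PP",
     "head_word": "to", "position": "after", "path": "PP^VP_VBD"},
]
train_labels = ["ARG0", "ARG1", "ARG2"]

# One-hot encode the categorical features and train a degree-2 polynomial-kernel
# SVM (the kernel the paper reports using).
vectorizer = DictVectorizer()
X = vectorizer.fit_transform(train_features)
classifier = SVC(kernel="poly", degree=2)
classifier.fit(X, train_labels)

# Classify a new candidate constituent.
test = {"predicate": "give", "voice": "active", "phrase_type": "NP",
        "head_word": "company", "position": "before", "path": "NP^S_VP_VBD"}
print(classifier.predict(vectorizer.transform([test])))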
@article{Pradhan2005b,
 title = {Support Vector Learning for Semantic Argument Classification},
 author = {Pradhan, Sameer and Hacioglu, Kadri and Krugler, Valerie and Ward, Wayne and Martin, James H. and Jurafsky, Daniel},
 journal = {Machine Learning},
 year = {2005},
 volume = {60},
 number = {1-3},
 pages = {11--39},
 publisher = {Kluwer Academic Publishers},
 url = {http://www.springerlink.com/index/10.1007/s10994-005-0912-2},
 keywords = {shallow semantic parsing, support vector machines, semantic role labeling},
 abstract = {The natural language processing community has recently experienced a growth of interest in domain-independent shallow semantic parsing: the process of assigning a WHO did WHAT to WHOM, WHEN, WHERE, WHY, HOW, etc. structure to plain text. This process entails identifying groups of words in a sentence that represent these semantic arguments and assigning specific labels to them. It could play a key role in NLP tasks like Information Extraction, Question Answering and Summarization. We propose a machine learning algorithm for semantic role parsing, extending the work of Gildea and Jurafsky (2002), Surdeanu et al. (2003) and others. Our algorithm is based on Support Vector Machines, which we show give a large improvement in performance over earlier classifiers. We show performance improvements through a number of new features designed to improve generalization to unseen data, such as automatic clustering of verbs. We also report on various analytic studies examining which features are most important, comparing our classifier to other machine learning algorithms in the literature, and testing its generalization to a new test set from a different genre. On the task of assigning semantic labels to the PropBank (Kingsbury, Palmer, & Marcus, 2002) corpus, our final system has a precision of 84% and a recall of 75%, which are the best results currently reported for this task. Finally, we explore a completely different architecture which does not require a deep syntactic parse. We reformulate the task as a combined chunking and classification problem, thus allowing our algorithm to be applied to new languages or genres of text for which statistical syntactic parsers may not be available.}
}
