Dual Decomposition for Parsing with Non-Projective Head Automata. Koo, T.; Rush, A. M.; Collins, M.; Jaakkola, T.; and Sontag, D. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1288–1298, 2010.
This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. The dual decomposition algorithms are simple and efficient, relying on standard dynamic programming and minimum spanning tree algorithms. They provably solve an LP relaxation of the non-projective parsing problem. Empirically the LP relaxation is very often tight: for many languages, exact solutions are achieved on over 98% of test sentences. The accuracy of our models is higher than previous work on a broad range of datasets.
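A minimal sketch of the dual decomposition scheme the abstract refers to, in generic notation (illustrative only; the paper's exact formulation and indexing may differ). The parser combines two models that must agree on which head-modifier arcs appear in the tree: a head-automaton model with score f(y), maximized by dynamic programming, and an arc-factored model with score g(z), maximized by a minimum spanning tree algorithm. A Lagrange multiplier u(h, m) for each candidate arc (h, m) enforces agreement, giving the dual objective

    L(u) = \max_y \left[ f(y) + \sum_{(h,m)} u(h,m)\, y(h,m) \right] + \max_z \left[ g(z) - \sum_{(h,m)} u(h,m)\, z(h,m) \right],

where y(h,m) and z(h,m) are 1 if the corresponding structure uses arc (h, m) and 0 otherwise. L(u) is minimized by subgradient descent, alternating calls to the two solvers with the update

    u^{(k+1)}(h,m) = u^{(k)}(h,m) - \alpha_k \left( y^{(k)}(h,m) - z^{(k)}(h,m) \right).

If the two subproblems return identical arc sets, y^{(k)} = z^{(k)}, that agreement certifies an exact solution to the LP relaxation; the abstract's figure of over 98% of test sentences refers to how often this certificate is obtained.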
@inproceedings{KooEtAl_emnlp10,
 author = {Terry Koo and Alexander M. Rush and Michael Collins and Tommi Jaakkola and David Sontag},
 title = {Dual Decomposition for Parsing with Non-Projective Head Automata},
 booktitle = {Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
 pages = {1288--1298},
 year = {2010},
 keywords = {Machine learning, Natural language processing, Approximate inference in graphical models},
 url_Paper = {http://people.csail.mit.edu/dsontag/papers/KooEtAl_emnlp10.pdf},
 abstract = {This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. The dual decomposition algorithms are simple and efficient, relying on standard dynamic programming and minimum spanning tree algorithms. They provably solve an LP relaxation of the non-projective parsing problem. Empirically the LP relaxation is very often tight: for many languages, exact solutions are achieved on over 98\% of test sentences. The accuracy of our models is higher than previous work on a broad range of datasets.}
}