@article{lopez-etal-2013-learning,
  abstract = {Machine translation (MT) draws from several different disciplines, making it a complex subject to teach. There are excellent pedagogical texts, but problems in MT and current algorithms for solving them are best learned by doing. As a centerpiece of our MT course, we devised a series of open-ended challenges for students in which the goal was to improve performance on carefully constrained instances of four key MT tasks: alignment, decoding, evaluation, and reranking. Students brought a diverse set of techniques to the problems, including some novel solutions which performed remarkably well. A surprising and exciting outcome was that student solutions or their combinations fared competitively on some tasks, demonstrating that even newcomers to the field can help improve the state-of-the-art on hard NLP problems while simultaneously learning a great deal. The problems, baseline code, and results are freely available.},
  author = {Lopez, Adam and
    Post, Matt and
    Callison-Burch, Chris and
    Weese, Jonathan and
    Ganitkevitch, Juri and
    Ahmidi, Narges and
    Buzek, Olivia and
    Hanson, Leah and
    Jamil, Beenish and
    Lee, Matthias and
    Lin, Ya-Ting and
    Pao, Henry and
    Rivera, Fatima and
    Shahriyari, Leili and
    Sinha, Debu and
    Teichert, Adam and
    Wampler, Stephen and
    Weinberger, Michael and
    Xu, Daguang and
    Yang, Lin and
    Zhao, Shang},
  doi = {10.1162/tacl_a_00218},
  journal = {Transactions of the Association for Computational Linguistics},
  pages = {165--178},
  title = {Learning to translate with products of novices: a suite of open-ended challenge problems for teaching {MT}},
  url = {https://www.aclweb.org/anthology/Q13-1014},
  volume = {1},
  year = {2013},
}