Deconstructing NLG Evaluation: Evaluation Practices, Assumptions, and Their Implications. Zhou, K., Blodgett, S. L., Trischler, A., Daumé III, H., Suleman, K., & Olteanu, A. May 2022. arXiv:2205.06828 [cs]
Abstract: There are many ways to express similar things in text, which makes evaluating natural language generation (NLG) systems difficult. Compounding this difficulty is the need to assess varying quality criteria depending on the deployment setting. While the landscape of NLG evaluation has been well-mapped, practitioners’ goals, assumptions, and constraints—which inform decisions about what, when, and how to evaluate—are often partially or implicitly stated, or not stated at all. Combining a formative semi-structured interview study of NLG practitioners (N=18) with a survey study of a broader sample of practitioners (N=61), we surface goals, community practices, assumptions, and constraints that shape NLG evaluations, examining their implications and how they embody ethical considerations.
@misc{zhou_deconstructing_2022,
title = {Deconstructing {NLG} {Evaluation}: {Evaluation} {Practices}, {Assumptions}, and {Their} {Implications}},
shorttitle = {Deconstructing {NLG} {Evaluation}},
url = {http://arxiv.org/abs/2205.06828},
abstract = {There are many ways to express similar things in text, which makes evaluating natural language generation (NLG) systems difficult. Compounding this difficulty is the need to assess varying quality criteria depending on the deployment setting. While the landscape of NLG evaluation has been well-mapped, practitioners’ goals, assumptions, and constraints—which inform decisions about what, when, and how to evaluate—are often partially or implicitly stated, or not stated at all. Combining a formative semi-structured interview study of NLG practitioners (N=18) with a survey study of a broader sample of practitioners (N=61), we surface goals, community practices, assumptions, and constraints that shape NLG evaluations, examining their implications and how they embody ethical considerations.},
language = {en},
urldate = {2024-01-26},
publisher = {arXiv},
author = {Zhou, Kaitlyn and Blodgett, Su Lin and Trischler, Adam and Daumé III, Hal and Suleman, Kaheer and Olteanu, Alexandra},
month = may,
year = {2022},
note = {arXiv:2205.06828 [cs]},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, NLP, fairness, gw\_abstracts},
}