No that's not what I meant: Handling Third Position Repair in Conversational Question Answering. Balaraman, V., Eshghi, A., Konstas, I., & Papaioannou, I. In Stoyanchev, S., Joty, S., Schlangen, D., Dusek, O., Kennington, C., & Alikhani, M., editors, Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 562–571, Prague, Czechia, September 2023. Association for Computational Linguistics.
The ability to handle miscommunication is crucial to robust and faithful conversational AI. People usually deal with miscommunication immediately as they detect it, using highly systematic interactional mechanisms called repair. One important type of repair is Third Position Repair (TPR) whereby a speaker is initially misunderstood but then corrects the misunderstanding as it becomes apparent after the addressee's erroneous response. Here, we collect and publicly release REPAIR-QA, the first large dataset of TPRs in a conversational question answering (QA) setting. The data comprises the TPR turns, corresponding dialogue contexts, and candidate repairs of the original turn for execution of TPRs. We demonstrate the usefulness of the data by training and evaluating strong baseline models for executing TPRs. For stand-alone TPR execution, we perform both automatic and human evaluations on a fine-tuned T5 model, as well as OpenAI's GPT-3 LLMs. Additionally, we extrinsically evaluate the LLMs' TPR processing capabilities in the downstream conversational QA task. The results indicate poor out-of-the-box performance on TPRs by the GPT-3 models, which then significantly improves when exposed to REPAIR-QA.
@inproceedings{Balaraman.etal23,
   abstract = {The ability to handle miscommunication is crucial to robust and faithful conversational AI. People usually deal with miscommunication immediately as they detect it, using highly systematic interactional mechanisms called repair. One important type of repair is Third Position Repair (TPR) whereby a speaker is initially misunderstood but then corrects the misunderstanding as it becomes apparent after the addressee's erroneous response. Here, we collect and publicly release REPAIR-QA, the first large dataset of TPRs in a conversational question answering (QA) setting. The data comprises the TPR turns, corresponding dialogue contexts, and candidate repairs of the original turn for execution of TPRs. We demonstrate the usefulness of the data by training and evaluating strong baseline models for executing TPRs. For stand-alone TPR execution, we perform both automatic and human evaluations on a fine-tuned T5 model, as well as OpenAI's GPT-3 LLMs. Additionally, we extrinsically evaluate the LLMs' TPR processing capabilities in the downstream conversational QA task. The results indicate poor out-of-the-box performance on TPRs by the GPT-3 models, which then significantly improves when exposed to REPAIR-QA.},
   author = {Vevake Balaraman and Arash Eshghi and Ioannis Konstas and Ioannis Papaioannou},
   address = {Prague, Czechia},
   doi = {10.18653/v1/2023.sigdial-1.52},
   editor = {Svetlana Stoyanchev and Shafiq Joty and David Schlangen and Ondrej Dusek and Casey Kennington and Malihe Alikhani},
   booktitle = {Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue},
   month = sep,
   pages = {562--571},
   publisher = {Association for Computational Linguistics},
   title = {No that's not what I meant: Handling Third Position Repair in Conversational Question Answering},
   url = {https://aclanthology.org/2023.sigdial-1.52},
   year = {2023},
}
