Continual Dialogue State Tracking via Example-Guided Question Answering. Cho, H., Madotto, A., Lin, Z., Chandu, K., Kottur, S., Xu, J., May, J., & Sankar, C. In Bouamor, H., Pino, J., & Bali, K., editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3873–3886, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.235. Abstract: Dialogue systems are frequently updated to accommodate new services, but naively updating them by continually training with data for new services results in diminishing performance on previously learnt services. Motivated by the insight that dialogue state tracking (DST), a crucial component of dialogue systems that estimates the user's goal as a conversation proceeds, is a simple natural language understanding task, we propose reformulating it as a bundle of granular example-guided question answering tasks to minimize the task shift between services and thus benefit continual learning. Our approach alleviates service-specific memorization and teaches a model to contextualize the given question and example to extract the necessary information from the conversation. We find that a model with just 60M parameters can achieve a significant boost by learning to learn from in-context examples retrieved by a retriever trained to identify turns with similar dialogue state changes. Combining our method with dialogue-level memory replay, our approach attains state-of-the-art performance on DST continual learning metrics without relying on any complex regularization or parameter expansion methods.
@inproceedings{cho-etal-2023-continual,
title = "Continual Dialogue State Tracking via Example-Guided Question Answering",
author = "Cho, Hyundong and
Madotto, Andrea and
Lin, Zhaojiang and
Chandu, Khyathi and
Kottur, Satwik and
Xu, Jing and
May, Jonathan and
Sankar, Chinnadhurai",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.235",
doi = "10.18653/v1/2023.emnlp-main.235",
pages = "3873--3886",
abstract = "Dialogue systems are frequently updated to accommodate new services, but naively updating them by continually training with data for new services results in diminishing performance on previously learnt services. Motivated by the insight that dialogue state tracking (DST), a crucial component of dialogue systems that estimates the user{'}s goal as a conversation proceeds, is a simple natural language understanding task, we propose reformulating it as a bundle of granular example-guided question answering tasks to minimize the task shift between services and thus benefit continual learning. Our approach alleviates service-specific memorization and teaches a model to contextualize the given question and example to extract the necessary information from the conversation. We find that a model with just 60M parameters can achieve a significant boost by learning to learn from in-context examples retrieved by a retriever trained to identify turns with similar dialogue state changes. Combining our method with dialogue-level memory replay, our approach attains state-of-the-art performance on DST continual learning metrics without relying on any complex regularization or parameter expansion methods.",
}
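
A minimal sketch of the example-guided question answering reformulation described in the abstract, assuming a generic seq2seq model. The Turn fields, the prompt template, and the retriever/model interfaces are illustrative assumptions, not the authors' released code:

from dataclasses import dataclass

@dataclass
class Turn:
    history: str        # dialogue context up to the current turn
    slot_question: str  # one granular question per slot, e.g. "What is the departure city?"
    answer: str         # slot value to extract, e.g. "Boston"

def build_prompt(turn: Turn, example: Turn) -> str:
    """Form one granular QA instance: a retrieved in-context example with a
    similar dialogue state change, followed by the current turn's question."""
    return (
        f"Example dialogue: {example.history}\n"
        f"Question: {example.slot_question}\n"
        f"Answer: {example.answer}\n\n"
        f"Dialogue: {turn.history}\n"
        f"Question: {turn.slot_question}\n"
        f"Answer:"
    )

# Hypothetical usage: the retriever is trained to surface turns with similar
# dialogue state changes, and the model is a small seq2seq model (~60M params).
# example = retriever.most_similar(turn)
# value = seq2seq_model.generate(build_prompt(turn, example))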