Know Thy Strengths: Comprehensive Dialogue State Tracking Diagnostics. Cho, H., Sankar, C., Lin, C., Sadagopan, K., Shayandeh, S., Celikyilmaz, A., May, J., & Beirami, A. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5345–5359, Abu Dhabi, United Arab Emirates, December, 2022. Association for Computational Linguistics.
Recent works that revealed the vulnerability of dialogue state tracking (DST) models to distributional shifts have made holistic comparisons on robustness and qualitative analyses increasingly important for understanding their relative performance. We present our findings from standardized and comprehensive DST diagnoses, which have previously been sparse and uncoordinated, using our toolkit, CheckDST, a collection of robustness tests and failure mode analytics. We discover that different classes of DST models have clear strengths and weaknesses, where generation models are more promising for handling language variety while span-based classification models are more robust to unseen entities. Prompted by this discovery, we also compare checkpoints from the same model and find that the standard practice of selecting checkpoints using validation loss/accuracy is prone to overfitting and each model class has distinct patterns of failure. Lastly, we demonstrate how our diagnoses motivate a pre-finetuning procedure with non-dialogue data that offers comprehensive improvements to generation models by alleviating the impact of distributional shifts through transfer learning.
@inproceedings{cho-etal-2022-know,
    title = "Know Thy Strengths: Comprehensive Dialogue State Tracking Diagnostics",
    author = "Cho, Hyundong  and
      Sankar, Chinnadhurai  and
      Lin, Christopher  and
      Sadagopan, Kaushik  and
      Shayandeh, Shahin  and
      Celikyilmaz, Asli  and
      May, Jonathan  and
      Beirami, Ahmad",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-emnlp.391",
    pages = "5345--5359",
    abstract = "Recent works that revealed the vulnerability of dialogue state tracking (DST) models to distributional shifts have made holistic comparisons on robustness and qualitative analyses increasingly important for understanding their relative performance. We present our findings from standardized and comprehensive DST diagnoses, which have previously been sparse and uncoordinated, using our toolkit, CheckDST, a collection of robustness tests and failure mode analytics. We discover that different classes of DST models have clear strengths and weaknesses, where generation models are more promising for handling language variety while span-based classification models are more robust to unseen entities. Prompted by this discovery, we also compare checkpoints from the same model and find that the standard practice of selecting checkpoints using validation loss/accuracy is prone to overfitting and each model class has distinct patterns of failure. Lastly, we demonstrate how our diagnoses motivate a pre-finetuning procedure with non-dialogue data that offers comprehensive improvements to generation models by alleviating the impact of distributional shifts through transfer learning.",
}