Calibrating Structured Output Predictors for Natural Language Processing. Jagannatha, A. & Yu, H. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2078–2092, July 2020. NIHMSID: NIHMS1661932. Paper: https://aclanthology.org/2020.acl-main.188 DOI: 10.18653/v1/2020.acl-main.188

Abstract: We address the problem of calibrating prediction confidence for output entities of interest in natural language processing (NLP) applications. It is important that NLP applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions, especially if the system is to be deployed in a safety-critical domain such as healthcare. However, the output space of such structured prediction models is often too large to adapt binary or multi-class calibration methods directly. In this study, we propose a general calibration scheme for output entities of interest in neural-network based structured prediction models. Our proposed method can be used with any binary class calibration scheme and a neural network model. Additionally, we show that our calibration method can also be used as an uncertainty-aware, entity-specific decoding step to improve the performance of the underlying model at no additional training cost or data requirements. We show that our method outperforms current calibration techniques for named-entity-recognition, part-of-speech and question answering. We also improve our model's performance from our decoding step across several tasks and benchmark datasets. Our method improves the calibration and model performance on out-of-domain test scenarios as well.
@inproceedings{jagannatha_calibrating_2020,
title = {Calibrating {Structured} {Output} {Predictors} for {Natural} {Language} {Processing}.},
url = {https://aclanthology.org/2020.acl-main.188},
doi = {10.18653/v1/2020.acl-main.188},
abstract = {We address the problem of calibrating prediction confidence for output entities of interest in natural language processing (NLP) applications. It is important that NLP applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions, especially if the system is to be deployed in a safety-critical domain such as healthcare. However, the output space of such structured prediction models is often too large to adapt binary or multi-class calibration methods directly. In this study, we propose a general calibration scheme for output entities of interest in neural-network based structured prediction models. Our proposed method can be used with any binary class calibration scheme and a neural network model. Additionally, we show that our calibration method can also be used as an uncertainty-aware, entity-specific decoding step to improve the performance of the underlying model at no additional training cost or data requirements. We show that our method outperforms current calibration techniques for named-entity-recognition, part-of-speech and question answering. We also improve our model's performance from our decoding step across several tasks and benchmark datasets. Our method improves the calibration and model performance on out-of-domain test scenarios as well.},
booktitle = {Proceedings of the 58th {Annual} {Meeting} of the {Association} for {Computational} {Linguistics} ({ACL})},
author = {Jagannatha, Abhyuday and Yu, Hong},
month = jul,
year = {2020},
pmcid = {PMC7890517},
pmid = {33612961},
note = {NIHMSID: NIHMS1661932},
pages = {2078--2092},
}
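
The abstract's central idea, wrapping a binary calibrator around the confidence scores a structured predictor assigns to output entities, can be illustrated with a short sketch. The Python below is not the authors' implementation: the toy confidence values, the correctness labels, and the choice of scikit-learn's isotonic regression as the binary calibration scheme are all illustrative assumptions. It shows the general recipe the abstract describes: fit a binary calibrator on held-out entities labeled by whether each prediction was correct, then use the calibrated probabilities at test time, optionally as an uncertainty-aware filter during decoding.

# Minimal sketch of entity-level confidence calibration in the spirit of
# Jagannatha & Yu (2020). NOT the paper's implementation; the data and the
# calibrator choice are illustrative assumptions.
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Held-out (dev) entities: the model's raw confidence for each predicted
# entity, and a binary label for whether that prediction was correct.
dev_confidences = np.array([0.95, 0.90, 0.85, 0.70, 0.60, 0.55, 0.40, 0.30])
dev_correct     = np.array([1,    1,    0,    1,    0,    1,    0,    0])

# Any binary calibration scheme can be plugged in here; isotonic
# regression is one common choice.
calibrator = IsotonicRegression(out_of_bounds="clip")
calibrator.fit(dev_confidences, dev_correct)

# At test time, map raw entity confidences to calibrated probabilities.
test_confidences = np.array([0.92, 0.65, 0.35])
calibrated = calibrator.predict(test_confidences)
print(calibrated)  # calibrated P(entity is correct) for each prediction

# The abstract's uncertainty-aware decoding step could then, for example,
# keep only entities whose calibrated confidence clears a threshold.
THRESHOLD = 0.5
kept = test_confidences[calibrated >= THRESHOLD]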
{"_id":"ZK9v25nqgpokJuv4H","bibbaseid":"jagannatha-yu-calibratingstructuredoutputpredictorsfornaturallanguageprocessing-2020","author_short":["Jagannatha, A.","Yu, H."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","title":"Calibrating Structured Output Predictors for Natural Language Processing.","volume":"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics","url":"https://aclanthology.org/2020.acl-main.188","doi":"10.18653/v1/2020.acl-main.188","abstract":"We address the problem of calibrating prediction confidence for output entities of interest in natural language processing (NLP) applications. It is important that NLP applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions, especially if the system is to be deployed in a safety-critical domain such as healthcare. However, the output space of such structured prediction models is often too large to adapt binary or multi-class calibration methods directly. In this study, we propose a general calibration scheme for output entities of interest in neural-network based structured prediction models. Our proposed method can be used with any binary class calibration scheme and a neural network model. Additionally, we show that our calibration method can also be used as an uncertainty-aware, entity-specific decoding step to improve the performance of the underlying model at no additional training cost or data requirements. We show that our method outperforms current calibration techniques for named-entity-recognition, part-of-speech and question answering. We also improve our model's performance from our decoding step across several tasks and benchmark datasets. Our method improves the calibration and model performance on out-of-domain test scenarios as well.","booktitle":"2020 Annual Conference of the Association for Computational Linguistics (ACL)","author":[{"propositions":[],"lastnames":["Jagannatha"],"firstnames":["Abhyuday"],"suffixes":[]},{"propositions":[],"lastnames":["Yu"],"firstnames":["Hong"],"suffixes":[]}],"month":"July","year":"2020","pmcid":"PMC7890517","pmid":"33612961","note":"NIHMSID: NIHMS1661932","pages":"2078–2092","bibtex":"@inproceedings{jagannatha_calibrating_2020,\n\ttitle = {Calibrating {Structured} {Output} {Predictors} for {Natural} {Language} {Processing}.},\n\tvolume = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},\n\turl = {https://aclanthology.org/2020.acl-main.188},\n\tdoi = {10.18653/v1/2020.acl-main.188},\n\tabstract = {We address the problem of calibrating prediction confidence for output entities of interest in natural language processing (NLP) applications. It is important that NLP applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions, especially if the system is to be deployed in a safety-critical domain such as healthcare. However, the output space of such structured prediction models is often too large to adapt binary or multi-class calibration methods directly. In this study, we propose a general calibration scheme for output entities of interest in neural-network based structured prediction models. Our proposed method can be used with any binary class calibration scheme and a neural network model. 
Additionally, we show that our calibration method can also be used as an uncertainty-aware, entity-specific decoding step to improve the performance of the underlying model at no additional training cost or data requirements. We show that our method outperforms current calibration techniques for named-entity-recognition, part-of-speech and question answering. We also improve our model's performance from our decoding step across several tasks and benchmark datasets. Our method improves the calibration and model performance on out-of-domain test scenarios as well.},\n\tbooktitle = {2020 {Annual} {Conference} of the {Association} for {Computational} {Linguistics} ({ACL})},\n\tauthor = {Jagannatha, Abhyuday and Yu, Hong},\n\tmonth = jul,\n\tyear = {2020},\n\tpmcid = {PMC7890517},\n\tpmid = {33612961},\n\tnote = {NIHMSID: NIHMS1661932},\n\tpages = {2078--2092},\n}\n\n","author_short":["Jagannatha, A.","Yu, H."],"key":"jagannatha_calibrating_2020","id":"jagannatha_calibrating_2020","bibbaseid":"jagannatha-yu-calibratingstructuredoutputpredictorsfornaturallanguageprocessing-2020","role":"author","urls":{"Paper":"https://aclanthology.org/2020.acl-main.188"},"metadata":{"authorlinks":{}},"html":""},"bibtype":"inproceedings","biburl":"http://fenway.cs.uml.edu/papers/pubs-all.bib","dataSources":["TqaA9miSB65nRfS5H"],"keywords":[],"search_terms":["calibrating","structured","output","predictors","natural","language","processing","jagannatha","yu"],"title":"Calibrating Structured Output Predictors for Natural Language Processing.","year":2020}