Yin, X., Weischedel, R., & May, J. (2020). Learning to Generalize for Sequential Decision Making. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3046–3063, Online. Association for Computational Linguistics.

Abstract: We consider problems of making sequences of decisions to accomplish tasks, interacting via the medium of language. These problems are often tackled with reinforcement learning approaches. We find that these models do not generalize well when applied to novel task domains. However, the large amount of computation necessary to adequately train and explore the search space of sequential decision making, under a reinforcement learning paradigm, precludes the inclusion of large contextualized language models, which might otherwise enable the desired generalization ability. We introduce a teacher-student imitation learning methodology and a means of converting a reinforcement learning model into a natural language understanding model. Together, these methodologies enable the introduction of contextualized language models into the sequential decision making problem space. We show that models can learn faster and generalize more, leveraging both the imitation learning and the reformulation. Our models exceed teacher performance on various held-out decision problems, by up to 7% on in-domain problems and 24% on out-of-domain problems.
@inproceedings{yin-etal-2020-learning,
title = "Learning to Generalize for Sequential Decision Making",
author = "Yin, Xusen and
Weischedel, Ralph and
May, Jonathan",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.273",
pages = "3046--3063",
abstract = "We consider problems of making sequences of decisions to accomplish tasks, interacting via the medium of language. These problems are often tackled with reinforcement learning approaches. We find that these models do not generalize well when applied to novel task domains. However, the large amount of computation necessary to adequately train and explore the search space of sequential decision making, under a reinforcement learning paradigm, precludes the inclusion of large contextualized language models, which might otherwise enable the desired generalization ability. We introduce a teacher-student imitation learning methodology and a means of converting a reinforcement learning model into a natural language understanding model. Together, these methodologies enable the introduction of contextualized language models into the sequential decision making problem space. We show that models can learn faster and generalize more, leveraging both the imitation learning and the reformulation. Our models exceed teacher performance on various held-out decision problems, by up to 7{\%} on in-domain problems and 24{\%} on out-of-domain problems.",
}