Language Models are Unsupervised Multitask Learners. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. 2019.
Abstract: Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset, matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
@inproceedings{radford2019,
title = {Language {Models} are {Unsupervised} {Multitask} {Learners}},
url = {https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe},
abstract = {Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset, matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.},
language = {en},
urldate = {2023-02-02},
author = {Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year = {2019},
}
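
The zero-shot transfer setup the abstract describes (conditioning the language model on a document plus a question and reading the generated continuation off as the answer) can be sketched with the publicly released GPT-2 weights. The snippet below is a minimal illustration using the Hugging Face transformers library, not the authors' original code; the passage, question, prompt format, and decoding settings are illustrative assumptions.

# Minimal sketch of zero-shot question answering by conditioning a language
# model on a passage plus a question, as described in the abstract.
# Uses the public GPT-2 checkpoint via Hugging Face transformers (not the
# authors' original code); prompt wording and decoding settings are assumptions.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # 124M-parameter checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

passage = (
    "The Transformer architecture was introduced in 2017 and relies entirely "
    "on attention mechanisms, dispensing with recurrence and convolutions."
)
# CoQA-style conditioning: document, then a question, then an answer cue.
prompt = f"{passage}\nQ: When was the Transformer architecture introduced?\nA:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,                        # greedy decoding for a deterministic sketch
    pad_token_id=tokenizer.eos_token_id,
)
# Everything generated after the prompt is taken as the zero-shot answer.
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])
print(answer.split("\n")[0].strip())

The same conditioning pattern underlies the other zero-shot tasks reported in the paper; only the prompt changes (for example, appending "TL;DR:" to a document to elicit a summary).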
{"_id":"AM8kKCLoKt8CogCJ3","bibbaseid":"radford-wu-child-luan-amodei-sutskever-languagemodelsareunsupervisedmultitasklearners-2019","authorIDs":[],"author_short":["Radford, A.","Wu, J.","Child, R.","Luan, D.","Amodei, D.","Sutskever, I."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","title":"Language Models are Unsupervised Multitask Learners","shorttitle":"语言模型是无监督的多任务学习者","url":"https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe","abstract":"Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations. 【摘要翻译】自然语言处理任务,如问题解答、机器翻译、阅读理解和摘要,通常是在特定任务数据集上通过监督学习来完成的。我们的研究表明,当语言模型在由数百万网页组成的新数据集 WebText 上进行训练时,无需任何明确的监督即可开始学习这些任务。当以文档加问题为条件时,语言模型生成的答案在 CoQA 数据集上达到 55 F1,在不使用 127,000 多个训练示例的情况下,与 4 个基准系统中 3 个系统的性能相当或超过。语言模型的容量对于零镜头任务转移的成功至关重要,增加语言模型的容量可以以对数线性的方式提高不同任务的性能。我们最大的模型 GPT-2 是一个 1.5B 参数的转换器,它在零点测试环境下的 8 个测试语言建模数据集中的 7 个数据集上取得了最先进的结果,但对 WebText 的适应性仍然较差。该模型的样本反映了这些改进,并包含连贯的文本段落。这些发现为构建语言处理系统指明了一条大有可为的道路,该系统可从自然出现的演示中学习执行任务。","language":"en","urldate":"2023-02-02","author":[{"propositions":[],"lastnames":["Radford"],"firstnames":["Alec"],"suffixes":[]},{"propositions":[],"lastnames":["Wu"],"firstnames":["Jeff"],"suffixes":[]},{"propositions":[],"lastnames":["Child"],"firstnames":["Rewon"],"suffixes":[]},{"propositions":[],"lastnames":["Luan"],"firstnames":["D."],"suffixes":[]},{"propositions":[],"lastnames":["Amodei"],"firstnames":["Dario"],"suffixes":[]},{"propositions":[],"lastnames":["Sutskever"],"firstnames":["Ilya"],"suffixes":[]}],"year":"2019","note":"🏷️ /unread","keywords":"/unread","bibtex":"@inproceedings{radford2019,\n\ttitle = {Language {Models} are {Unsupervised} {Multitask} {Learners}},\n\tshorttitle = {语言模型是无监督的多任务学习者},\n\turl = {https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe},\n\tabstract = {Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. 
When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.\n\n【摘要翻译】自然语言处理任务,如问题解答、机器翻译、阅读理解和摘要,通常是在特定任务数据集上通过监督学习来完成的。我们的研究表明,当语言模型在由数百万网页组成的新数据集 WebText 上进行训练时,无需任何明确的监督即可开始学习这些任务。当以文档加问题为条件时,语言模型生成的答案在 CoQA 数据集上达到 55 F1,在不使用 127,000 多个训练示例的情况下,与 4 个基准系统中 3 个系统的性能相当或超过。语言模型的容量对于零镜头任务转移的成功至关重要,增加语言模型的容量可以以对数线性的方式提高不同任务的性能。我们最大的模型 GPT-2 是一个 1.5B 参数的转换器,它在零点测试环境下的 8 个测试语言建模数据集中的 7 个数据集上取得了最先进的结果,但对 WebText 的适应性仍然较差。该模型的样本反映了这些改进,并包含连贯的文本段落。这些发现为构建语言处理系统指明了一条大有可为的道路,该系统可从自然出现的演示中学习执行任务。},\n\tlanguage = {en},\n\turldate = {2023-02-02},\n\tauthor = {Radford, Alec and Wu, Jeff and Child, Rewon and Luan, D. and Amodei, Dario and Sutskever, Ilya},\n\tyear = {2019},\n\tnote = {🏷️ /unread},\n\tkeywords = {/unread},\n}\n\n","author_short":["Radford, A.","Wu, J.","Child, R.","Luan, D.","Amodei, D.","Sutskever, I."],"key":"radford2019","id":"radford2019","bibbaseid":"radford-wu-child-luan-amodei-sutskever-languagemodelsareunsupervisedmultitasklearners-2019","role":"author","urls":{"Paper":"https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe"},"keyword":["/unread"],"metadata":{"authorlinks":{}},"downloads":0},"bibtype":"inproceedings","biburl":"https://api.zotero.org/groups/2386895/collections/7PPRTB2H/items?format=bibtex&limit=100","creationDate":"2020-01-06T20:16:55.588Z","downloads":0,"keywords":["/unread"],"search_terms":["language","models","unsupervised","multitask","learners","radford","wu","child","luan","amodei","sutskever"],"title":"Language Models are Unsupervised Multitask Learners","year":2019,"dataSources":["okYcdTpf4JJ2zkj7A","u8q5uny4m5jJL9RcX","Wsv2bQ4jPuc7qme8R","znj7izS5PeehdLR3G"]}