Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. July 2021. arXiv:2107.13586 [cs]. This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots; the language model is then used to probabilistically fill in the unfilled information, yielding a final string x̂ from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only provide a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website (http://pretrain.nlpedia.ai/) including a constantly-updated survey and paper list.
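As a concrete illustration of the pipeline the abstract describes, here is a minimal sketch of prompt-based prediction. It assumes the Hugging Face transformers library with bert-base-uncased as the masked language model; the sentiment template and the verbalizer mapping are hypothetical choices made for illustration, not the paper's own setup.

from transformers import pipeline

# Masked language model used to fill the unfilled slot in the prompt.
fill = pipeline("fill-mask", model="bert-base-uncased")

def prompt_predict(x: str) -> str:
    # Template: turn the raw input x into a prompt x' with one unfilled slot.
    prompt = f"{x} Overall, it was a [MASK] movie."
    # The LM probabilistically fills the slot, yielding candidate strings x̂.
    candidates = fill(prompt)
    # Verbalizer (illustrative): map the filled token back to a label y.
    verbalizer = {"great": "positive", "good": "positive",
                  "bad": "negative", "terrible": "negative"}
    for c in candidates:
        token = c["token_str"].strip()
        if token in verbalizer:
            return verbalizer[token]
    return "unknown"

print(prompt_predict("I could not stop watching."))  # e.g. "positive"

Here the template plays the role of the prompting function that maps x to x', and the verbalizer derives the final output y from the filled string x̂, matching the x → x' → x̂ → y chain in the abstract.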
@misc{liu_pre-train_2021,
title = {Pre-train, {Prompt}, and {Predict}: {A} {Systematic} {Survey} of {Prompting} {Methods} in {Natural} {Language} {Processing}},
shorttitle = {Pre-train, {Prompt}, and {Predict}},
url = {http://arxiv.org/abs/2107.13586},
doi = {10.48550/arXiv.2107.13586},
	abstract = {This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y{\textbar}x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string \^{x}, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website http://pretrain.nlpedia.ai/ including a constantly-updated survey and paper list.},
urldate = {2023-09-24},
publisher = {arXiv},
author = {Liu, Pengfei and Yuan, Weizhe and Fu, Jinlan and Jiang, Zhengbao and Hayashi, Hiroaki and Neubig, Graham},
month = jul,
year = {2021},
note = {arXiv:2107.13586 [cs]},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning},
}
{"_id":"aTtg5ohQjxLtbPFgd","bibbaseid":"liu-yuan-fu-jiang-hayashi-neubig-pretrainpromptandpredictasystematicsurveyofpromptingmethodsinnaturallanguageprocessing-2021","author_short":["Liu, P.","Yuan, W.","Fu, J.","Jiang, Z.","Hayashi, H.","Neubig, G."],"bibdata":{"bibtype":"misc","type":"misc","title":"Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing","shorttitle":"Pre-train, Prompt, and Predict","url":"http://arxiv.org/abs/2107.13586","doi":"10.48550/arXiv.2107.13586","abstract":"This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub \"prompt-based learning\". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y\\textbarx), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g.the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website http://pretrain.nlpedia.ai/ including constantly-updated survey, and paperlist.","urldate":"2023-09-24","publisher":"arXiv","author":[{"propositions":[],"lastnames":["Liu"],"firstnames":["Pengfei"],"suffixes":[]},{"propositions":[],"lastnames":["Yuan"],"firstnames":["Weizhe"],"suffixes":[]},{"propositions":[],"lastnames":["Fu"],"firstnames":["Jinlan"],"suffixes":[]},{"propositions":[],"lastnames":["Jiang"],"firstnames":["Zhengbao"],"suffixes":[]},{"propositions":[],"lastnames":["Hayashi"],"firstnames":["Hiroaki"],"suffixes":[]},{"propositions":[],"lastnames":["Neubig"],"firstnames":["Graham"],"suffixes":[]}],"month":"July","year":"2021","note":"arXiv:2107.13586 [cs]","keywords":"Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning","bibtex":"@misc{liu_pre-train_2021,\n\ttitle = {Pre-train, {Prompt}, and {Predict}: {A} {Systematic} {Survey} of {Prompting} {Methods} in {Natural} {Language} {Processing}},\n\tshorttitle = {Pre-train, {Prompt}, and {Predict}},\n\turl = {http://arxiv.org/abs/2107.13586},\n\tdoi = {10.48550/arXiv.2107.13586},\n\tabstract = {This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub \"prompt-based learning\". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y{\\textbar}x), prompt-based learning is based on language models that model the probability of text directly. 
To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g.the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website http://pretrain.nlpedia.ai/ including constantly-updated survey, and paperlist.},\n\turldate = {2023-09-24},\n\tpublisher = {arXiv},\n\tauthor = {Liu, Pengfei and Yuan, Weizhe and Fu, Jinlan and Jiang, Zhengbao and Hayashi, Hiroaki and Neubig, Graham},\n\tmonth = jul,\n\tyear = {2021},\n\tnote = {arXiv:2107.13586 [cs]},\n\tkeywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning},\n}\n\n","author_short":["Liu, P.","Yuan, W.","Fu, J.","Jiang, Z.","Hayashi, H.","Neubig, G."],"key":"liu_pre-train_2021","id":"liu_pre-train_2021","bibbaseid":"liu-yuan-fu-jiang-hayashi-neubig-pretrainpromptandpredictasystematicsurveyofpromptingmethodsinnaturallanguageprocessing-2021","role":"author","urls":{"Paper":"http://arxiv.org/abs/2107.13586"},"keyword":["Computer Science - Artificial Intelligence","Computer Science - Computation and Language","Computer Science - Machine Learning"],"metadata":{"authorlinks":{}}},"bibtype":"misc","biburl":"https://api.zotero.org/groups/5201095/items?key=7qHIy4DrYxGHcP2M7pZXOL4Q&format=bibtex&limit=100","dataSources":["SYqcwHeTFu9kg8TXh","sxFXNFjFDR2W85aTk"],"keywords":["computer science - artificial intelligence","computer science - computation and language","computer science - machine learning"],"search_terms":["pre","train","prompt","predict","systematic","survey","prompting","methods","natural","language","processing","liu","yuan","fu","jiang","hayashi","neubig"],"title":"Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing","year":2021}