AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators. He, X., Lin, Z., Gong, Y., Jin, A., Zhang, H., Lin, C., Jiao, J., Yiu, S. M., Duan, N., & Chen, W. April 2024. arXiv:2303.16854 [cs]. Paper: http://arxiv.org/abs/2303.16854. doi: 10.48550/arXiv.2303.16854

Abstract: Many natural language processing (NLP) tasks rely on labeled data to train machine learning models with high performance. However, data annotation is time-consuming and expensive, especially when the task involves a large amount of data or requires specialized domains. Recently, GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks. In this paper, we first claim that large language models (LLMs), such as GPT-3.5, can serve as an excellent crowdsourced annotator when provided with sufficient guidance and demonstrated examples. Accordingly, we propose AnnoLLM, an annotation system powered by LLMs, which adopts a two-step approach, explain-then-annotate. Concretely, we first prompt LLMs to provide explanations for why the specific ground truth answer/label was assigned for a given example. Then, we construct the few-shot chain-of-thought prompt with the self-generated explanation and employ it to annotate the unlabeled data with LLMs. Our experiment results on three tasks, including user input and keyword relevance assessment, BoolQ, and WiC, demonstrate that AnnoLLM surpasses or performs on par with crowdsourced annotators. Furthermore, we build the first conversation-based information retrieval dataset employing AnnoLLM. This dataset is designed to facilitate the development of retrieval models capable of retrieving pertinent documents for conversational text. Human evaluation has validated the dataset's high quality.
@misc{he_annollm_2024,
title = {{AnnoLLM}: {Making} {Large} {Language} {Models} to {Be} {Better} {Crowdsourced} {Annotators}},
shorttitle = {{AnnoLLM}},
url = {http://arxiv.org/abs/2303.16854},
doi = {10.48550/arXiv.2303.16854},
abstract = {Many natural language processing (NLP) tasks rely on labeled data to train machine learning models with high performance. However, data annotation is time-consuming and expensive, especially when the task involves a large amount of data or requires specialized domains. Recently, GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks. In this paper, we first claim that large language models (LLMs), such as GPT-3.5, can serve as an excellent crowdsourced annotator when provided with sufficient guidance and demonstrated examples. Accordingly, we propose AnnoLLM, an annotation system powered by LLMs, which adopts a two-step approach, explain-then-annotate. Concretely, we first prompt LLMs to provide explanations for why the specific ground truth answer/label was assigned for a given example. Then, we construct the few-shot chain-of-thought prompt with the self-generated explanation and employ it to annotate the unlabeled data with LLMs. Our experiment results on three tasks, including user input and keyword relevance assessment, BoolQ, and WiC, demonstrate that AnnoLLM surpasses or performs on par with crowdsourced annotators. Furthermore, we build the first conversation-based information retrieval dataset employing AnnoLLM. This dataset is designed to facilitate the development of retrieval models capable of retrieving pertinent documents for conversational text. Human evaluation has validated the dataset's high quality.},
urldate = {2024-04-24},
publisher = {arXiv},
author = {He, Xingwei and Lin, Zhenghao and Gong, Yeyun and Jin, A.-Long and Zhang, Hang and Lin, Chen and Jiao, Jian and Yiu, Siu Ming and Duan, Nan and Chen, Weizhu},
month = apr,
year = {2024},
note = {arXiv:2303.16854 [cs]},
}
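
The abstract's two-step explain-then-annotate procedure lends itself to a short illustration. Below is a minimal sketch in Python, assuming an LLM reachable through a generic chat(prompt) helper; the helper, the prompt wording, and the WiC-style demonstration are illustrative assumptions, not taken from the paper.

# Minimal sketch of explain-then-annotate (illustrative only; the chat()
# placeholder, the prompt wording, and the WiC-style example are assumptions).

def chat(prompt: str) -> str:
    """Placeholder for a call to an LLM (e.g., a GPT-3.5-series chat endpoint)."""
    raise NotImplementedError("plug in your LLM client here")

# Step 1: for a labeled demonstration example, ask the LLM to explain why the
# ground-truth label is correct.
demo_input = (
    'Word: "bank"\n'
    'Sentence 1: "She sat on the bank of the river."\n'
    'Sentence 2: "He deposited the check at the bank."'
)
demo_label = "No"  # the target word is not used with the same meaning
explanation = chat(
    f"{demo_input}\n"
    f"The correct answer is: {demo_label}.\n"
    "Briefly explain why this answer is correct."
)

# Step 2: build a few-shot chain-of-thought prompt from the self-generated
# explanation and use it to annotate unlabeled data.
def annotate(unlabeled_input: str) -> str:
    prompt = (
        "Decide whether the target word is used with the same meaning in both "
        "sentences. Reason step by step, then answer Yes or No.\n\n"
        f"Example:\n{demo_input}\n"
        f"Reasoning: {explanation}\n"
        f"Answer: {demo_label}\n\n"
        f"Now annotate:\n{unlabeled_input}\n"
        "Reasoning:"
    )
    return chat(prompt)

Per the abstract, the same pattern is evaluated on three tasks (user input and keyword relevance assessment, BoolQ, and WiC); only the task instruction and the demonstration format would change.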
{"_id":"WwM8Trr6yvtrtAaCM","bibbaseid":"he-lin-gong-jin-zhang-lin-jiao-yiu-etal-annollmmakinglargelanguagemodelstobebettercrowdsourcedannotators-2024","author_short":["He, X.","Lin, Z.","Gong, Y.","Jin, A.","Zhang, H.","Lin, C.","Jiao, J.","Yiu, S. M.","Duan, N.","Chen, W."],"bibdata":{"bibtype":"misc","type":"misc","title":"AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators","shorttitle":"AnnoLLM","url":"http://arxiv.org/abs/2303.16854","doi":"10.48550/arXiv.2303.16854","abstract":"Many natural language processing (NLP) tasks rely on labeled data to train machine learning models with high performance. However, data annotation is time-consuming and expensive, especially when the task involves a large amount of data or requires specialized domains. Recently, GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks. In this paper, we first claim that large language models (LLMs), such as GPT-3.5, can serve as an excellent crowdsourced annotator when provided with sufficient guidance and demonstrated examples. Accordingly, we propose AnnoLLM, an annotation system powered by LLMs, which adopts a two-step approach, explain-then-annotate. Concretely, we first prompt LLMs to provide explanations for why the specific ground truth answer/label was assigned for a given example. Then, we construct the few-shot chain-of-thought prompt with the self-generated explanation and employ it to annotate the unlabeled data with LLMs. Our experiment results on three tasks, including user input and keyword relevance assessment, BoolQ, and WiC, demonstrate that AnnoLLM surpasses or performs on par with crowdsourced annotators. Furthermore, we build the first conversation-based information retrieval dataset employing AnnoLLM. This dataset is designed to facilitate the development of retrieval models capable of retrieving pertinent documents for conversational text. Human evaluation has validated the dataset's high quality.","urldate":"2024-04-24","publisher":"arXiv","author":[{"propositions":[],"lastnames":["He"],"firstnames":["Xingwei"],"suffixes":[]},{"propositions":[],"lastnames":["Lin"],"firstnames":["Zhenghao"],"suffixes":[]},{"propositions":[],"lastnames":["Gong"],"firstnames":["Yeyun"],"suffixes":[]},{"propositions":[],"lastnames":["Jin"],"firstnames":["A.-Long"],"suffixes":[]},{"propositions":[],"lastnames":["Zhang"],"firstnames":["Hang"],"suffixes":[]},{"propositions":[],"lastnames":["Lin"],"firstnames":["Chen"],"suffixes":[]},{"propositions":[],"lastnames":["Jiao"],"firstnames":["Jian"],"suffixes":[]},{"propositions":[],"lastnames":["Yiu"],"firstnames":["Siu","Ming"],"suffixes":[]},{"propositions":[],"lastnames":["Duan"],"firstnames":["Nan"],"suffixes":[]},{"propositions":[],"lastnames":["Chen"],"firstnames":["Weizhu"],"suffixes":[]}],"month":"April","year":"2024","note":"arXiv:2303.16854 [cs]","bibtex":"@misc{he_annollm_2024,\n\ttitle = {{AnnoLLM}: {Making} {Large} {Language} {Models} to {Be} {Better} {Crowdsourced} {Annotators}},\n\tshorttitle = {{AnnoLLM}},\n\turl = {http://arxiv.org/abs/2303.16854},\n\tdoi = {10.48550/arXiv.2303.16854},\n\tabstract = {Many natural language processing (NLP) tasks rely on labeled data to train machine learning models with high performance. However, data annotation is time-consuming and expensive, especially when the task involves a large amount of data or requires specialized domains. 
Recently, GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks. In this paper, we first claim that large language models (LLMs), such as GPT-3.5, can serve as an excellent crowdsourced annotator when provided with sufficient guidance and demonstrated examples. Accordingly, we propose AnnoLLM, an annotation system powered by LLMs, which adopts a two-step approach, explain-then-annotate. Concretely, we first prompt LLMs to provide explanations for why the specific ground truth answer/label was assigned for a given example. Then, we construct the few-shot chain-of-thought prompt with the self-generated explanation and employ it to annotate the unlabeled data with LLMs. Our experiment results on three tasks, including user input and keyword relevance assessment, BoolQ, and WiC, demonstrate that AnnoLLM surpasses or performs on par with crowdsourced annotators. Furthermore, we build the first conversation-based information retrieval dataset employing AnnoLLM. This dataset is designed to facilitate the development of retrieval models capable of retrieving pertinent documents for conversational text. Human evaluation has validated the dataset's high quality.},\n\turldate = {2024-04-24},\n\tpublisher = {arXiv},\n\tauthor = {He, Xingwei and Lin, Zhenghao and Gong, Yeyun and Jin, A.-Long and Zhang, Hang and Lin, Chen and Jiao, Jian and Yiu, Siu Ming and Duan, Nan and Chen, Weizhu},\n\tmonth = apr,\n\tyear = {2024},\n\tnote = {arXiv:2303.16854 [cs]},\n}\n\n","author_short":["He, X.","Lin, Z.","Gong, Y.","Jin, A.","Zhang, H.","Lin, C.","Jiao, J.","Yiu, S. M.","Duan, N.","Chen, W."],"key":"he_annollm_2024","id":"he_annollm_2024","bibbaseid":"he-lin-gong-jin-zhang-lin-jiao-yiu-etal-annollmmakinglargelanguagemodelstobebettercrowdsourcedannotators-2024","role":"author","urls":{"Paper":"http://arxiv.org/abs/2303.16854"},"metadata":{"authorlinks":{}},"downloads":0},"bibtype":"misc","biburl":"https://bibbase.org/zotero/andreasmartin","dataSources":["jurZeGzSpYdkQ8rm4"],"keywords":[],"search_terms":["annollm","making","large","language","models","better","crowdsourced","annotators","he","lin","gong","jin","zhang","lin","jiao","yiu","duan","chen"],"title":"AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators","year":2024}