Decomposed Prompting: A Modular Approach for Solving Complex Tasks. Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., & Sabharwal, A. 2022. Publisher: arXiv. Version Number: 2
Abstract: Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual reasoning steps of the task themselves are hard to learn, especially when embedded in more complex tasks. To address this, we propose Decomposed Prompting, a new approach to solve complex tasks by decomposing them (via prompting) into simpler sub-tasks that can be delegated to a library of prompting-based LLMs dedicated to these sub-tasks. This modular structure allows each prompt to be optimized for its specific sub-task, further decomposed if necessary, and even easily replaced with more effective prompts, trained models, or symbolic functions if desired. We show that the flexibility and modularity of Decomposed Prompting allows it to outperform prior work on few-shot prompting using GPT3. On symbolic reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into even simpler solvable sub-tasks. When the complexity comes from the input length, we can recursively decompose the task into the same task but with smaller inputs. We also evaluate our approach on textual multi-step reasoning tasks: on long-context multi-hop QA task, we can more effectively teach the sub-tasks via our separate sub-tasks prompts; and on open-domain multi-hop QA, we can incorporate a symbolic information retrieval within our decomposition framework, leading to improved performance on both tasks. Datasets, Code and Prompts available at https://github.com/allenai/DecomP.
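The control flow the abstract describes — a top-level decomposer that routes a task to a library of sub-task handlers, any of which may itself be a prompt, a trained model, or a symbolic function — can be illustrated with a minimal sketch. All function names here are hypothetical; in the paper each handler is a few-shot prompted LLM, which this sketch stands in for with plain Python functions (a first-letter-concatenation task, one of the symbolic reasoning tasks studied in the paper):

```python
# Hypothetical sketch of Decomposed Prompting's modular structure.
# Each sub-task handler below would be a separately prompted LLM (or a
# symbolic function) in the actual framework; here they are plain Python
# stand-ins, so only the decomposition pattern is illustrated.

def split_words(question: str) -> list[str]:
    """Sub-task 1: split the input into words."""
    return question.split()

def first_letter(word: str) -> str:
    """Sub-task 2: solve one simple step (extract a first letter)."""
    return word[0]

def merge(letters: list[str]) -> str:
    """Sub-task 3: merge the per-step results into a final answer."""
    return "".join(letters)

def decomposer(question: str) -> str:
    """Top-level decomposer: delegates to the sub-task library.

    In the paper this routing is itself produced by a few-shot
    decomposition prompt, so any handler can be swapped out or
    further decomposed without touching the others.
    """
    words = split_words(question)
    letters = [first_letter(w) for w in words]
    return merge(letters)

print(decomposer("decomposed prompting works"))  # -> dpw
```

Because each handler is addressed independently, the recursive case in the abstract (long inputs) falls out naturally: the decomposer can call itself on smaller chunks of the input and merge the results.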
@article{khot_decomposed_2022,
title = {Decomposed {Prompting}: {A} {Modular} {Approach} for {Solving} {Complex} {Tasks}},
copyright = {Creative Commons Attribution 4.0 International},
shorttitle = {Decomposed {Prompting}},
url = {https://arxiv.org/abs/2210.02406},
doi = {10.48550/ARXIV.2210.02406},
abstract = {Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual reasoning steps of the task themselves are hard to learn, especially when embedded in more complex tasks. To address this, we propose Decomposed Prompting, a new approach to solve complex tasks by decomposing them (via prompting) into simpler sub-tasks that can be delegated to a library of prompting-based LLMs dedicated to these sub-tasks. This modular structure allows each prompt to be optimized for its specific sub-task, further decomposed if necessary, and even easily replaced with more effective prompts, trained models, or symbolic functions if desired. We show that the flexibility and modularity of Decomposed Prompting allows it to outperform prior work on few-shot prompting using GPT3. On symbolic reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into even simpler solvable sub-tasks. When the complexity comes from the input length, we can recursively decompose the task into the same task but with smaller inputs. We also evaluate our approach on textual multi-step reasoning tasks: on long-context multi-hop QA task, we can more effectively teach the sub-tasks via our separate sub-tasks prompts; and on open-domain multi-hop QA, we can incorporate a symbolic information retrieval within our decomposition framework, leading to improved performance on both tasks. Datasets, Code and Prompts available at https://github.com/allenai/DecomP.},
urldate = {2024-03-05},
author = {Khot, Tushar and Trivedi, Harsh and Finlayson, Matthew and Fu, Yao and Richardson, Kyle and Clark, Peter and Sabharwal, Ashish},
year = {2022},
	note = {Publisher: arXiv
Version Number: 2},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
}