ORPO: Monolithic Preference Optimization without Reference Model. Hong, J., Lee, N., & Thorne, J. March, 2024. arXiv:2403.07691 [cs]
Paper: http://arxiv.org/abs/2403.07691
Abstract: While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we study the crucial role of SFT within the context of preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT. Building on this foundation, we introduce a straightforward and innovative reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the necessity for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across the diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 12.20% on $\text{AlpacaEval}_{2.0}$ (Figure 1), 66.19% on IFEval (instruction-level loose, Table 6), and 7.32 in MT-Bench (Figure 12). We release code and model checkpoints for Mistral-ORPO-$\alpha$ (7B) and Mistral-ORPO-$\beta$ (7B).
@misc{hong_orpo_2024,
title = {{ORPO}: {Monolithic} {Preference} {Optimization} without {Reference} {Model}},
shorttitle = {{ORPO}},
url = {http://arxiv.org/abs/2403.07691},
abstract = {While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we study the crucial role of SFT within the context of preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT. Building on this foundation, we introduce a straightforward and innovative reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the necessity for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across the diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 12.20\% on $\text{AlpacaEval}_{2.0}$ (Figure 1), 66.19\% on IFEval (instruction-level loose, Table 6), and 7.32 in MT-Bench (Figure 12). We release code and model checkpoints for Mistral-ORPO-$\alpha$ (7B) and Mistral-ORPO-$\beta$ (7B).},
urldate = {2024-03-16},
publisher = {arXiv},
author = {Hong, Jiwoo and Lee, Noah and Thorne, James},
month = mar,
year = {2024},
note = {arXiv:2403.07691 [cs]},
}
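
As a rough illustration of the objective described in the abstract, below is a minimal PyTorch-style sketch of an ORPO-like loss: the usual SFT negative log-likelihood on the chosen response plus a -log sigmoid(log odds ratio) term that mildly penalizes the disfavored (rejected) response. The function names, tensor layout, and the weighting coefficient `lam` are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def mean_completion_logprob(logits, labels, completion_mask):
    """Mean per-token log-probability of the completion tokens.

    logits: [batch, seq, vocab]; labels: [batch, seq] token ids;
    completion_mask: [batch, seq], 1.0 on completion tokens, 0.0 on prompt/padding.
    """
    logps = torch.log_softmax(logits, dim=-1)
    token_logps = torch.gather(logps, 2, labels.unsqueeze(-1)).squeeze(-1)
    return (token_logps * completion_mask).sum(-1) / completion_mask.sum(-1)

def orpo_like_loss(chosen_logits, chosen_labels, chosen_mask,
                   rejected_logits, rejected_labels, rejected_mask,
                   lam=0.1):
    # Length-normalised log-likelihoods of the chosen / rejected completions.
    logp_w = mean_completion_logprob(chosen_logits, chosen_labels, chosen_mask)
    logp_l = mean_completion_logprob(rejected_logits, rejected_labels, rejected_mask)

    # log odds(y|x) = log p - log(1 - p); log1p(-exp(.)) keeps this numerically stable.
    log_odds_w = logp_w - torch.log1p(-torch.exp(logp_w))
    log_odds_l = logp_l - torch.log1p(-torch.exp(logp_l))

    # Odds-ratio penalty: -log sigmoid(log OR) is small when the chosen response is
    # already favoured, i.e. only a minor penalty on the disfavoured style.
    or_loss = -F.logsigmoid(log_odds_w - log_odds_l).mean()

    # Conventional SFT term: negative log-likelihood of the chosen completion.
    sft_loss = -logp_w.mean()

    # Single monolithic objective: no separate alignment phase, no reference model.
    return sft_loss + lam * or_loss

Because the odds of a response are taken as p/(1-p) with p the model's own (length-normalised) likelihood, no frozen reference policy is needed, which is the "reference model-free" property the title refers to.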