Improving Summarization with Human Edits. Yao, Z., Schloss, B. J., & Selvaraj, S. P. December 2023. EMNLP 2023
Recent work has shown the promise of learning with human feedback paradigms to produce human-determined high-quality text. Existing works use human feedback to train large language models (LLMs) in general domain abstractive summarization and have obtained summary quality exceeding traditional likelihood training. In this paper, we focus on a less explored form of human feedback – Human Edits. We propose Sequence Alignment (un)Likelihood Training (SALT), a novel technique to use both the human-edited and model-generated data together in the training loop. In addition, we demonstrate simulating Human Edits with ground truth summaries coming from existing training data – Imitation edits, along with the model-generated summaries obtained after the training, to reduce the need for expensive human-edit data. In our experiments, we extend human feedback exploration from general domain summarization to medical domain summarization. Our results demonstrate the effectiveness of SALT to improve the summary quality with Human and Imitation Edits.
@misc{yao_improving_2023,
	title = {Improving {Summarization} with {Human} {Edits}},
	url = {http://arxiv.org/abs/2310.05857},
	urldate = {2023-10-10},
	publisher = {arXiv},
	author = {Yao, Zonghai and Schloss, Benjamin J. and Selvaraj, Sai P.},
	month = dec,
	year = {2023},
	note = {EMNLP 2023},
	keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning},
}
