PropMEND: Hypernetworks for Knowledge Propagation in LLMs. Liu, Z. L., Durrett, G., & Choi, E. June 2025. arXiv:2506.08920 [cs]
@misc{liu_propmend_2025,
	title = {{PropMEND}: {Hypernetworks} for {Knowledge} {Propagation} in {LLMs}},
	shorttitle = {{PropMEND}},
	url = {http://arxiv.org/abs/2506.08920},
	doi = {10.48550/arXiv.2506.08920},
	abstract = {Knowledge editing techniques for large language models (LLMs) can inject knowledge that is later reproducible verbatim, but they fall short on propagating that knowledge: models cannot answer questions that require reasoning with the injected knowledge. We present a hypernetwork-based approach for knowledge propagation, named PropMEND, where we meta-learn how to modify gradients of a language modeling loss to encourage injected information to propagate. Our approach extends the meta-objective of MEND [29] so that gradient updates on knowledge are transformed to enable answering multi-hop questions involving that knowledge. We show improved performance on the RippleEdit dataset, showing almost 2× accuracy on challenging multi-hop questions whose answers are not explicitly stated in the injected fact. We further introduce a new dataset, Controlled RippleEdit, to evaluate the generalization of our hypernetwork, testing knowledge propagation along relations and entities unseen during hypernetwork training. PropMEND still outperforms existing approaches in unseen entity-relation pairs, yet the performance gap decreases substantially, suggesting future work in propagating knowledge to a wide range of relations.},
	language = {en},
	urldate = {2025-08-28},
	publisher = {arXiv},
	author = {Liu, Zeyu Leo and Durrett, Greg and Choi, Eunsol},
	month = jun,
	year = {2025},
	note = {arXiv:2506.08920 [cs]},
	keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning, Explorable},
}
