PropMEND: Hypernetworks for Knowledge Propagation in LLMs. Liu, Z. L., Durrett, G., & Choi, E. June, 2025. arXiv:2506.08920 [cs]
Knowledge editing techniques for large language models (LLMs) can inject knowledge that is later reproducible verbatim, but they fall short on propagating that knowledge: models cannot answer questions that require reasoning with the injected knowledge. We present a hypernetwork-based approach for knowledge propagation, named PropMEND, where we meta-learn how to modify gradients of a language modeling loss to encourage injected information to propagate. Our approach extends the meta-objective of MEND [29] so that gradient updates on knowledge are transformed to enable answering multi-hop questions involving that knowledge. We show improved performance on the RippleEdit dataset, showing almost 2× accuracy on challenging multi-hop questions whose answers are not explicitly stated in the injected fact. We further introduce a new dataset, Controlled RippleEdit, to evaluate the generalization of our hypernetwork, testing knowledge propagation along relations and entities unseen during hypernetwork training. PropMEND still outperforms existing approaches in unseen entity-relation pairs, yet the performance gap decreases substantially, suggesting future work in propagating knowledge to a wide range of relations.
@misc{liu_propmend_2025,
title = {{PropMEND}: {Hypernetworks} for {Knowledge} {Propagation} in {LLMs}},
shorttitle = {{PropMEND}},
url = {http://arxiv.org/abs/2506.08920},
doi = {10.48550/arXiv.2506.08920},
abstract = {Knowledge editing techniques for large language models (LLMs) can inject knowledge that is later reproducible verbatim, but they fall short on propagating that knowledge: models cannot answer questions that require reasoning with the injected knowledge. We present a hypernetwork-based approach for knowledge propagation, named PropMEND, where we meta-learn how to modify gradients of a language modeling loss to encourage injected information to propagate. Our approach extends the meta-objective of MEND [29] so that gradient updates on knowledge are transformed to enable answering multi-hop questions involving that knowledge. We show improved performance on the RippleEdit dataset, showing almost 2× accuracy on challenging multi-hop questions whose answers are not explicitly stated in the injected fact. We further introduce a new dataset, Controlled RippleEdit, to evaluate the generalization of our hypernetwork, testing knowledge propagation along relations and entities unseen during hypernetwork training. PropMEND still outperforms existing approaches in unseen entity-relation pairs, yet the performance gap decreases substantially, suggesting future work in propagating knowledge to a wide range of relations.},
language = {en},
urldate = {2025-08-28},
publisher = {arXiv},
author = {Liu, Zeyu Leo and Durrett, Greg and Choi, Eunsol},
month = jun,
year = {2025},
note = {arXiv:2506.08920 [cs]},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning, Explorable},
}