Chen, Y., Pan, Y., & Dong, D. Quantum Language Model with Entanglement Embedding for Question Answering. IEEE Transactions on Cybernetics, 2021. arXiv:2008.09943 [quant-ph].
@article{chen_quantum_2021,
	title = {Quantum {Language} {Model} with {Entanglement} {Embedding} for {Question} {Answering}},
	issn = {2168-2267, 2168-2275},
	url = {http://arxiv.org/abs/2008.09943},
	doi = {10.1109/TCYB.2021.3131252},
	abstract = {Quantum Language Models (QLMs), in which words are modelled as a quantum superposition of sememes, have demonstrated a high level of model transparency and good post-hoc interpretability. Nevertheless, in the current literature, word sequences are essentially modelled as a classical mixture of word states, which cannot fully exploit the potential of a quantum probabilistic description. A quantum-inspired neural network module has yet to be developed to explicitly capture the nonclassical correlations within word sequences. We propose a neural network model with a novel Entanglement Embedding (EE) module, whose function is to transform a word sequence into an entangled pure-state representation. Strong quantum entanglement, which is the central concept of quantum information and an indication of parallelized correlations among the words, is observed within the word sequences. The proposed QLM with EE (QLM-EE) is implemented on classical computing devices with a quantum-inspired neural network structure, and numerical experiments show that QLM-EE achieves superior performance compared with classical deep neural network models and other QLMs on Question Answering (QA) datasets. In addition, the post-hoc interpretability of the model can be improved by quantifying the degree of entanglement among the word states.},
	language = {en},
	urldate = {2022-09-26},
	journal = {IEEE Transactions on Cybernetics},
	author = {Chen, Yiwei and Pan, Yu and Dong, Daoyi},
	year = {2021},
	note = {1 citation (Crossref) [2022-09-26]
arXiv:2008.09943 [quant-ph]},
	keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Quantum Physics},
	pages = {1--12},
}