A hybrid quantum-classical neural network for learning transferable visual representation. Wang, R., Richerme, P., & <a href="https://homes.luddy.indiana.edu/fc7/" target="_blank">Chen, F.</a> Quantum Science and Technology (QST), 8(4):045021, September 2023. IOP Publishing.
State-of-the-art quantum machine learning (QML) algorithms fail to offer practical advantages over their notoriously powerful classical counterparts, due to the limited learning capabilities of QML algorithms, the constrained computational resources available on today’s noisy intermediate-scale quantum (NISQ) devices, and the empirically designed circuit ansatz for QML models. In this work, we address these challenges by proposing a hybrid quantum–classical neural network (CaNN), which we call QCLIP, for Quantum Contrastive Language-Image Pre-Training. Rather than training a supervised QML model to predict human annotations, QCLIP focuses on more practical transferable visual representation learning, where the developed model can be generalized to work on unseen downstream datasets. QCLIP is implemented by using CaNNs to generate low-dimensional data feature embeddings followed by quantum neural networks to adapt and generalize the learned representation in the quantum Hilbert space. Experimental results show that the hybrid QCLIP model can be efficiently trained for representation learning. We evaluate the representation transfer capability of QCLIP against the classical Contrastive Language-Image Pre-Training model on various datasets. Simulation results and real-device results on NISQ IBM_Auckland quantum computer both show that the proposed QCLIP model outperforms the classical CLIP model in all test cases. As the field of QML on NISQ devices is continually evolving, we anticipate that this work will serve as a valuable foundation for future research and advancements in this promising area.
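The abstract describes the hybrid pipeline at a high level: a classical network compresses inputs into a low-dimensional embedding, which a parameterized quantum circuit then angle-encodes, transforms with trainable rotations and entangling gates, and reads out as expectation values. The sketch below is not the authors' code; it is a minimal, self-contained toy (all function names are illustrative) that simulates this encode-rotate-entangle-measure pattern for a 2-dimensional embedding on two qubits, using only the Python standard library.

```python
import math

def ry(theta):
    # 2x2 single-qubit RY rotation matrix
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply_single(state, gate, qubit):
    # apply a 2x2 gate to one qubit of a little-endian statevector
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        j = i ^ (1 << qubit)                  # partner index with the bit flipped
        if (i >> qubit) & 1 == 0:
            new[i] += gate[0][0] * amp
            new[j] += gate[1][0] * amp
        else:
            new[j] += gate[0][1] * amp
            new[i] += gate[1][1] * amp
    return new

def cnot(state, control, target):
    # swap amplitude pairs where the control bit is 1
    new = list(state)
    for i in range(len(state)):
        if (i >> control) & 1:
            new[i ^ (1 << target)] = state[i]
    return new

def expval_z(state, qubit):
    # <Z> on one qubit: +1 weight for bit 0, -1 for bit 1
    return sum(((-1) ** ((i >> qubit) & 1)) * abs(a) ** 2
               for i, a in enumerate(state))

def quantum_layer(embedding, weights):
    """Toy QNN head: angle-encode a 2-d classical embedding,
    apply a trainable RY layer and a CNOT, measure <Z> per qubit."""
    state = [1 + 0j, 0j, 0j, 0j]              # |00>
    for q, x in enumerate(embedding):          # data encoding
        state = apply_single(state, ry(x), q)
    for q, w in enumerate(weights):            # trainable rotations
        state = apply_single(state, ry(w), q)
    state = cnot(state, 0, 1)                  # entangle the qubits
    return [expval_z(state, q) for q in range(2)]

# zero embedding, zero weights -> state stays |00>, both <Z> = +1
print(quantum_layer([0.0, 0.0], [0.0, 0.0]))
```

In a full hybrid model of this kind, the returned expectation values would serve as the quantum-adapted feature vector, and the rotation angles in `weights` would be optimized jointly with the classical encoder.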
@article{2023QST:QCLIP,
doi = {10.1088/2058-9565/acf1c7},
url = {https://dx.doi.org/10.1088/2058-9565/acf1c7},
year = {2023},
month = {sep},
journal = {Quantum Science and Technology (QST)},
publisher = {IOP Publishing},
volume = {8},
number = {4},
pages = {045021},
author = {Ruhan Wang and Philip Richerme and Fan Chen},
title = {A hybrid quantum-classical neural network for learning transferable visual representation},
abstract = {State-of-the-art quantum machine learning (QML) algorithms fail to offer practical advantages over their notoriously powerful classical counterparts, due to the limited learning capabilities of QML algorithms, the constrained computational resources available on today’s noisy intermediate-scale quantum (NISQ) devices, and the empirically designed circuit ansatz for QML models. In this work, we address these challenges by proposing a hybrid quantum–classical neural network (CaNN), which we call QCLIP, for Quantum Contrastive Language-Image Pre-Training. Rather than training a supervised QML model to predict human annotations, QCLIP focuses on more practical transferable visual representation learning, where the developed model can be generalized to work on unseen downstream datasets. QCLIP is implemented by using CaNNs to generate low-dimensional data feature embeddings followed by quantum neural networks to adapt and generalize the learned representation in the quantum Hilbert space. Experimental results show that the hybrid QCLIP model can be efficiently trained for representation learning. We evaluate the representation transfer capability of QCLIP against the classical Contrastive Language-Image Pre-Training model on various datasets. Simulation results and real-device results on NISQ IBM_Auckland quantum computer both show that the proposed QCLIP model outperforms the classical CLIP model in all test cases. As the field of QML on NISQ devices is continually evolving, we anticipate that this work will serve as a valuable foundation for future research and advancements in this promising area.}
}