A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers. Potok, T. E., Schuman, C. D., Young, S. R., Patton, R. M., Spedalieri, F., Liu, J., Yao, K., Rose, G., & Chakma, G. In Proceedings of the Workshop on Machine Learning in High Performance Computing Environments (MLHPC '16), pages 47–55, Piscataway, NJ, USA, 2016. IEEE Press.
Current deep learning models use highly optimized convolutional neural networks (CNNs) trained on large graphics processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers without intra-layer connections. More complex topologies have been proposed but are intractable to train on current systems. Building the topology of a deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models on three different computing architectures that address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to the input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable on a von Neumann architecture. We show that a quantum computer can find high-quality values for intra-layer connections and weights in time that remains tractable as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware. This is a new capability, not feasible on current von Neumann architectures, that potentially enables the solution of very complicated problems unsolvable with current computing technologies.
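The intra-layer connections the abstract highlights are easiest to see on a toy Boltzmann machine whose hidden layer carries lateral couplings. The sketch below is a minimal NumPy illustration, not the authors' implementation: it defines the energy function of such a model and exhaustively finds the lowest-energy hidden state for a clamped visible vector, which stands in for the role the paper assigns to the quantum annealer (sampling low-energy configurations). All sizes, names, and the random weights here are illustrative assumptions.

# Minimal sketch (NOT the paper's code): a Boltzmann-machine energy with
# intra-layer hidden couplings, the structure the abstract says a quantum
# annealer can handle where classical training becomes intractable.
import itertools
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 8  # toy sizes; MNIST-scale models are far larger
W = rng.normal(scale=0.5, size=(n_visible, n_hidden))  # visible-hidden weights
J = np.triu(rng.normal(scale=0.5, size=(n_hidden, n_hidden)), k=1)  # intra-layer couplings
b = rng.normal(scale=0.1, size=n_visible)  # visible biases
c = rng.normal(scale=0.1, size=n_hidden)   # hidden biases

def energy(v, h):
    """E(v, h) = -(v.W.h + h.J.h + b.v + c.h); the h.J.h term is the
    intra-layer connectivity absent from standard layered networks."""
    return -(v @ W @ h + h @ J @ h + b @ v + c @ h)

# Stand-in for the annealer: brute-force the lowest-energy hidden state
# for a clamped visible vector. This 2**n_hidden sweep is feasible only
# at toy scale, which is exactly the bottleneck quantum sampling targets.
v = rng.integers(0, 2, size=n_visible)
best_h, best_E = None, np.inf
for bits in itertools.product([0, 1], repeat=n_hidden):
    h = np.array(bits)
    E = energy(v, h)
    if E < best_E:
        best_h, best_E = h, E

print("clamped visible:", v)
print("lowest-energy hidden state:", best_h, "energy:", round(best_E, 3))

At the network sizes the paper targets, this exhaustive search is hopeless, which is why low-energy configurations are drawn from quantum hardware instead; the energy function itself is the only part of the sketch grounded directly in the abstract.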
@inproceedings{potok_study_2016,
	address = {Piscataway, NJ, USA},
	series = {{MLHPC} '16},
	title = {A {Study} of {Complex} {Deep} {Learning} {Networks} on {High} {Performance}, {Neuromorphic}, and {Quantum} {Computers}},
	isbn = {978-1-5090-3882-4},
	url = {https://doi.org/10.1109/MLHPC.2016.9},
	doi = {10.1109/MLHPC.2016.9},
	abstract = {Current deep learning models use highly optimized convolutional neural networks (CNNs) trained on large graphics processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers without intra-layer connections. More complex topologies have been proposed but are intractable to train on current systems. Building the topology of a deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models on three different computing architectures that address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to the input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable on a von Neumann architecture. We show that a quantum computer can find high-quality values for intra-layer connections and weights in time that remains tractable as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware. This is a new capability, not feasible on current von Neumann architectures, that potentially enables the solution of very complicated problems unsolvable with current computing technologies.},
	urldate = {2017-01-13},
	booktitle = {Proceedings of the {Workshop} on {Machine} {Learning} in {High} {Performance} {Computing} {Environments}},
	publisher = {IEEE Press},
	author = {Potok, Thomas E. and Schuman, Catherine D. and Young, Steven R. and Patton, Robert M. and Spedalieri, Federico and Liu, Jeremy and Yao, Ke-Thia and Rose, Garrett and Chakma, Gangotree},
	year = {2016},
	pages = {47--55},
}
