EMAT: an efficient multi-task architecture for transfer learning using ReRAM. <a href="https://homes.luddy.indiana.edu/fc7/" target="_blank">Chen, Fan</a> & Li, H. In Proceedings of the International Conference on Computer-Aided Design (ICCAD), pages 33, 2018. ACM.
Transfer learning has recently demonstrated great success in general supervised learning by mitigating expensive training efforts. However, existing neural network accelerators have proven inefficient at executing transfer learning because they fail to accommodate the layer-wise heterogeneity in computation and memory requirements. In this work, we propose EMAT, an efficient multi-task architecture for transfer learning built on resistive memory (ReRAM) technology. EMAT utilizes the energy efficiency of ReRAM arrays for matrix-vector multiplication and realizes a hierarchical reconfigurable design with heterogeneous computation components to accommodate the data patterns in transfer learning. Compared to a GPU platform, EMAT achieves an average 120x speedup and 87x energy saving. EMAT also obtains a 2.5x speedup over the state-of-the-art CMOS accelerator.
@inproceedings{ICCAD18,
  author    = {Fan Chen and
               Hai Li},
  title     = {{EMAT:} an efficient multi-task architecture for transfer learning
               using ReRAM},
  booktitle = {Proceedings of the International Conference on Computer-Aided Design (ICCAD)},
  pages     = {33},
  publisher = {{ACM}},
  year      = {2018},
  url       = {https://doi.org/10.1145/3240765.3240805},
  doi       = {10.1145/3240765.3240805},
  abstract  = {Transfer learning has recently demonstrated great success in general supervised learning by mitigating expensive training efforts. However, existing neural network accelerators have proven inefficient at executing transfer learning because they fail to accommodate the layer-wise heterogeneity in computation and memory requirements. In this work, we propose EMAT, an efficient multi-task architecture for transfer learning built on resistive memory (ReRAM) technology. EMAT utilizes the energy efficiency of ReRAM arrays for matrix-vector multiplication and realizes a hierarchical reconfigurable design with heterogeneous computation components to accommodate the data patterns in transfer learning. Compared to a GPU platform, EMAT achieves an average 120x speedup and 87x energy saving. EMAT also obtains a 2.5x speedup over the state-of-the-art CMOS accelerator.},
}
