Liquid Silicon: A Nonvolatile Fully Programmable Processing-In-Memory Processor with Monolithically Integrated ReRAM for Big Data/Machine Learning Applications. Zha, Y., Nowak, E., & Li, J. In 2019 IEEE Symposium on VLSI Circuits, June 2019.
A nonvolatile fully programmable processing-in-memory (PIM) processor named Liquid Silicon (L-Si) is demonstrated, which combines the superior programmability of general-purpose computing devices (e.g., FPGAs) with the high power efficiency of domain-specific accelerators. Besides general computing applications, L-Si is particularly well suited for AI/machine learning and big data applications, which not only pose high computational/memory demands but also evolve rapidly. L-Si is fabricated by monolithically integrating HfO2 resistive RAM on top of commercial 130 nm Si CMOS. Our measurements confirm that the fabricated chip operates reliably at a low voltage of 650 mV. It achieves 60.9 TOPS/W in performing neural network inference and 480 GOPS/W in performing content-based similarity search (a key big data application) at a nominal supply voltage of 1.2 V, showing >3× and ∼100× power efficiency improvements over state-of-the-art domain-specific CMOS-/RRAM-based accelerators. In addition, it outperforms the latest nonvolatile FPGA in energy efficiency by ∼3× in general compute-intensive applications.
@inproceedings{zha2019vlsic,
 author = {Zha, Yue and Nowak, Etienne and Li, Jing},
 title = {{Liquid Silicon}: A Nonvolatile Fully Programmable Processing-In-Memory Processor with Monolithically Integrated {ReRAM} for {Big Data/Machine Learning} Applications},
 booktitle = {2019 IEEE Symposium on VLSI Circuits},
 year = {2019},
 month = {Jun},
 date={2019-06-09},
 doi={10.23919/VLSIC.2019.8778064},
 abstract={A nonvolatile fully programmable processing-in-memory (PIM) processor named Liquid Silicon (L-Si) is demonstrated, which combines the superior programmability of general-purpose computing devices (e.g., FPGAs) with the high power efficiency of domain-specific accelerators. Besides general computing applications, L-Si is particularly well suited for AI/machine learning and big data applications, which not only pose high computational/memory demands but also evolve rapidly. L-Si is fabricated by monolithically integrating HfO2 resistive RAM on top of commercial 130 nm Si CMOS. Our measurements confirm that the fabricated chip operates reliably at a low voltage of 650 mV. It achieves 60.9 TOPS/W in performing neural network inference and 480 GOPS/W in performing content-based similarity search (a key big data application) at a nominal supply voltage of 1.2 V, showing >3× and ∼100× power efficiency improvements over state-of-the-art domain-specific CMOS-/RRAM-based accelerators. In addition, it outperforms the latest nonvolatile FPGA in energy efficiency by ∼3× in general compute-intensive applications.},
 keywords = {conference}
}
