HACK: Homomorphic Acceleration via Compression of the Key-Value Cache for Disaggregated LLM Inference. Zhang, Z., Shen, H., Vargaftik, S., Basat, R. B., Mitzenmacher, M., & Yu, M. Coimbra, Portugal, September 2025. https://github.com/pcl-projects/HACK
Disaggregated Large Language Model (LLM) inference decouples the compute-intensive prefill stage from the memory-intensive decode stage, allowing low-end, compute-focused GPUs for prefill and high-end, memory-rich GPUs for decode, which reduces cost while maintaining high throughput. However, transmitting Key-Value (KV) data between the two stages can be a bottleneck, especially for long prompts. Additionally, the computational overhead in the two stages is key for optimizing Job Completion Time (JCT), and KV data size can become prohibitive for long prompts and sequences. Existing KV quantization methods can alleviate transmission and memory bottlenecks, but they introduce significant dequantization overhead, exacerbating the computation time.
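
To make the overhead the abstract describes concrete, below is a minimal sketch, in Python/NumPy, of a conventional KV-cache quantization round trip: the prefill side quantizes K/V to int8 before transmission, and the decode side dequantizes back to floating point before attention. This is not HACK's method; it illustrates the baseline dequantization cost that, per the title, HACK aims to avoid by operating on the compressed KV data directly. All shapes, names, and functions here are illustrative assumptions.

import numpy as np

def quantize_kv(kv: np.ndarray):
    """Symmetric per-channel int8 quantization of a [num_tokens, head_dim] KV slice."""
    scale = np.abs(kv).max(axis=0, keepdims=True) / 127.0   # one scale per channel
    scale = np.where(scale == 0, 1.0, scale)                 # guard against all-zero channels
    q = np.clip(np.round(kv / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize_kv(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Dequantize back to float before attention; this extra pass is the
    computational overhead that quantization-based KV transfer adds to decode."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    kv = rng.standard_normal((4096, 128)).astype(np.float32)  # e.g., one head's K cache
    q, scale = quantize_kv(kv)
    kv_hat = dequantize_kv(q, scale)
    # Transfer volume shrinks roughly 4x (int8 vs. fp32) at the cost of a small
    # approximation error plus a dequantization pass on the decode GPU.
    print("bytes fp32:", kv.nbytes, "bytes int8:", q.nbytes + scale.nbytes)
    print("max abs error:", np.abs(kv - kv_hat).max())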
@inproceedings{zhang_hack_2025,
	address = {Coimbra, Portugal},
	title = {{HACK}: {Homomorphic} {Acceleration} via {Compression} of the {Key}-{Value} {Cache} for {Disaggregated} {LLM} {Inference}},
	url = {https://doi.org/10.1145/3718958.3750481},
	doi = {10.1145/3718958.3750481},
	abstract = {Disaggregated Large Language Model (LLM) inference decouples the compute-intensive prefill stage from the memory-intensive decode stage, allowing low-end, compute-focused GPUs for prefill and high-end, memory-rich GPUs for decode, which reduces cost while maintaining high throughput. However, transmitting Key-Value (KV) data between the two stages can be a bottleneck, especially for long prompts. Additionally, the computational overhead in the two stages is key for optimizing Job Completion Time (JCT), and KV data size can become prohibitive for long prompts and sequences. Existing KV quantization methods can alleviate transmission and memory bottlenecks, but they introduce significant dequantization overhead, exacerbating the computation time.},
	language = {en},
	author = {Zhang, Zeyu and Shen, Haiying and Vargaftik, Shay and Basat, Ran Ben and Mitzenmacher, Michael and Yu, Minlan},
	month = sep,
	year = {2025},
	note = {https://github.com/pcl-projects/HACK},
}
