Scaling and evaluating sparse autoencoders. Gao, L., Dupré la Tour, T., Tillman, H., Goh, G., Troll, R., Radford, A., Sutskever, I., Leike, J., & Wu, J. June, 2024. arXiv:2406.04093 [cs] version: 1 TLDR: This work proposes using k-sparse autoencoders to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier, and introduces several new metrics for evaluating feature quality based on the recovery of hypothesized features, the explainability of activation patterns, and the sparsity of downstream effects; these metrics generally improve with autoencoder size.
Sparse autoencoders provide a promising unsupervised approach for extracting interpretable features from a language model by reconstructing activations from a sparse bottleneck layer. Since language models learn many concepts, autoencoders need to be very large to recover all relevant features. However, studying the properties of autoencoder scaling is difficult due to the need to balance reconstruction and sparsity objectives and the presence of dead latents. We propose using k-sparse autoencoders [Makhzani and Frey, 2013] to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier. Additionally, we find modifications that result in few dead latents, even at the largest scales we tried. Using these techniques, we find clean scaling laws with respect to autoencoder size and sparsity. We also introduce several new metrics for evaluating feature quality based on the recovery of hypothesized features, the explainability of activation patterns, and the sparsity of downstream effects. These metrics all generally improve with autoencoder size. To demonstrate the scalability of our approach, we train a 16 million latent autoencoder on GPT-4 activations for 40 billion tokens. We release training code and autoencoders for open-source models, as well as a visualizer.
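The core idea in the abstract is replacing an L1 sparsity penalty with a k-sparse (TopK) activation, so that sparsity is set directly by k rather than tuned via a regularization coefficient. The following is a minimal illustrative sketch in PyTorch under that reading of the abstract; the class name, dimensions, and hyperparameters are assumptions for exposition, not the authors' released implementation.

# Minimal sketch of a k-sparse (TopK) autoencoder: keep only the k largest
# latent pre-activations, zero the rest, and reconstruct the input activations.
# Illustrative only; details (biases, normalization, dead-latent fixes) differ
# in the paper's actual training setup.
import torch
import torch.nn as nn

class TopKSparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, n_latents: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Encode, then keep only the top-k latents per example.
        pre = self.encoder(x)
        topk = torch.topk(pre, self.k, dim=-1)
        latents = torch.zeros_like(pre).scatter_(-1, topk.indices, topk.values)
        # Reconstruct the activation vector; training minimizes MSE(x_hat, x).
        return self.decoder(latents)

# Example usage on random stand-ins for language-model activations.
sae = TopKSparseAutoencoder(d_model=768, n_latents=16384, k=32)
x = torch.randn(4, 768)
x_hat = sae(x)
loss = nn.functional.mse_loss(x_hat, x)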
@misc{gao_scaling_2024,
	title = {Scaling and evaluating sparse autoencoders},
	url = {http://arxiv.org/abs/2406.04093},
	doi = {10.48550/arXiv.2406.04093},
	abstract = {Sparse autoencoders provide a promising unsupervised approach for extracting interpretable features from a language model by reconstructing activations from a sparse bottleneck layer. Since language models learn many concepts, autoencoders need to be very large to recover all relevant features. However, studying the properties of autoencoder scaling is difficult due to the need to balance reconstruction and sparsity objectives and the presence of dead latents. We propose using k-sparse autoencoders [Makhzani and Frey, 2013] to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier. Additionally, we find modifications that result in few dead latents, even at the largest scales we tried. Using these techniques, we find clean scaling laws with respect to autoencoder size and sparsity. We also introduce several new metrics for evaluating feature quality based on the recovery of hypothesized features, the explainability of activation patterns, and the sparsity of downstream effects. These metrics all generally improve with autoencoder size. To demonstrate the scalability of our approach, we train a 16 million latent autoencoder on GPT-4 activations for 40 billion tokens. We release training code and autoencoders for open-source models, as well as a visualizer.},
	urldate = {2025-01-15},
	publisher = {arXiv},
	author = {Gao, Leo and Dupré la Tour, Tom and Tillman, Henk and Goh, Gabriel and Troll, Rajan and Radford, Alec and Sutskever, Ilya and Leike, Jan and Wu, Jeffrey},
	month = jun,
	year = {2024},
	note = {arXiv:2406.04093 [cs]
version: 1
TLDR: This work proposes using k-sparse autoencoders to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier, and introduces several new metrics for evaluating feature quality based on the recovery of hypothesized features, the explainability of activation patterns, and the sparsity of downstream effects; these metrics generally improve with autoencoder size.},
	keywords = {\#ICLR{\textgreater}25, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, ❤️},
}
