An Attention Free Transformer. Zhai, S., Talbott, W., Srivastava, N., Huang, C., Goh, H., Zhang, R., & Susskind, J. May 2021. arXiv:2105.14103.

Abstract: We introduce Attention Free Transformer (AFT), an efficient variant of
Transformers that eliminates the need for dot product self attention. In an AFT
layer, the key and value are first combined with a set of learned position
biases, the result of which is multiplied with the query in an element-wise
fashion. This new operation has a memory complexity linear w.r.t. both the
context size and the dimension of features, making it compatible with both large
input and model sizes. We also introduce AFT-local and AFT-conv, two model
variants that take advantage of the idea of locality and spatial weight sharing
while maintaining global connectivity. We conduct extensive experiments on two
autoregressive modeling tasks (CIFAR10 and Enwik8) as well as an image
recognition task (ImageNet-1K classification). We show that AFT demonstrates
competitive performance on all the benchmarks, while providing excellent
efficiency at the same time.
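
To make the layer description concrete, below is a minimal sketch (plain NumPy, not the authors' code) of the basic AFT operation as the abstract describes it: keys and values are combined with a learned pairwise position-bias table, reduced to a per-position weighted average, and the result is gated element-wise by a sigmoid of the query. The exponential weighting, the sigmoid gate, and the name aft_full follow the AFT-full formulation as I understand it; the T-by-T bias table w, the shapes, and the exact normalization are assumptions to verify against the paper.

import numpy as np

def aft_full(Q, K, V, w):
    """Sketch of an AFT-style layer (assumed AFT-full form, not the reference code).

    Q, K, V: arrays of shape (T, d) -- per-position query, key, value.
    w:       array of shape (T, T)  -- assumed learned pairwise position biases.

    For each target position t, the keys plus position biases define
    softmax-like weights over all source positions t'; the weighted average
    of the values is then gated element-wise by sigmoid(Q_t). No T x T x d
    attention tensor is ever materialized, only (T, d) activations plus the
    T x T bias table.
    """
    T, d = Q.shape
    out = np.empty((T, d))
    for t in range(T):
        # logits over source positions, per feature: shape (T, d)
        logits = K + w[t][:, None]
        # numerically stable exponentiation (shift by per-feature max)
        weights = np.exp(logits - logits.max(axis=0, keepdims=True))
        # weighted average of the values, normalized per feature
        avg = (weights * V).sum(axis=0) / weights.sum(axis=0)
        # element-wise gating by the sigmoid-transformed query
        out[t] = (1.0 / (1.0 + np.exp(-Q[t]))) * avg
    return out

# toy usage: T=8 positions, d=16 features (hypothetical sizes)
rng = np.random.default_rng(0)
T, d = 8, 16
Y = aft_full(rng.normal(size=(T, d)), rng.normal(size=(T, d)),
             rng.normal(size=(T, d)), rng.normal(size=(T, T)))
print(Y.shape)  # (8, 16)

As written, the loop costs O(T^2 d) time but only O(T d) working memory per step, which is the efficiency trade-off the abstract highlights relative to standard dot-product attention.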
@article{zhai2021attention,
title = {An Attention Free Transformer},
type = {article},
year = {2021},
websites = {https://arxiv.org/abs/2105.14103v1},
month = {5},
day = {28},
abstract = {We introduce Attention Free Transformer (AFT), an efficient variant of
Transformers that eliminates the need for dot product self attention. In an AFT
layer, the key and value are first combined with a set of learned position
biases, the result of which is multiplied with the query in an element-wise
fashion. This new operation has a memory complexity linear w.r.t. both the
context size and the dimension of features, making it compatible to both large
input and model sizes. We also introduce AFT-local and AFT-conv, two model
variants that take advantage of the idea of locality and spatial weight sharing
while maintaining global connectivity. We conduct extensive experiments on two
autoregressive modeling tasks (CIFAR10 and Enwik8) as well as an image
recognition task (ImageNet-1K classification). We show that AFT demonstrates
competitive performance on all the benchmarks, while providing excellent
efficiency at the same time.},
bibtype = {article},
author = {Zhai, Shuangfei and Talbott, Walter and Srivastava, Nitish and Huang, Chen and Goh, Hanlin and Zhang, Ruixiang and Susskind, Josh}
}