Bottleneck Transformers for Visual Recognition. Srinivas, A., Lin, T., Parmar, N., Shlens, J., Abbeel, P., & Vaswani, A. January 2021.
Paper: https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c/file/149acb2f-2246-fb8b-0d32-90c5b7c1fe6e/full_text.pdf.pdf
Website: https://arxiv.org/abs/2101.11605v2

Abstract: We present BoTNet, a conceptually simple yet powerful backbone architecture
that incorporates self-attention for multiple computer vision tasks including
image classification, object detection and instance segmentation. By just
replacing the spatial convolutions with global self-attention in the final
three bottleneck blocks of a ResNet and no other changes, our approach improves
upon the baselines significantly on instance segmentation and object detection
while also reducing the parameters, with minimal overhead in latency. Through
the design of BoTNet, we also point out how ResNet bottleneck blocks with
self-attention can be viewed as Transformer blocks. Without any bells and
whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance
Segmentation benchmark using the Mask R-CNN framework; surpassing the previous
best published single model and single scale results of ResNeSt evaluated on
the COCO validation set. Finally, we present a simple adaptation of the BoTNet
design for image classification, resulting in models that achieve a strong
performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to
1.64x faster in compute time than the popular EfficientNet models on TPU-v3
hardware. We hope our simple and effective approach will serve as a strong
baseline for future research in self-attention models for vision.
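
As a concrete illustration of the block replacement described in the abstract, the following PyTorch sketch shows a ResNet-style bottleneck block in which the 3x3 spatial convolution is swapped for global multi-head self-attention over the feature map. It is a minimal reconstruction of the idea, not the authors' implementation: the channel sizes, head count, and use of torch.nn.MultiheadAttention are illustrative assumptions, and the paper's actual block additionally uses 2D relative position encodings, which are omitted here for brevity.

# Minimal sketch of a BoTNet-style bottleneck block (illustrative, not the official code).
import torch
import torch.nn as nn


class MHSA2d(nn.Module):
    """Global multi-head self-attention applied to an (N, C, H, W) feature map."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)       # (N, H*W, C): each spatial position is a token
        out, _ = self.attn(seq, seq, seq)        # all-to-all attention over the H*W positions
        return out.transpose(1, 2).reshape(n, c, h, w)


class BoTBlock(nn.Module):
    """Bottleneck block: 1x1 conv -> global self-attention (replacing the 3x3 conv) -> 1x1 conv."""

    def __init__(self, in_channels: int, bottleneck_channels: int):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.Conv2d(in_channels, bottleneck_channels, 1, bias=False),
            nn.BatchNorm2d(bottleneck_channels),
            nn.ReLU(inplace=True),
        )
        self.mhsa = MHSA2d(bottleneck_channels)  # replaces the 3x3 spatial convolution
        self.expand = nn.Sequential(
            nn.Conv2d(bottleneck_channels, in_channels, 1, bias=False),
            nn.BatchNorm2d(in_channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.expand(self.mhsa(self.reduce(x)))
        return self.act(out + x)                 # residual connection, as in a standard ResNet block


if __name__ == "__main__":
    block = BoTBlock(in_channels=256, bottleneck_channels=64)
    feats = torch.randn(2, 256, 14, 14)          # e.g. a late-stage feature map, where attention is affordable
    print(block(feats).shape)                    # torch.Size([2, 256, 14, 14])

In the paper this replacement is applied only in the final three bottleneck blocks of the ResNet, where the spatial resolution is small enough for all-to-all attention to add minimal latency.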
@article{srinivas2021botnet,
  title = {Bottleneck Transformers for Visual Recognition},
  author = {Srinivas, Aravind and Lin, Tsung-Yi and Parmar, Niki and Shlens, Jonathon and Abbeel, Pieter and Vaswani, Ashish},
  year = {2021},
  month = {1},
  day = {27},
  url = {https://arxiv.org/abs/2101.11605v2}
}
{"_id":"jk8ThFyjJBDvpbTG6","bibbaseid":"srinivas-lin-parmar-shlens-abbeel-vaswani-bottlenecktransformersforvisualrecognition-2021","author_short":["Srinivas, A.","Lin, T.","Parmar, N.","Shlens, J.","Abbeel, P.","Vaswani, A."],"bibdata":{"title":"Bottleneck Transformers for Visual Recognition","type":"article","year":"2021","websites":"https://arxiv.org/abs/2101.11605v2","month":"1","day":"27","id":"95eb8f27-2c19-34e3-8a5b-9462be413192","created":"2021-09-01T07:34:53.797Z","accessed":"2021-09-01","file_attached":"true","profile_id":"48fc0258-023d-3602-860e-824092d62c56","group_id":"1ff583c0-be37-34fa-9c04-73c69437d354","last_modified":"2021-09-01T07:34:56.861Z","read":false,"starred":false,"authored":false,"confirmed":false,"hidden":false,"folder_uuids":"8d050117-e419-4b32-ad70-c875c74fa2b4","private_publication":false,"abstract":"We present BoTNet, a conceptually simple yet powerful backbone architecture\nthat incorporates self-attention for multiple computer vision tasks including\nimage classification, object detection and instance segmentation. By just\nreplacing the spatial convolutions with global self-attention in the final\nthree bottleneck blocks of a ResNet and no other changes, our approach improves\nupon the baselines significantly on instance segmentation and object detection\nwhile also reducing the parameters, with minimal overhead in latency. Through\nthe design of BoTNet, we also point out how ResNet bottleneck blocks with\nself-attention can be viewed as Transformer blocks. Without any bells and\nwhistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance\nSegmentation benchmark using the Mask R-CNN framework; surpassing the previous\nbest published single model and single scale results of ResNeSt evaluated on\nthe COCO validation set. Finally, we present a simple adaptation of the BoTNet\ndesign for image classification, resulting in models that achieve a strong\nperformance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to\n1.64x faster in compute time than the popular EfficientNet models on TPU-v3\nhardware. We hope our simple and effective approach will serve as a strong\nbaseline for future research in self-attention models for vision","bibtype":"article","author":"Srinivas, Aravind and Lin, Tsung-Yi and Parmar, Niki and Shlens, Jonathon and Abbeel, Pieter and Vaswani, Ashish","bibtex":"@article{\n title = {Bottleneck Transformers for Visual Recognition},\n type = {article},\n year = {2021},\n websites = {https://arxiv.org/abs/2101.11605v2},\n month = {1},\n day = {27},\n id = {95eb8f27-2c19-34e3-8a5b-9462be413192},\n created = {2021-09-01T07:34:53.797Z},\n accessed = {2021-09-01},\n file_attached = {true},\n profile_id = {48fc0258-023d-3602-860e-824092d62c56},\n group_id = {1ff583c0-be37-34fa-9c04-73c69437d354},\n last_modified = {2021-09-01T07:34:56.861Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {false},\n hidden = {false},\n folder_uuids = {8d050117-e419-4b32-ad70-c875c74fa2b4},\n private_publication = {false},\n abstract = {We present BoTNet, a conceptually simple yet powerful backbone architecture\nthat incorporates self-attention for multiple computer vision tasks including\nimage classification, object detection and instance segmentation. 
By just\nreplacing the spatial convolutions with global self-attention in the final\nthree bottleneck blocks of a ResNet and no other changes, our approach improves\nupon the baselines significantly on instance segmentation and object detection\nwhile also reducing the parameters, with minimal overhead in latency. Through\nthe design of BoTNet, we also point out how ResNet bottleneck blocks with\nself-attention can be viewed as Transformer blocks. Without any bells and\nwhistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance\nSegmentation benchmark using the Mask R-CNN framework; surpassing the previous\nbest published single model and single scale results of ResNeSt evaluated on\nthe COCO validation set. Finally, we present a simple adaptation of the BoTNet\ndesign for image classification, resulting in models that achieve a strong\nperformance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to\n1.64x faster in compute time than the popular EfficientNet models on TPU-v3\nhardware. We hope our simple and effective approach will serve as a strong\nbaseline for future research in self-attention models for vision},\n bibtype = {article},\n author = {Srinivas, Aravind and Lin, Tsung-Yi and Parmar, Niki and Shlens, Jonathon and Abbeel, Pieter and Vaswani, Ashish}\n}","author_short":["Srinivas, A.","Lin, T.","Parmar, N.","Shlens, J.","Abbeel, P.","Vaswani, A."],"urls":{"Paper":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c/file/149acb2f-2246-fb8b-0d32-90c5b7c1fe6e/full_text.pdf.pdf","Website":"https://arxiv.org/abs/2101.11605v2"},"biburl":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c","bibbaseid":"srinivas-lin-parmar-shlens-abbeel-vaswani-bottlenecktransformersforvisualrecognition-2021","role":"author","metadata":{"authorlinks":{}}},"bibtype":"article","biburl":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c","dataSources":["igXWS7EdKxb8weRwm","2252seNhipfTmjEBQ"],"keywords":[],"search_terms":["bottleneck","transformers","visual","recognition","srinivas","lin","parmar","shlens","abbeel","vaswani"],"title":"Bottleneck Transformers for Visual Recognition","year":2021}