DACT-BERT: Differentiable Adaptive Computation Time for an Efficient BERT Inference. Eyzaguirre, C., Del Rio, F., Araujo, V., & Soto, A. ArXiv, abs/2109.11745, 2021. Paper: https://arxiv.org/pdf/2109.11745.pdf

Abstract: Large-scale pre-trained language models have shown remarkable results in diverse NLP applications. Unfortunately, these performance gains have been accompanied by a significant increase in computation time and model size, stressing the need to develop new or complementary strategies to increase the efficiency of these models. In this paper we propose DACT-BERT, a differentiable adaptive computation time strategy for BERT-like models. DACT-BERT adds an adaptive computational mechanism to BERT’s regular processing pipeline, which controls the number of Transformer blocks that need to be executed at inference time. By doing this, the model learns to combine the most appropriate intermediate representations for the task at hand. Our experiments demonstrate that our approach, when compared to the baselines, excels on a reduced computational regime and is competitive in other less restrictive ones.
@article{EyzaguirreEtAl:DACTBERT:2021,
title={DACT-BERT: Differentiable Adaptive Computation Time for an Efficient BERT Inference},
author={C. Eyzaguirre and F. Del Rio and V. Araujo and A. Soto},
journal={ArXiv},
year={2021},
volume={abs/2109.11745},
abstract = {Large-scale pre-trained language models have shown remarkable results in diverse NLP applications. Unfortunately, these performance gains have been accompanied by a significant increase in computation time and model size, stressing the need to develop new or complementary strategies to increase the efficiency of these models. In this paper we propose DACT-BERT, a differentiable adaptive computation time strategy for BERT-like models. DACT-BERT adds an adaptive computational mechanism to BERT’s regular processing pipeline, which controls the number of Transformer blocks that need to be executed at inference time. By doing this, the model learns to combine the most appropriate intermediate representations for the task at hand. Our experiments demonstrate that our approach, when compared to the baselines, excels on a reduced computational regime and is competitive in other less restrictive ones.},
url={https://arxiv.org/pdf/2109.11745.pdf}
}
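
The abstract describes the mechanism only at a high level: each Transformer block is paired with an intermediate prediction and a halting signal, and these predictions are combined so that execution can stop early at inference time. The sketch below is a minimal, illustrative PyTorch rendering of such a DACT-style accumulator, not the authors' implementation; the module names, the use of the [CLS] position, and the exact update rule are assumptions made for this example.

```python
# Minimal, illustrative sketch of a DACT-style adaptive-computation wrapper
# around a stack of Transformer encoder blocks. NOT the authors' code:
# module names, the [CLS]-based heads, and the exact update rule are
# assumptions made for the example.
import torch
import torch.nn as nn


class DACTHead(nn.Module):
    """Per-block intermediate classifier plus a sigmoid halting unit."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)
        self.halting = nn.Linear(hidden_size, 1)

    def forward(self, cls_state: torch.Tensor):
        y = torch.softmax(self.classifier(cls_state), dim=-1)  # intermediate prediction
        h = torch.sigmoid(self.halting(cls_state))             # probability of continuing
        return y, h


class DACTBertSketch(nn.Module):
    """Soft mixture of per-block predictions with an early-exit interpretation."""

    def __init__(self, encoder_blocks, hidden_size: int, num_labels: int):
        super().__init__()
        self.blocks = nn.ModuleList(encoder_blocks)
        self.heads = nn.ModuleList(
            [DACTHead(hidden_size, num_labels) for _ in encoder_blocks]
        )
        self.num_labels = num_labels

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        batch = hidden_states.size(0)
        a = hidden_states.new_zeros(batch, self.num_labels)  # accumulated prediction
        p = hidden_states.new_ones(batch, 1)                 # mass still "alive"
        for block, head in zip(self.blocks, self.heads):
            hidden_states = block(hidden_states)
            y, h = head(hidden_states[:, 0])  # read the [CLS] position
            # Differentiable accumulation: later blocks contribute less as p decays,
            # so the output is a mixture of intermediate representations.
            a = y * p + a * (1.0 - p)
            p = p * h
            # At inference one can break out of the loop once the remaining mass p
            # is too small to change the argmax of a (check omitted here).
        return a
```

In this reading, the accumulated prediction is fully differentiable during training, while at inference the remaining blocks can be skipped as soon as their maximum possible contribution can no longer change the top-scoring label; any computation-penalizing regularizer used during training is likewise omitted from the sketch.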