BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. arXiv:1810.04805 [cs], May 2019.
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
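
The fine-tuning recipe described in the abstract (a pre-trained bidirectional encoder plus one additional output layer) can be illustrated with a minimal sketch. The code below is not the authors' original TensorFlow implementation; it assumes the Hugging Face transformers and PyTorch packages and the publicly available bert-base-uncased checkpoint, and it adds a single linear classification layer on top of the [CLS] token representation.

# Minimal sketch (assumption: Hugging Face transformers + PyTorch, not the paper's own code):
# fine-tune a pre-trained BERT encoder for sequence classification with one added output layer.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class BertClassifier(nn.Module):
    def __init__(self, num_labels: int, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)   # pre-trained bidirectional encoder
        hidden = self.encoder.config.hidden_size               # 768 for BERT-base
        self.classifier = nn.Linear(hidden, num_labels)        # the single additional output layer

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                      # representation of the [CLS] token
        return self.classifier(cls)                            # task logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertClassifier(num_labels=2)
batch = tokenizer(["BERT is conceptually simple."], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])    # shape: (1, 2)

During fine-tuning, all encoder parameters and the new classification layer are trained jointly on the labeled downstream task, which is what allows the same pre-trained model to serve tasks such as question answering and language inference without task-specific architecture changes.
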
@article{devlinBERTPretrainingDeep2019,
 title = {BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
 shorttitle = {BERT},
 author = {Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
 year = {2019},
 month = {5},
 journal = {arXiv:1810.04805 [cs]},
 url = {http://arxiv.org/abs/1810.04805},
 urldate = {2022-03-27},
 keywords = {Computer Science - Computation and Language},
 note = {arXiv: 1810.04805},
 abstract = {We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5\% (7.7\% point absolute improvement), MultiNLI accuracy to 86.7\% (4.6\% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).}
}
