Character-Level Language Modeling with Deeper Self-Attention. Al-Rfou, R., Choe, D., Constant, N., Guo, M., & Jones, L. arXiv:1808.04444 [cs, stat], August 2018.
LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.
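A minimal sketch of the auxiliary-loss idea summarized in the abstract, assuming a PyTorch implementation: every transformer layer's hidden states are projected through a shared head and trained against the same next-character targets at every sequence position, so losses from intermediate layers and intermediate positions supplement the final-layer loss. This is an illustrative sketch, not the authors' code; the module and parameter names (CharTransformer, aux_weight, the layer/width defaults) are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CharTransformer(nn.Module):
    """Character-level transformer LM with per-layer auxiliary losses (sketch)."""
    def __init__(self, vocab=256, d_model=512, n_layers=8, n_heads=8, ctx=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(ctx, d_model)
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                       batch_first=True, norm_first=True)
            for _ in range(n_layers)
        ])
        self.head = nn.Linear(d_model, vocab)  # shared prediction head

    def forward(self, x, targets=None, aux_weight=0.5):
        # x, targets: (batch, time) integer character ids over a fixed context
        T = x.size(1)
        h = self.embed(x) + self.pos(torch.arange(T, device=x.device))
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(x.device)
        per_layer_losses = []
        for layer in self.layers:
            h = layer(h, src_mask=mask)
            if targets is not None:
                # Auxiliary prediction from this intermediate layer, with a
                # loss at every sequence position (not only the final one).
                logits = self.head(h)
                per_layer_losses.append(F.cross_entropy(
                    logits.reshape(-1, logits.size(-1)), targets.reshape(-1)))
        logits = self.head(h)
        if targets is None:
            return logits
        # Final-layer loss plus down-weighted auxiliary losses from earlier layers.
        loss = per_layer_losses[-1] + aux_weight * sum(per_layer_losses[:-1])
        return logits, loss

# Usage sketch: predict the next character at every position of a fixed window.
model = CharTransformer()
x = torch.randint(0, 256, (2, 128))   # toy input characters
y = torch.roll(x, shifts=-1, dims=1)  # toy next-character targets
logits, loss = model(x, y)
loss.backward()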
@article{al-rfou_character-level_2018,
	title = {Character-{Level} {Language} {Modeling} with {Deeper} {Self}-{Attention}},
	url = {http://arxiv.org/abs/1808.04444},
	abstract = {LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.},
	urldate = {2019-10-22},
	journal = {arXiv:1808.04444 [cs, stat]},
	author = {Al-Rfou, Rami and Choe, Dokook and Constant, Noah and Guo, Mandy and Jones, Llion},
	month = aug,
	year = {2018},
	note = {arXiv: 1808.04444},
	keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning, Statistics - Machine Learning},
}
