Evaluating Sequence-to-Sequence Models for Handwritten Text Recognition. Michael, J., Labahn, R., Grüning, T., & Zöllner, J. arXiv:1903.07377 [cs], July 2019.
Encoder-decoder models have become an effective approach for sequence learning tasks like machine translation, image captioning and speech recognition, but have yet to show competitive results for handwritten text recognition. To this end, we propose an attention-based sequence-to-sequence model. It combines a convolutional neural network as a generic feature extractor with a recurrent neural network to encode both the visual information, as well as the temporal context between characters in the input image, and uses a separate recurrent neural network to decode the actual character sequence. We make experimental comparisons between various attention mechanisms and positional encodings, in order to find an appropriate alignment between the input and output sequence. The model can be trained end-to-end and the optional integration of a hybrid loss allows the encoder to retain an interpretable and usable output, if desired. We achieve competitive results on the IAM and ICFHR2016 READ data sets compared to the state-of-the-art without the use of a language model, and we significantly improve over any recent sequence-to-sequence approaches.
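The abstract describes aligning encoder features (CNN + recurrent encoder over the text-line image) with decoder steps via an attention mechanism. As a minimal sketch of one common choice, additive (Bahdanau-style) attention, the snippet below scores each encoder position against the current decoder state and returns a softmax-weighted context vector. All names and sizes are illustrative assumptions, not details taken from the paper, which compares several attention variants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: T horizontal positions in the feature map, H hidden dim.
T, H = 12, 8

# Stand-in for the CNN+RNN encoder output: one feature vector per position.
encoder_states = rng.normal(size=(T, H))

# Additive attention parameters (randomly initialized for the sketch).
W_enc = rng.normal(size=(H, H))
W_dec = rng.normal(size=(H, H))
v = rng.normal(size=(H,))

def attend(decoder_state):
    """Score every encoder position against the decoder state,
    normalize with a softmax, and return the weighted context vector."""
    scores = np.tanh(encoder_states @ W_enc + decoder_state @ W_dec) @ v
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    context = weights @ encoder_states        # (H,) weighted sum of features
    return context, weights

context, weights = attend(rng.normal(size=(H,)))
print(weights.shape, np.isclose(weights.sum(), 1.0))  # (12,) True
```

At each decoder step, the context vector would be fed (together with the previous character embedding) into the decoder RNN that emits the next character; the attention weights give the input/output alignment the paper evaluates.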
@article{michael_evaluating_2019,
	title = {Evaluating {Sequence}-to-{Sequence} {Models} for {Handwritten} {Text} {Recognition}},
	url = {http://arxiv.org/abs/1903.07377},
	urldate = {2022-01-07},
	journal = {arXiv:1903.07377 [cs]},
	author = {Michael, Johannes and Labahn, Roger and Grüning, Tobias and Zöllner, Jochen},
	month = jul,
	year = {2019},
	note = {arXiv: 1903.07377},
	keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning},
}
