MaskOCR: Text Recognition with Masked Encoder-Decoder Pretraining. Lyu, P., Zhang, C., Liu, S., Qiao, M., Xu, Y., Wu, L., Yao, K., Han, J., Ding, E., & Wang, J. arXiv, 2022. Version 1.
In this paper, we present a model pretraining technique, named MaskOCR, for text recognition. Our text recognition architecture is an encoder-decoder transformer: the encoder extracts patch-level representations, and the decoder recognizes the text from those representations. Our approach pretrains the encoder and the decoder sequentially. (i) We pretrain the encoder in a self-supervised manner over a large set of unlabeled real text images, adopting the masked image modeling approach, which has proven effective for general images, so that the learned representations capture semantics. (ii) We pretrain the decoder over a large set of synthesized text images in a supervised manner, and enhance its language modeling capability by randomly masking some character-occupied patches of the text image input to the encoder, and accordingly the corresponding representations input to the decoder. Experiments show that the proposed MaskOCR approach achieves superior results on benchmark datasets covering both Chinese and English text images.
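To make the encoder-pretraining step concrete, below is a minimal PyTorch sketch of one masked-image-modeling step on text-line patches, in the SimMIM style of replacing masked patch tokens with a learned mask token and reconstructing only the masked pixels. All names, layer sizes, the masking ratio, and the pixel-regression loss are illustrative assumptions, not the authors' implementation.

# Minimal, illustrative sketch of masked patch pretraining on text-line
# images. Hyperparameters and module shapes are assumptions for clarity,
# not values from the MaskOCR paper.
import torch
import torch.nn as nn

PATCH, DIM, MASK_RATIO = 8, 128, 0.6  # hypothetical hyperparameters

class TinyMaskedEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(PATCH * PATCH * 3, DIM)   # flattened patch -> token
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True), 2)
        self.decode = nn.Linear(DIM, PATCH * PATCH * 3)  # token -> reconstructed pixels
        self.mask_token = nn.Parameter(torch.zeros(1, 1, DIM))

    def forward(self, patches):              # patches: (B, N, PATCH*PATCH*3)
        B, N, _ = patches.shape
        tokens = self.embed(patches)
        # Randomly select patches to mask and swap in the learned mask token.
        mask = torch.rand(B, N, device=patches.device) < MASK_RATIO
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand(B, N, DIM), tokens)
        recon = self.decode(self.encoder(tokens))
        # Reconstruction loss computed only on the masked patches.
        return ((recon - patches) ** 2)[mask].mean()

# Usage: one pretraining step on random 32x128 "text line" crops.
imgs = torch.randn(4, 3, 32, 128)
patches = imgs.unfold(2, PATCH, PATCH).unfold(3, PATCH, PATCH)   # (B,3,4,16,8,8)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(4, -1, 3 * PATCH * PATCH)
loss = TinyMaskedEncoder()(patches)
loss.backward()

The decoder-pretraining stage described in the abstract would reuse the same patch-masking idea, but supervise a text decoder with character labels from synthetic images instead of regressing pixels.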
@misc{lyu_maskocr_2022,
	title = {{MaskOCR}: {Text} {Recognition} with {Masked} {Encoder}-{Decoder} {Pretraining}},
	copyright = {arXiv.org perpetual, non-exclusive license},
	shorttitle = {{MaskOCR}},
	url = {https://arxiv.org/abs/2206.00311},
	doi = {10.48550/ARXIV.2206.00311},
	abstract = {In this paper, we present a model pretraining technique, named MaskOCR, for text recognition. Our text recognition architecture is an encoder-decoder transformer: the encoder extracts patch-level representations, and the decoder recognizes the text from those representations. Our approach pretrains the encoder and the decoder sequentially. (i) We pretrain the encoder in a self-supervised manner over a large set of unlabeled real text images, adopting the masked image modeling approach, which has proven effective for general images, so that the learned representations capture semantics. (ii) We pretrain the decoder over a large set of synthesized text images in a supervised manner, and enhance its language modeling capability by randomly masking some character-occupied patches of the text image input to the encoder, and accordingly the corresponding representations input to the decoder. Experiments show that the proposed MaskOCR approach achieves superior results on benchmark datasets covering both Chinese and English text images.},
	urldate = {2023-05-11},
	author = {Lyu, Pengyuan and Zhang, Chengquan and Liu, Shanshan and Qiao, Meina and Xu, Yangliu and Wu, Liang and Yao, Kun and Han, Junyu and Ding, Errui and Wang, Jingdong},
	year = {2022},
	note = {Publisher: arXiv, Version Number: 1},
	keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
}
