Deduplication of Scholarly Documents using Locality Sensitive Hashing and Word Embeddings. Gyawali, B., Anastasiou, L., & Knoth, P. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 901–910, Marseille, France, 2020. European Language Resources Association.
Deduplication is the task of identifying near and exact duplicate data items in a collection. In this paper, we present a novel method for deduplication of scholarly documents. We develop a hybrid model which uses structural similarity (locality sensitive hashing) and meaning representation (word embeddings) of document texts to determine (near) duplicates. Our collection constitutes a subset of multidisciplinary scholarly documents aggregated from research repositories. We identify several issues causing data inaccuracies in such collections and motivate the need for deduplication. In lack of existing dataset suitable for study of deduplication of scholarly documents, we create a ground truth dataset of 100K scholarly documents and conduct a series of experiments to empirically establish optimal values for the parameters of our deduplication method. Experimental evaluation shows that our method achieves a macro F1-score of 0.90. We productionise our method as a publicly accessible web API service serving deduplication of scholarly documents in real time.
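The structural-similarity half of the hybrid model described above can be illustrated with a minimal MinHash/LSH sketch. This is not the authors' implementation; it is a generic, self-contained example of the underlying technique, with shingle size, signature length, and band count chosen arbitrarily for illustration.

```python
import hashlib

def shingles(text, k=5):
    # Character k-grams capture structural (surface) similarity of a text.
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def minhash(shingle_set, num_hashes=64):
    # One signature slot per hash function: keep the minimum hash over all shingles.
    # The fraction of matching slots estimates the Jaccard similarity of the sets.
    return [
        min(int(hashlib.md5((str(seed) + s).encode()).hexdigest(), 16)
            for s in shingle_set)
        for seed in range(num_hashes)
    ]

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def lsh_bands(sig, bands=16):
    # LSH banding: documents sharing any band become candidate duplicate pairs,
    # so only candidates need the full similarity check.
    rows = len(sig) // bands
    return {tuple(sig[i * rows:(i + 1) * rows]) for i in range(bands)}

doc_a = "Deduplication is the task of identifying near and exact duplicates."
doc_b = "Deduplication is the task of finding near and exact duplicates."
doc_c = "Word embeddings map tokens to dense vectors of real numbers."

sig_a, sig_b, sig_c = (minhash(shingles(d)) for d in (doc_a, doc_b, doc_c))
print(estimated_jaccard(sig_a, sig_b))  # high: near-duplicates
print(estimated_jaccard(sig_a, sig_c))  # low: unrelated texts
print(bool(lsh_bands(sig_a) & lsh_bands(sig_b)))  # near-duplicates share a band
```

In the paper's hybrid setup, candidate pairs surviving a stage like this would additionally be compared on meaning (word-embedding similarity); that semantic half is omitted here.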
@inproceedings{gyawali_deduplication_2020,
	address = {Marseille, France},
	title = {Deduplication of {Scholarly} {Documents} using {Locality} {Sensitive} {Hashing} and {Word} {Embeddings}},
	isbn = {979-10-95546-34-4},
	url = {https://aclanthology.org/2020.lrec-1.113},
	abstract = {Deduplication is the task of identifying near and exact duplicate data items in a collection. In this paper, we present a novel method for deduplication of scholarly documents. We develop a hybrid model which uses structural similarity (locality sensitive hashing) and meaning representation (word embeddings) of document texts to determine (near) duplicates. Our collection constitutes a subset of multidisciplinary scholarly documents aggregated from research repositories. We identify several issues causing data inaccuracies in such collections and motivate the need for deduplication. In lack of existing dataset suitable for study of deduplication of scholarly documents, we create a ground truth dataset of \$100K\$ scholarly documents and conduct a series of experiments to empirically establish optimal values for the parameters of our deduplication method. Experimental evaluation shows that our method achieves a macro F1-score of 0.90. We productionise our method as a publicly accessible web API service serving deduplication of scholarly documents in real time.},
	language = {English},
	booktitle = {Proceedings of the 12th {Language} {Resources} and {Evaluation} {Conference}},
	publisher = {European Language Resources Association},
	author = {Gyawali, Bikash and Anastasiou, Lucas and Knoth, Petr},
	year = {2020},
	pages = {901--910},
}