Distributional Semantics for Neo-Latin. Bloem, J., Parisi, M. C., Reynaert, M., Oortwijn, Y., & Betti, A. In Proceedings of LT4HALA 2020 - 1st Workshop on Language Technologies for Historical and Ancient Languages, pages 84–93, Marseille, 2020. European Language Resources Association (ELRA).
We address the problem of creating and evaluating quality Neo-Latin word embeddings for the purpose of philosophical research, adapting the Nonce2Vec tool to learn embeddings from Neo-Latin sentences. This distributional semantic modeling tool can learn from tiny data incrementally, using a larger background corpus for initialization. We conduct two evaluation tasks: definitional learning of Latin Wikipedia terms, and learning consistent embeddings from 18th century Neo-Latin sentences pertaining to the concept of mathematical method. Our results show that consistent Neo-Latin word embeddings can be learned from this type of data. While our evaluation results are promising, they do not reveal to what extent the learned models match domain expert knowledge of our Neo-Latin texts. Therefore, we propose an additional evaluation method, grounded in expert-annotated data, that would assess whether learned representations are conceptually sound in relation to the domain of study.
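The abstract's core idea, initializing a model on a larger background corpus and then updating it incrementally on a handful of target sentences, can be illustrated with a generic gensim sketch. This is not the authors' Nonce2Vec pipeline (which uses specialized learning-rate handling for novel words); the example corpora, Latin tokens, and hyperparameters below are illustrative assumptions only.

```python
# Minimal sketch (not the paper's Nonce2Vec pipeline): initialize word2vec
# on a background corpus, then update it incrementally on a tiny batch of
# target sentences. Corpus contents and hyperparameters are assumptions.
from gensim.models import Word2Vec

# Hypothetical pre-tokenized background corpus (e.g. a large Latin corpus).
background_sentences = [
    ["gallia", "est", "omnis", "divisa", "in", "partes", "tres"],
    ["mathematica", "est", "scientia", "quantitatis"],
    # ... many more sentences ...
]

# Step 1: train a background model used for initialization.
model = Word2Vec(
    sentences=background_sentences,
    vector_size=100,
    window=5,
    min_count=1,
    epochs=5,
)

# Hypothetical tiny batch of Neo-Latin sentences about mathematical method.
target_sentences = [
    ["methodus", "mathematica", "demonstrationibus", "utitur"],
    ["methodus", "ex", "definitionibus", "et", "axiomatibus", "procedit"],
]

# Step 2: extend the vocabulary and update the existing vectors incrementally.
model.build_vocab(target_sentences, update=True)
model.train(
    target_sentences,
    total_examples=len(target_sentences),
    epochs=model.epochs,
)

# Inspect the learned representation of a target term.
print(model.wv.most_similar("methodus", topn=5))
```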
@inproceedings{bloem_distributional_2020,
	address = {Marseille},
	title = {Distributional {Semantics} for {Neo}-{Latin}},
	url = {https://www.aclweb.org/anthology/2020.lt4hala-1.13},
	abstract = {We address the problem of creating and evaluating quality Neo-Latin word embeddings for the purpose of philosophical research, adapting the Nonce2Vec tool to learn embeddings from Neo-Latin sentences. This distributional semantic modeling tool can learn from tiny data incrementally, using a larger background corpus for initialization. We conduct two evaluation tasks: definitional learning of Latin Wikipedia terms, and learning consistent embeddings from 18th century Neo-Latin sentences pertaining to the concept of mathematical method. Our results show that consistent Neo-Latin word embeddings can be learned from this type of data. While our evaluation results are promising, they do not reveal to what extent the learned models match domain expert knowledge of our Neo-Latin texts. Therefore, we propose an additional evaluation method, grounded in expert-annotated data, that would assess whether learned representations are conceptually sound in relation to the domain of study.},
	booktitle = {Proceedings of {LT4HALA} 2020 - 1st {Workshop} on {Language} {Technologies} for {Historical} and {Ancient} {Languages}},
	publisher = {European Language Resources Association (ELRA)},
	author = {Bloem, Jelke and Parisi, Maria Chiara and Reynaert, Martin and Oortwijn, Yvette and Betti, Arianna},
	year = {2020},
	pages = {84--93},
}
