Multilingual Twitter Sentiment Classification: The Role of Human Annotators. Mozetič, I., Grčar, M., & Smailović, J. PLoS ONE, 11(5):e0155036, 2016.
What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered.
@article{Mozetic2016,
abstract = {What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered.},
author = {Mozeti{\v{c}}, Igor and Gr{\v{c}}ar, Miha and Smailovi{\'{c}}, Jasmina},
doi = {10.1371/journal.pone.0155036},
journal = {PLoS ONE},
keywords = {DOLFINS{\_}T2.3,DOLFINS{\_}T3.1,DOLFINS{\_}WP2,DOLFINS{\_}WP3},
number = {5},
pages = {e0155036},
title = {{Multilingual Twitter Sentiment Classification: The Role of Human Annotators}},
url = {http://dx.doi.org/10.1371/journal.pone.0155036},
volume = {11},
year = {2016}
}
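
The abstract's central comparison is between a model's performance and the agreement reached by human annotators on the same labels. Below is a minimal sketch, not taken from the paper, of how such a comparison might be set up in Python. Cohen's weighted kappa is used here as one common agreement measure (the paper may rely on other measures, e.g. Krippendorff's alpha), and the labels are purely illustrative.

```python
# Minimal illustrative sketch (not from the paper): compare inter-annotator
# agreement with model-vs-annotator agreement on ordered sentiment labels.
from sklearn.metrics import cohen_kappa_score

# Hypothetical sentiment labels (-1 = negative, 0 = neutral, 1 = positive)
annotator_a = [1, 0, -1, 0, 1, 1, -1, 0]
annotator_b = [1, 0, -1, 1, 1, 0, -1, 0]
model_preds = [1, 0,  0, 1, 1, 0, -1, 0]

# Linear weights treat the classes as ordered (negative < neutral < positive),
# so disagreements between adjacent classes are penalized less than
# negative/positive confusions.
inter_annotator = cohen_kappa_score(annotator_a, annotator_b, weights="linear")
model_vs_human = cohen_kappa_score(annotator_a, model_preds, weights="linear")

print(f"inter-annotator agreement (weighted kappa): {inter_annotator:.3f}")
print(f"model vs. annotator A (weighted kappa):     {model_vs_human:.3f}")
```

Read against the abstract, the claim that "model performance approaches the inter-annotator agreement" corresponds to the second score approaching the first as the training set grows, with the inter-annotator figure acting as a practical upper bound.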
