Multitask Semi-Supervised Learning for Class-Imbalanced Discourse Classification. Spangher, A., May, J., Shiang, S., & Deng, L. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 498–517, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics.
As labeling schemas evolve over time, small differences can render datasets following older schemas unusable. This prevents researchers from building on top of previous annotation work and results in the existence, in discourse learning in particular, of many small class-imbalanced datasets. In this work, we show that a multitask learning approach can combine discourse datasets from similar and diverse domains to improve discourse classification. We show an improvement of 4.9% Micro F1-score over current state-of-the-art benchmarks on the NewsDiscourse dataset, one of the largest discourse datasets recently published, due in part to label correlations across tasks, which improve performance for underrepresented classes. We also offer an extensive review of additional techniques proposed to address resource-poor problems in NLP, and show that none of these approaches can improve classification accuracy in our setting.
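
For context, the multitask approach the abstract describes, combining discourse datasets from different schemas by sharing an encoder across tasks while keeping per-dataset classification heads, might look roughly like the following PyTorch sketch. The class name, task names, label counts, and encoder stub below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class MultitaskDiscourseClassifier(nn.Module):
    # Hypothetical sketch, not the paper's code: one shared encoder,
    # one classification head per discourse dataset/task.
    def __init__(self, encoder_dim, num_labels_per_task):
        super().__init__()
        # Stand-in shared encoder; in practice a pretrained transformer
        # producing sentence embeddings would sit here.
        self.encoder = nn.Sequential(
            nn.Linear(encoder_dim, encoder_dim),
            nn.ReLU(),
        )
        # A separate linear head per task, each with its own label space,
        # so datasets with incompatible schemas can still share the encoder.
        self.heads = nn.ModuleDict({
            task: nn.Linear(encoder_dim, n)
            for task, n in num_labels_per_task.items()
        })

    def forward(self, sentence_embeddings, task):
        shared = self.encoder(sentence_embeddings)
        return self.heads[task](shared)

# Usage: alternate batches across tasks so the shared encoder learns
# from every dataset's labels (task names and sizes are made up).
model = MultitaskDiscourseClassifier(
    encoder_dim=768,
    num_labels_per_task={"news_discourse": 9, "other_discourse": 18},
)
loss_fn = nn.CrossEntropyLoss()
embeddings = torch.randn(4, 768)        # placeholder sentence embeddings
labels = torch.randint(0, 9, (4,))
loss = loss_fn(model(embeddings, task="news_discourse"), labels)
loss.backward()

Because the heads are independent, label correlations across tasks can only flow through the shared encoder, which is consistent with the abstract's observation that cross-task correlations help underrepresented classes.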
@inproceedings{spangher-etal-2021-multitask,
    title = "Multitask Semi-Supervised Learning for Class-Imbalanced Discourse Classification",
    author = "Spangher, Alexander  and
      May, Jonathan  and
      Shiang, Sz-Rung  and
      Deng, Lingjia",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.40",
    pages = "498--517",
    abstract = "As labeling schemas evolve over time, small differences can render datasets following older schemas unusable. This prevents researchers from building on top of previous annotation work and results in the existence, in discourse learning in particular, of many small class-imbalanced datasets. In this work, we show that a multitask learning approach can combine discourse datasets from similar and diverse domains to improve discourse classification. We show an improvement of 4.9{\%} Micro F1-score over current state-of-the-art benchmarks on the \textit{NewsDiscourse} dataset, one of the largest discourse datasets recently published, due in part to label correlations across tasks, which improve performance for underrepresented classes. We also offer an extensive review of additional techniques proposed to address resource-poor problems in NLP, and show that none of these approaches can improve classification accuracy in our setting.",
}
