Opinion Mining with Deep Recurrent Neural Networks. Irsoy, O. & Cardie, C. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 720–728, Doha, Qatar, 2014. Association for Computational Linguistics.
Recurrent neural networks (RNNs) are connectionist models of sequential data that are naturally applicable to the analysis of natural language. Recently, “depth in space” — as an orthogonal notion to “depth in time” — in RNNs has been investigated by stacking multiple layers of RNNs and shown empirically to bring a temporal hierarchy to the architecture. In this work we apply these deep RNNs to the task of opinion expression extraction formulated as a token-level sequence-labeling task. Experimental results show that deep, narrow RNNs outperform traditional shallow, wide RNNs with the same number of parameters. Furthermore, our approach outperforms previous CRF-based baselines, including the state-of-the-art semi-Markov CRF model, and does so without access to the powerful opinion lexicons and syntactic features relied upon by the semi-CRF, as well as without the standard layer-by-layer pre-training typically required of RNN architectures.
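To make the architecture described in the abstract concrete, here is a minimal PyTorch sketch of the general idea: a deep ("stacked") bidirectional Elman RNN that emits one label per token, as in BIO-style opinion expression tagging. The hyperparameters, the 5-tag set (e.g., O / B-DSE / I-DSE / B-ESE / I-ESE), and the training snippet are illustrative assumptions for exposition, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DeepRNNTagger(nn.Module):
    """Stacked ("depth in space") Elman RNN emitting one label per token."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=25,
                 num_layers=3, num_tags=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # num_layers > 1 stacks RNN layers: the hidden-state sequence of
        # layer k becomes the input sequence of layer k+1. A deep, narrow
        # setting (small hidden_dim, several layers) mirrors the paper's
        # finding that depth beats width at equal parameter count.
        self.rnn = nn.RNN(embed_dim, hidden_dim, num_layers=num_layers,
                          batch_first=True, bidirectional=True)
        # Linear head maps each token's top-layer state to tag scores.
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):          # (batch, seq_len)
        x = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        h, _ = self.rnn(x)                 # (batch, seq_len, 2*hidden_dim)
        return self.out(h)                 # per-token tag logits

# Usage: score a toy batch and train with token-level cross-entropy.
model = DeepRNNTagger(vocab_size=10_000)
tokens = torch.randint(0, 10_000, (2, 12))   # two sentences, 12 tokens each
logits = model(tokens)                       # (2, 12, num_tags)
gold = torch.randint(0, 5, (2, 12))          # dummy gold tag sequence
loss = nn.CrossEntropyLoss()(logits.flatten(0, 1), gold.flatten())
```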
@inproceedings{irsoy_opinion_2014,
	address = {Doha, Qatar},
	title = {Opinion {Mining} with {Deep} {Recurrent} {Neural} {Networks}},
	url = {http://aclweb.org/anthology/D14-1080},
	doi = {10.3115/v1/D14-1080},
	abstract = {Recurrent neural networks (RNNs) are connectionist models of sequential data that are naturally applicable to the analysis of natural language. Recently, “depth in space” — as an orthogonal notion to “depth in time” — in RNNs has been investigated by stacking multiple layers of RNNs and shown empirically to bring a temporal hierarchy to the architecture. In this work we apply these deep RNNs to the task of opinion expression extraction formulated as a token-level sequence-labeling task. Experimental results show that deep, narrow RNNs outperform traditional shallow, wide RNNs with the same number of parameters. Furthermore, our approach outperforms previous CRF-based baselines, including the state-of-the-art semi-Markov CRF model, and does so without access to the powerful opinion lexicons and syntactic features relied upon by the semi-CRF, as well as without the standard layer-by-layer pre-training typically required of RNN architectures.},
	language = {en},
	urldate = {2024-01-23},
	booktitle = {Proceedings of the 2014 {Conference} on {Empirical} {Methods} in {Natural} {Language} {Processing} ({EMNLP})},
	publisher = {Association for Computational Linguistics},
	author = {Irsoy, Ozan and Cardie, Claire},
	year = {2014},
	pages = {720--728},
}