Neural Legal Judgment Prediction in English. Chalkidis, I., Androutsopoulos, I., & Aletras, N. In Korhonen, A., Traum, D., & Màrquez, L., editors, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4317–4323, Florence, Italy, July 2019. Association for Computational Linguistics.
Legal judgment prediction is the task of automatically predicting the outcome of a court case, given a text describing the case's facts. Previous work on using neural models for this task has focused on Chinese; only feature-based models (e.g., using bags of words and topics) have been considered in English. We release a new English legal judgment prediction dataset, containing cases from the European Court of Human Rights. We evaluate a broad variety of neural models on the new dataset, establishing strong baselines that surpass previous feature-based models in three tasks: (1) binary violation classification; (2) multi-label classification; (3) case importance prediction. We also explore if models are biased towards demographic information via data anonymization. As a side-product, we propose a hierarchical version of BERT, which bypasses BERT's length limitation.
@inproceedings{chalkidisNeuralLegalJudgment2019a,
	address = {Florence, Italy},
	title = {Neural {Legal} {Judgment} {Prediction} in {English}},
	url = {https://aclanthology.org/P19-1424},
	doi = {10.18653/v1/P19-1424},
	abstract = {Legal judgment prediction is the task of automatically predicting the outcome of a court case, given a text describing the case's facts. Previous work on using neural models for this task has focused on Chinese; only feature-based models (e.g., using bags of words and topics) have been considered in English. We release a new English legal judgment prediction dataset, containing cases from the European Court of Human Rights. We evaluate a broad variety of neural models on the new dataset, establishing strong baselines that surpass previous feature-based models in three tasks: (1) binary violation classification; (2) multi-label classification; (3) case importance prediction. We also explore if models are biased towards demographic information via data anonymization. As a side-product, we propose a hierarchical version of BERT, which bypasses BERT's length limitation.},
	urldate = {2024-07-29},
	booktitle = {Proceedings of the 57th {Annual} {Meeting} of the {Association} for {Computational} {Linguistics}},
	publisher = {Association for Computational Linguistics},
	author = {Chalkidis, Ilias and Androutsopoulos, Ion and Aletras, Nikolaos},
	editor = {Korhonen, Anna and Traum, David and Màrquez, Lluís},
	month = jul,
	year = {2019},
	pages = {4317--4323},
}