ODD: A Benchmark Dataset for the Natural Language Processing based Opioid Related Aberrant Behavior Detection. Kwon, S., Wang, X., Liu, W., Druhl, E., Sung, M. L., Reisman, J. I., Li, W., Kerns, R. D., Becker, W., & Yu, H. June 2024. arXiv:2307.02591 [cs].
Opioid related aberrant behaviors (ORABs) present novel risk factors for opioid overdose. This paper introduces ODD (ORAB Detection Dataset), a novel biomedical natural language processing benchmark dataset. ODD is an expert-annotated dataset designed to identify ORABs from patients' EHR notes and classify them into nine categories: 1) Confirmed Aberrant Behavior, 2) Suggested Aberrant Behavior, 3) Opioids, 4) Indication, 5) Diagnosed Opioid Dependency, 6) Benzodiazepines, 7) Medication Changes, 8) Central Nervous System-related, and 9) Social Determinants of Health. We explored two state-of-the-art natural language processing approaches (fine-tuning and prompt-tuning) to identify ORABs. Experimental results show that the prompt-tuning models outperformed the fine-tuning models in most categories, with especially large gains among the uncommon categories (Suggested Aberrant Behavior, Confirmed Aberrant Behavior, Diagnosed Opioid Dependency, and Medication Change). Although the best model achieved a macro-average area under the precision-recall curve of 88.17%, the uncommon classes still leave substantial room for performance improvement. ODD is publicly available.
@inproceedings{kwon_odd_2024,
	title = {{ODD}: {A} {Benchmark} {Dataset} for the {Natural} {Language} {Processing} based {Opioid} {Related} {Aberrant} {Behavior} {Detection}},
	shorttitle = {{ODD}},
	url = {http://arxiv.org/abs/2307.02591},
	doi = {10.48550/arXiv.2307.02591},
	abstract = {Opioid related aberrant behaviors (ORABs) present novel risk factors for opioid overdose. This paper introduces a novel biomedical natural language processing benchmark dataset named ODD, for ORAB Detection Dataset. ODD is an expert-annotated dataset designed to identify ORABs from patients' EHR notes and classify them into nine categories; 1) Confirmed Aberrant Behavior, 2) Suggested Aberrant Behavior, 3) Opioids, 4) Indication, 5) Diagnosed opioid dependency, 6) Benzodiazepines, 7) Medication Changes, 8) Central Nervous System-related, and 9) Social Determinants of Health. We explored two state-of-the-art natural language processing models (fine-tuning and prompt-tuning approaches) to identify ORAB. Experimental results show that the prompt-tuning models outperformed the fine-tuning models in most categories and the gains were especially higher among uncommon categories (Suggested Aberrant Behavior, Confirmed Aberrant Behaviors, Diagnosed Opioid Dependence, and Medication Change). Although the best model achieved the highest 88.17\% on macro average area under precision recall curve, uncommon classes still have a large room for performance improvement. ODD is publicly available.},
	urldate = {2024-05-21},
	publisher = {arXiv},
	author = {Kwon, Sunjae and Wang, Xun and Liu, Weisong and Druhl, Emily and Sung, Minhee L. and Reisman, Joel I. and Li, Wenjun and Kerns, Robert D. and Becker, William and Yu, Hong},
	month = jun,
	year = {2024},
	note = {arXiv:2307.02591 [cs]},
	keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language},
}
