Evaluation Ethics of LLMs in Legal Domain. Zhang, R., Li, H., Wu, Y., Ai, Q., Liu, Y., Zhang, M., & Ma, S. March 2024. arXiv:2403.11152 [cs]. In recent years, the use of large language models for natural language dialogue has gained momentum, leading to their widespread adoption across various domains. However, their universal competence in addressing challenges specific to specialized fields such as law remains a subject of scrutiny, and the incorporation of legal ethics into these models has been overlooked by researchers. We assert that rigorous ethics evaluation is essential to ensure the effective integration of large language models in legal domains, emphasizing the need to assess both domain-specific proficiency and domain-specific ethics. To address this, we propose a novel evaluation methodology that uses authentic legal cases to evaluate the fundamental language abilities, specialized legal knowledge, and legal robustness of large language models (LLMs). The findings from our comprehensive evaluation contribute significantly to the academic discourse surrounding the suitability and performance of large language models in legal domains.
@misc{zhangEvaluationEthicsLLMs2024,
title = {Evaluation {Ethics} of {LLMs} in {Legal} {Domain}},
url = {http://arxiv.org/abs/2403.11152},
doi = {10.48550/arXiv.2403.11152},
abstract = {In recent years, the use of large language models for natural language dialogue has gained momentum, leading to their widespread adoption across various domains. However, their universal competence in addressing challenges specific to specialized fields such as law remains a subject of scrutiny, and the incorporation of legal ethics into these models has been overlooked by researchers. We assert that rigorous ethics evaluation is essential to ensure the effective integration of large language models in legal domains, emphasizing the need to assess both domain-specific proficiency and domain-specific ethics. To address this, we propose a novel evaluation methodology that uses authentic legal cases to evaluate the fundamental language abilities, specialized legal knowledge, and legal robustness of large language models (LLMs). The findings from our comprehensive evaluation contribute significantly to the academic discourse surrounding the suitability and performance of large language models in legal domains.},
urldate = {2024-07-28},
publisher = {arXiv},
author = {Zhang, Ruizhe and Li, Haitao and Wu, Yueyue and Ai, Qingyao and Liu, Yiqun and Zhang, Min and Ma, Shaoping},
month = mar,
year = {2024},
note = {arXiv:2403.11152 [cs]},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language},
}