LalaEval: A Holistic Human Evaluation Framework for Domain-Specific Large Language Models. Sun, C., Lin, K., Wang, S., Wu, H., Fu, C., & Wang, Z. August, 2024. arXiv:2408.13338 [cs]
This paper introduces LalaEval, a holistic framework designed for the human evaluation of domain-specific large language models (LLMs). LalaEval proposes a comprehensive suite of end-to-end protocols covering five main components: domain specification, criteria establishment, benchmark dataset creation, construction of evaluation rubrics, and thorough analysis and interpretation of evaluation outcomes. This initiative aims to fill a crucial research gap by providing a systematic methodology for conducting standardized human evaluations within specific domains, a practice that, despite its widespread application, receives little coverage in the literature; because human evaluation is often criticized as unreliable due to subjective factors, standardized procedures adapted to the nuanced requirements of specific domains, or even individual organizations, are greatly needed. Furthermore, the paper demonstrates the framework's application within the logistics industry, presenting domain-specific evaluation benchmarks, datasets, and a comparative analysis of LLMs for logistics-domain use, highlighting the framework's capacity to elucidate performance differences and to guide model selection and development for domain-specific LLMs. Through real-world deployment, the paper underscores the framework's effectiveness in advancing the field of domain-specific LLM evaluation, thereby contributing significantly to the ongoing discussion of LLMs' practical utility and performance in domain-specific applications.
@misc{sun_lalaeval_2024,
	title = {{LalaEval}: {A} {Holistic} {Human} {Evaluation} {Framework} for {Domain}-{Specific} {Large} {Language} {Models}},
	shorttitle = {{LalaEval}},
	url = {http://arxiv.org/abs/2408.13338},
	doi = {10.48550/arXiv.2408.13338},
	abstract = {This paper introduces LalaEval, a holistic framework designed for the human evaluation of domain-specific large language models (LLMs). LalaEval proposes a comprehensive suite of end-to-end protocols covering five main components: domain specification, criteria establishment, benchmark dataset creation, construction of evaluation rubrics, and thorough analysis and interpretation of evaluation outcomes. This initiative aims to fill a crucial research gap by providing a systematic methodology for conducting standardized human evaluations within specific domains, a practice that, despite its widespread application, receives little coverage in the literature; because human evaluation is often criticized as unreliable due to subjective factors, standardized procedures adapted to the nuanced requirements of specific domains, or even individual organizations, are greatly needed. Furthermore, the paper demonstrates the framework's application within the logistics industry, presenting domain-specific evaluation benchmarks, datasets, and a comparative analysis of LLMs for logistics-domain use, highlighting the framework's capacity to elucidate performance differences and to guide model selection and development for domain-specific LLMs. Through real-world deployment, the paper underscores the framework's effectiveness in advancing the field of domain-specific LLM evaluation, thereby contributing significantly to the ongoing discussion of LLMs' practical utility and performance in domain-specific applications.},
	urldate = {2024-09-03},
	publisher = {arXiv},
	author = {Sun, Chongyan and Lin, Ken and Wang, Shiwei and Wu, Hulong and Fu, Chengfei and Wang, Zhen},
	month = aug,
	year = {2024},
	note = {arXiv:2408.13338 [cs]},
	keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
}
