LLMs meet Bloom's Taxonomy: A Cognitive View on Large Language Model Evaluations. Huber, T. & Niklaus, C. In Rambow, O., Wanner, L., Apidianaki, M., Al-Khalifa, H., Di Eugenio, B., & Schockaert, S., editors, Proceedings of the 31st International Conference on Computational Linguistics, pages 5211–5246, Abu Dhabi, UAE, January 2025. Association for Computational Linguistics.
Current evaluation approaches for Large Language Models (LLMs) lack a structured approach that reflects the underlying cognitive abilities required for solving the tasks. This hinders a thorough understanding of the current level of LLM capabilities. For instance, it is widely accepted that LLMs perform well in terms of grammar, but it is unclear in which specific cognitive areas they excel or struggle. This paper introduces a novel perspective on the evaluation of LLMs that leverages a hierarchical classification of tasks. Specifically, we explore the most widely used benchmarks for LLMs to systematically identify how well these existing evaluation methods cover the levels of Bloom's Taxonomy, a hierarchical framework for categorizing cognitive skills. This comprehensive analysis allows us to identify strengths and weaknesses in current LLM assessment strategies in terms of cognitive abilities, suggest directions for future benchmark development, and highlight potential avenues for LLM research. Our findings reveal that LLMs generally perform better on the lower end of Bloom's Taxonomy. Additionally, we find that there are significant gaps in the coverage of cognitive skills in the most commonly used benchmarks.
@inproceedings{huber_llms_2025,
	address = {Abu Dhabi, UAE},
	title = {{LLMs} meet {Bloom}'s {Taxonomy}: {A} {Cognitive} {View} on {Large} {Language} {Model} {Evaluations}},
	shorttitle = {{LLMs} meet {Bloom}'s {Taxonomy}},
	url = {https://aclanthology.org/2025.coling-main.350/},
	abstract = {Current evaluation approaches for Large Language Models (LLMs) lack a structured approach that reflects the underlying cognitive abilities required for solving the tasks. This hinders a thorough understanding of the current level of LLM capabilities. For instance, it is widely accepted that LLMs perform well in terms of grammar, but it is unclear in what specific cognitive areas they excel or struggle in. This paper introduces a novel perspective on the evaluation of LLMs that leverages a hierarchical classification of tasks. Specifically, we explore the most widely used benchmarks for LLMs to systematically identify how well these existing evaluation methods cover the levels of Bloom's Taxonomy, a hierarchical framework for categorizing cognitive skills. This comprehensive analysis allows us to identify strengths and weaknesses in current LLM assessment strategies in terms of cognitive abilities and suggest directions for both future benchmark development as well as highlight potential avenues for LLM research. Our findings reveal that LLMs generally perform better on the lower end of Bloom's Taxonomy. Additionally, we find that there are significant gaps in the coverage of cognitive skills in the most commonly used benchmarks.},
	urldate = {2025-01-28},
	booktitle = {Proceedings of the 31st {International} {Conference} on {Computational} {Linguistics}},
	publisher = {Association for Computational Linguistics},
	author = {Huber, Thomas and Niklaus, Christina},
	editor = {Rambow, Owen and Wanner, Leo and Apidianaki, Marianna and Al-Khalifa, Hend and Di Eugenio, Barbara and Schockaert, Steven},
	month = jan,
	year = {2025},
	keywords = {coling-25},
	pages = {5211--5246},
}