ChartMuseum: Testing Visual Reasoning Capabilities of Large Vision-Language Models. Tang, L., Kim, G., Zhao, X., Durrett, G., Lake, T., Ding, W., Yin, F., Singhal, P., Wadhwa, M., Liu, Z. L., Sprague, Z., Namuduri, R., Hu, B., Rodriguez, J. D., & Peng, P. May, 2025. arXiv:2505.13444 [cs]
Chart understanding presents a unique challenge for large vision-language models (LVLMs), as it requires the integration of sophisticated textual and visual reasoning capabilities. However, current LVLMs exhibit a notable imbalance between these skills, falling short on visual reasoning that is difficult to perform in text. We conduct a case study using a synthetic dataset solvable only through visual reasoning and show that model performance degrades significantly with increasing visual complexity, while human performance remains robust. We then introduce ChartMuseum, a new Chart Question Answering (QA) benchmark containing 1,162 expert-annotated questions spanning multiple reasoning types, curated from real-world charts across 184 sources, specifically built to evaluate complex visual and textual reasoning. Unlike prior chart understanding benchmarks, where frontier models perform similarly and near saturation, our benchmark exposes a substantial gap between model and human performance while effectively differentiating model capabilities: although humans achieve 93% accuracy, the best-performing model Gemini-2.5-Pro attains only 63.0%, and the leading open-source LVLM Qwen2.5-VL-72B-Instruct achieves only 38.5%. Moreover, on questions requiring primarily visual reasoning, all models experience a 35%-55% performance drop relative to their performance on text-reasoning-heavy questions. Lastly, our qualitative error analysis reveals specific categories of visual reasoning that are challenging for current LVLMs.
@misc{tang_chartmuseum_2025,
	title = {{ChartMuseum}: {Testing} {Visual} {Reasoning} {Capabilities} of {Large} {Vision}-{Language} {Models}},
	shorttitle = {{ChartMuseum}},
	url = {http://arxiv.org/abs/2505.13444},
	doi = {10.48550/arXiv.2505.13444},
	abstract = {Chart understanding presents a unique challenge for large vision-language models (LVLMs), as it requires the integration of sophisticated textual and visual reasoning capabilities. However, current LVLMs exhibit a notable imbalance between these skills, falling short on visual reasoning that is difficult to perform in text. We conduct a case study using a synthetic dataset solvable only through visual reasoning and show that model performance degrades significantly with increasing visual complexity, while human performance remains robust. We then introduce ChartMuseum, a new Chart Question Answering (QA) benchmark containing 1,162 expert-annotated questions spanning multiple reasoning types, curated from real-world charts across 184 sources, specifically built to evaluate complex visual and textual reasoning. Unlike prior chart understanding benchmarks, where frontier models perform similarly and near saturation, our benchmark exposes a substantial gap between model and human performance while effectively differentiating model capabilities: although humans achieve 93\% accuracy, the best-performing model Gemini-2.5-Pro attains only 63.0\%, and the leading open-source LVLM Qwen2.5-VL-72B-Instruct achieves only 38.5\%. Moreover, on questions requiring primarily visual reasoning, all models experience a 35\%-55\% performance drop relative to their performance on text-reasoning-heavy questions. Lastly, our qualitative error analysis reveals specific categories of visual reasoning that are challenging for current LVLMs.},
	language = {en},
	urldate = {2025-05-28},
	publisher = {arXiv},
	author = {Tang, Liyan and Kim, Grace and Zhao, Xinyu and Durrett, Greg and Lake, Thom and Ding, Wenxuan and Yin, Fangcong and Singhal, Prasann and Wadhwa, Manya and Liu, Zeyu Leo and Sprague, Zayne and Namuduri, Ramya and Hu, Bodun and Rodriguez, Juan Diego and Peng, Puyuan},
	month = may,
	year = {2025},
	note = {arXiv:2505.13444 [cs]},
	keywords = {Computer Science - Computation and Language, Computer Science - Computer Vision and Pattern Recognition},
}
