@misc{afroogh_beyond_2026,
	title = {Beyond {Explainable} {AI} ({XAI}): {An} {Overdue} {Paradigm} {Shift} and {Post}-{XAI} {Research} {Directions}},
	shorttitle = {Beyond {Explainable} {AI} ({XAI})},
	url = {http://arxiv.org/abs/2602.24176},
	doi = {10.48550/arXiv.2602.24176},
	abstract = {This study provides a cross-disciplinary examination of Explainable Artificial Intelligence (XAI) approaches—focusing on deep neural networks (DNNs) and large language models (LLMs)—and identifies empirical and conceptual limitations in current XAI. We discuss critical symptoms that stem from deeper root causes (i.e., two paradoxes, two conceptual confusions, and five false assumptions). These fundamental problems within the current XAI research field reveal three insights: experimentally, XAI exhibits significant flaws; conceptually, it is paradoxical; and pragmatically, further attempts to reform the paradoxical XAI might exacerbate its confusion—demanding fundamental shifts and new research directions. To move beyond XAI’s limitations, we propose a four-pronged synthesized paradigm shift toward reliable and certified AI development. These four components include: verification-focused Interactive AI (IAI) to establish scientific community protocols for certifying AI system performance rather than attempting post-hoc explanations, AI Epistemology for rigorous scientific foundations, User-Sensible AI to create context-aware systems tailored to specific user communities, and Model-Centered Interpretability for faithful technical analysis—together offering comprehensive post-XAI research directions.},
	language = {en},
	urldate = {2026-03-19},
	publisher = {arXiv},
	author = {Afroogh, Saleh and Ahmed, Syed Ishtiaque and Ahrweiler, Petra and Alvarez-Melis, David and Arief, Mansur Maturidi and Barakova, Emilia and Bargagli-Stoffi, Falco J. and Biyik, Erdem and Chen, Hanjie and Chen, Xiang 'Anthony' and Clements, Robert Alan and Crockett, Keeley and Dhurandhar, Amit and Dogan, Fethiye Irmak and Dollinger, Mollie and Eslami, Motahhare and Faisal, Aldo A. and Farahi, Arya and Pradier, Melanie F. and Gabriel, Saadia and Garcia-Olano, Diego and Ghassemi, Marzyeh and Ghosh, Shaona and Gunes, Hatice and Hajiramezanali, Ehsan and Haufe, Stefan and Huang, Biwei and Hwang, Angel and Islam, Md Tauhidul and Jiao, Junfeng and Karimi, Amir-Hossein and Kazeminasab, Saber and Kuzminykh, Anastasia and La Cava, William and Lim, Brian Y. and Liu, Xiaofeng and Mofrad, Mohammad R. K. and Parrish, Alicia and Perez-Ortiz, Maria and Raj, Shriti and Swayamdipta, Swabha and Talebi, Salmonn and Varshney, Kush R. and Vorvoreanu, Mihaela and Weng, Lily and Xiang, Alice and Xu, Yiming and Zhao, Ding and Zhao, Jieyu},
	month = mar,
	year = {2026},
	note = {arXiv:2602.24176 [cs]},
	keywords = {Computer Science - Computers and Society},
}
