Data Science with LLMs and Interpretable Models. Bordt, S., Lengerich, B., Nori, H., & Caruana, R. AAAI Explainable AI for Science, 2023.
Recent years have seen important advances in the building of interpretable models, machine learning models that are designed to be easily understood by humans. In this work, we show that large language models (LLMs) are remarkably good at working with interpretable models, too. In particular, we show that LLMs can describe, interpret, and debug Generalized Additive Models (GAMs). Combining the flexibility of LLMs with the breadth of statistical patterns accurately described by GAMs enables dataset summarization, question answering, and model critique. LLMs can also improve the interaction between domain experts and interpretable models, and generate hypotheses about the underlying phenomenon. We release TalkToEBM as an open-source LLM-GAM interface.
@inproceedings{bordt2024data,
    author = {Bordt, Sebastian and Lengerich, Ben and Nori, Harsha and Caruana, Rich},
    title = {Data Science with LLMs and Interpretable Models},
    booktitle = {AAAI Explainable AI for Science},
    year = {2023},
    informal_venue = {AAAI XAI4Sci},
    abstract = {Recent years have seen important advances in the building of interpretable models, machine learning models that are designed to be easily understood by humans. In this work, we show that large language models (LLMs) are remarkably good at working with interpretable models, too. In particular, we show that LLMs can describe, interpret, and debug Generalized Additive Models (GAMs). Combining the flexibility of LLMs with the breadth of statistical patterns accurately described by GAMs enables dataset summarization, question answering, and model critique. LLMs can also improve the interaction between domain experts and interpretable models, and generate hypotheses about the underlying phenomenon. We release TalkToEBM as an open-source LLM-GAM interface.},
keywords = {Interpretable, LLMs}
}
