Chain-of-thought as explanations: decomposability, compressibility, comprehensibility of language in LLM. March 2025. Presented at a conference on LLMs and speech acts at the University of Bucharest.
@unpublished{noauthor_chain--thought_2025,
title = {Chain-of-thought as explanations: decomposability, compressibility, comprehensibility of language in {LLM}},
	abstract = {In this paper, I relate LLMs qua models to the models used in science to represent a target system. I take models to be predictive and explanatory, and I clarify what it means for a model to be explanatory, focusing on mechanistic models used in biology and the social sciences. I propose to open LLMs qua models to several interpretations.
What does an LLM represent? Models can represent multiple targets: LLMs may be models of data (what counts as the data of a model requires clarification), or they may model a human epistemic agent operating in a complex environment.
This is the second major distinction relevant to the question of speech acts in LLMs. If LLMs model data, then the explananda are the patterns in the data: predictable patterns that may be inaccessible to a human.
The third possibility is to take LLMs not as models but as epistemic agents in their own right. In this case, the speech-act question admits radically different answers.},
	keywords = {4. Applied ethics: machine ethics},
month = mar,
year = {2025},
	note = {Presented at a conference on LLMs and speech acts at the University of Bucharest},
}