A Framework for Explanation of Machine Learning Decisions. Brinton, C. IJCAI - Workshop on Explainable AI, 2017.
@article{Brinton2017,
title = {A Framework for Explanation of Machine Learning Decisions},
type = {article},
year = {2017},
pages = {1--6},
websites = {http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai/2. (Brinton XAI-17) A Framework for Explanation of Machine Learning Decisions.pdf},
id = {97ba58a0-155d-3b19-bd01-d3aeb4754a18},
created = {2018-04-22T18:22:54.139Z},
file_attached = {false},
profile_id = {2ed0fe69-06a2-3e8b-9bc9-5bdb197f1120},
group_id = {e795dbfa-5576-3499-9c01-6574f19bf7aa},
last_modified = {2018-12-14T12:16:32.377Z},
read = {true},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
citation_key = {Brinton2017},
notes = {Two techniques are presented for generating explanations of machine learning models. Both techniques are independent of the type of algorithm used in the machine learning model. Together they form a framework:<br/>- Explainable Principal Components Analysis (EPCA): computes vectors holding the input values that explain why a given classification was made.<br/>- Gray-Box Decision Characterization: the goal of this technique is to provide an explanation for a specific system output, not for the entire behavior model.<br/>Experiments were conducted to study the usefulness and applicability of the two techniques on machine learning decision models. Conclusions: the two techniques produce a useful framework for generating explanations across multiple domains.},
private_publication = {false},
abstract = {This paper presents two novel techniques to generate explanations of machine learning model results for use in advanced automation-human interaction. The first technique is "Explainable Principal Components Analysis," which creates a framework within a multi-dimensional problem space to support the explainability of model outputs. The second technique is the "Gray-Box Decision Characterization" approach, which probes the output of the machine learning model along the dimensions of the explainable framework. These two techniques are independent of the type of machine learning algorithm. Rather, the intent of these algorithms is to be applicable generally across any type of machine learning algorithm and any application domain of machine learning. The concept and computational steps of each technique are presented in the paper, along with results of experimental implementation and analysis.},
bibtype = {article},
author = {Brinton, Chris},
journal = {IJCAI - Workshop on Explainable AI}
}
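The abstract describes probing a model's output along the dimensions of an explainable (principal-component) framework. As a rough illustration only — this is not the paper's implementation, and the helper names (`pca_directions`, `probe_along_components`) and the toy linear model are invented for this sketch — the gray-box idea of perturbing an input along principal-component directions and observing the output change might look like:

```python
import numpy as np

def pca_directions(X):
    """Orthonormal principal-component directions of X (rows = samples),
    ordered by explained variance."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt  # each row is one direction

def probe_along_components(model, x, directions, eps=0.1):
    """Change in the model's output when x is nudged eps along each direction."""
    base = model(x)
    return np.array([model(x + eps * d) - base for d in directions])

# Toy example: data dominated by feature 0, and a linear "black-box" model
# whose decision score depends only on feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.2])
model = lambda v: float(v @ np.array([2.0, 0.0, 0.0]))

dirs = pca_directions(X)
sensitivities = probe_along_components(model, X[0], dirs)
# The direction with the largest |sensitivity| is the explainable dimension
# that most drives this particular decision.
```

The sketch treats the model strictly as a callable ("gray box"): only its outputs are inspected, matching the abstract's claim that the approach is independent of the underlying learning algorithm.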