Model Cards for Model Reporting. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. Proceedings of the Conference on Fairness, Accountability, and Transparency, January 2019, pages 220–229. doi: 10.1145/3287560.3287596

Abstract: Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related AI technology, increasing transparency into how well AI technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.
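The paper organizes a model card into nine sections: model details, intended use, factors, metrics, evaluation data, training data, quantitative analyses, ethical considerations, and caveats and recommendations. The sketch below renders that structure as a Python dataclass; the section names follow the paper, but the concrete field types and every example value are illustrative assumptions, not part of the paper's specification.

from dataclasses import dataclass

@dataclass
class ModelCard:
    """Sketch of the nine model card sections proposed in the paper.

    Section names follow the paper; the plain string/list/dict types
    are an illustrative choice, not the paper's specification.
    """
    model_details: str            # developer, version, model type, license
    intended_use: str             # primary use cases and out-of-scope uses
    factors: list                 # groups/conditions the evaluation is sliced by
    metrics: list                 # reported measures and decision thresholds
    evaluation_data: str          # datasets behind the reported numbers
    training_data: str            # training data, or why it is withheld
    quantitative_analyses: dict   # metric values per group and per intersection
    ethical_considerations: str
    caveats_and_recommendations: str

# Hypothetical instantiation, loosely modeled on the paper's smiling-face
# detector example; all values are placeholders, not the paper's card.
card = ModelCard(
    model_details="Smiling detection classifier, v1 (hypothetical)",
    intended_use="Research on face attribute detection; not for surveillance",
    factors=["sex", "age group", "sex x age group"],
    metrics=["false positive rate", "false negative rate at threshold 0.5"],
    evaluation_data="Public face-attribute benchmark, held-out split",
    training_data="Summarized as distributions over the listed factors",
    quantitative_analyses={("sex=f", "age=18-30"): {"FPR": None}},  # placeholder, no real numbers
    ethical_considerations="Human face data; misuse risks in identification settings",
    caveats_and_recommendations="Re-evaluate on in-domain data before deployment",
)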
@inproceedings{mitchell2019,
title = {Model {Cards} for {Model} {Reporting}},
url = {http://arxiv.org/abs/1810.03993},
doi = {10.1145/3287560.3287596},
	abstract = {Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related AI technology, increasing transparency into how well AI technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.},
language = {en},
urldate = {2022-01-24},
	booktitle = {Proceedings of the Conference on Fairness, Accountability, and Transparency},
author = {Mitchell, Margaret and Wu, Simone and Zaldivar, Andrew and Barnes, Parker and Vasserman, Lucy and Hutchinson, Ben and Spitzer, Elena and Raji, Inioluwa Deborah and Gebru, Timnit},
month = jan,
year = {2019},
	keywords = {Computer Science - Artificial Intelligence, Computer Science - Machine Learning},
pages = {220--229},
}
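The abstract's central reporting requirement is disaggregated evaluation: the same metric computed per group and per intersectional group (e.g., sex crossed with Fitzpatrick skin type) rather than one aggregate number. Below is a minimal sketch of that idea in Python/pandas; the helper name, column names, and the handful of rows are all made up for illustration, none of it comes from the paper.

import pandas as pd

def per_group_false_positive_rate(df, group_cols):
    """False positive rate within each (possibly intersectional) group.

    df needs binary columns 'label' (ground truth) and 'pred' (model
    output); group_cols picks the factor(s) to slice by, e.g. ['sex']
    or ['sex', 'skin_type'] for an intersectional breakdown.
    """
    def fpr(g):
        negatives = g[g["label"] == 0]
        # Undefined when a slice has no true negatives.
        return float("nan") if negatives.empty else (negatives["pred"] == 1).mean()
    return df.groupby(group_cols).apply(fpr)

# Illustrative usage with made-up rows (not data from the paper):
df = pd.DataFrame({
    "sex":       ["f", "f", "m", "m", "f", "m"],
    "skin_type": ["I", "VI", "I", "VI", "VI", "I"],
    "label":     [0, 0, 0, 0, 1, 0],
    "pred":      [0, 1, 0, 0, 1, 1],
})
print(per_group_false_positive_rate(df, ["sex"]))               # per-group slices
print(per_group_false_positive_rate(df, ["sex", "skin_type"]))  # intersectional slices

A model card's quantitative analyses section would report such per-slice numbers side by side with the aggregate, so that a reader can see where performance diverges across groups.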