A call for more explainable AI in law enforcement. Matulionyte, R. & Hanif, A. November 2021. 1 citation (Crossref) [2023-07-26]
The use of AI in law enforcement raises several significant ethical and legal concerns. One of them is the AI explainability principle, which is mentioned in numerous national and international AI ethics guidelines. This paper first analyses what the AI explainability principle could mean in relation to AI use in law enforcement, namely, to whom, why and how the explanation about the functioning of AI and its outcomes needs to be provided. Secondly, it explores some legal obstacles to ensuring the desired explainability of AI technologies, namely, the trade secret protection that often applies to AI modules and prevents access to proprietary elements of the algorithm. Finally, the paper outlines and discusses three ways to mitigate this conflict between the AI explainability principle and trade secret protection. It encourages law enforcement authorities to be more proactive in ensuring that Face Recognition Technology (FRT) outputs are explainable to different stakeholder groups, especially those directly affected.
@misc{matulionyte_call_2021,
address = {Rochester, NY},
type = {{SSRN} {Scholarly} {Paper}},
title = {A call for more explainable {AI} in law enforcement},
url = {https://papers.ssrn.com/abstract=3974243},
doi = {10.2139/ssrn.3974243},
	abstract = {The use of AI in law enforcement raises several significant ethical and legal concerns. One of them is the AI explainability principle, which is mentioned in numerous national and international AI ethics guidelines. This paper first analyses what the AI explainability principle could mean in relation to AI use in law enforcement, namely, to whom, why and how the explanation about the functioning of AI and its outcomes needs to be provided. Secondly, it explores some legal obstacles to ensuring the desired explainability of AI technologies, namely, the trade secret protection that often applies to AI modules and prevents access to proprietary elements of the algorithm. Finally, the paper outlines and discusses three ways to mitigate this conflict between the AI explainability principle and trade secret protection. It encourages law enforcement authorities to be more proactive in ensuring that Face Recognition Technology (FRT) outputs are explainable to different stakeholder groups, especially those directly affected.},
language = {en},
urldate = {2023-07-24},
author = {Matulionyte, Rita and Hanif, Ambreen},
month = nov,
year = {2021},
	note = {1 citation (Crossref) [2023-07-26]},
keywords = {artificial intelligence, explainability, face recognition technology, law enforcement, machine learning, notion, transparency},
}