A call for more explainable AI in law enforcement. Matulionyte, R. & Hanif, A. November, 2021. 1 citation (Crossref) [2023-07-26]
The use of AI in law enforcement raises several significant ethical and legal concerns. One of them is the AI explainability principle, which is mentioned in numerous national and international AI ethical guidelines. This paper firstly analyses what the AI explainability principle could mean in relation to AI use in law enforcement, namely, to whom, why and how the explanation about the functioning of AI and its outcomes needs to be provided. Secondly, it explores some legal obstacles to ensuring the desired explainability of AI technologies, namely, the trade secret protection that often applies to AI modules and prevents access to proprietary elements of the algorithm. Finally, the paper outlines and discusses three ways to mitigate this conflict between the AI explainability principle and trade secret protection. It encourages law enforcement authorities to be more proactive in ensuring that Face Recognition Technology (FRT) outputs are explainable to different stakeholder groups, especially those directly affected.
@misc{matulionyte_call_2021,
	address = {Rochester, NY},
	type = {{SSRN} {Scholarly} {Paper}},
	title = {A call for more explainable {AI} in law enforcement},
	url = {https://papers.ssrn.com/abstract=3974243},
	doi = {10.2139/ssrn.3974243},
	abstract = {The use of AI in law enforcement raises several significant ethical and legal concerns. One of them is the AI explainability principle, which is mentioned in numerous national and international AI ethical guidelines. This paper firstly analyses what the AI explainability principle could mean in relation to AI use in law enforcement, namely, to whom, why and how the explanation about the functioning of AI and its outcomes needs to be provided. Secondly, it explores some legal obstacles to ensuring the desired explainability of AI technologies, namely, the trade secret protection that often applies to AI modules and prevents access to proprietary elements of the algorithm. Finally, the paper outlines and discusses three ways to mitigate this conflict between the AI explainability principle and trade secret protection. It encourages law enforcement authorities to be more proactive in ensuring that Face Recognition Technology (FRT) outputs are explainable to different stakeholder groups, especially those directly affected.},
	language = {en},
	urldate = {2023-07-24},
	author = {Matulionyte, Rita and Hanif, Ambreen},
	month = nov,
	year = {2021},
	note = {1 citation (Crossref) [2023-07-26]},
	keywords = {artificial intelligence, explainability, face recognition technology, law enforcement, machine learning, notion, transparency},
}