Ignorance and the regulation of artificial intelligence. White, J.M. & Lidskog, R. Journal of Risk Research, 2021.
Much has been written about the risks posed by artificial intelligence (AI). This article is interested not only in what is known about these risks, but what remains unknown and how that unknowing is and should be approached. By reviewing and expanding on the scientific literature, it explores how social knowledge contributes to the understanding of AI and its regulatory challenges. The analysis is conducted in three steps. First, the article investigates risks associated with AI and shows how social scientists have challenged technically-oriented approaches that treat the social instrumentally. It then identifies the invisible and visible characteristics of AI, and argues that the risks attached to the technology are hard to comprehend not only for outsiders but also for developers and researchers. Finally, it asserts the need to better recognise ignorance of AI, and explores what this means for how its risks are handled. The article concludes by stressing that proper regulation demands not only independent social knowledge about the pervasiveness, economic embeddedness and fragmented regulation of AI, but a social non-knowledge that is attuned to its complexity, and inhuman and incomprehensible behaviour. In properly allowing for ignorance of its social implications, the regulation of AI can proceed in a more modest, situated, plural and ultimately robust manner.
@article{white_ignorance_2021,
	title = {Ignorance and the regulation of artificial intelligence},
	issn = {1366-9877},
	doi = {10.1080/13669877.2021.1957985},
	abstract = {Much has been written about the risks posed by artificial intelligence (AI). This article is interested not only in what is known about these risks, but what remains unknown and how that unknowing is and should be approached. By reviewing and expanding on the scientific literature, it explores how social knowledge contributes to the understanding of AI and its regulatory challenges. The analysis is conducted in three steps. First, the article investigates risks associated with AI and shows how social scientists have challenged technically-oriented approaches that treat the social instrumentally. It then identifies the invisible and visible characteristics of AI, and argues that the risks attached to the technology are hard to comprehend not only for outsiders but also for developers and researchers. Finally, it asserts the need to better recognise ignorance of AI, and explores what this means for how its risks are handled. The article concludes by stressing that proper regulation demands not only independent social knowledge about the pervasiveness, economic embeddedness and fragmented regulation of AI, but a social non-knowledge that is attuned to its complexity, and inhuman and incomprehensible behaviour. In properly allowing for ignorance of its social implications, the regulation of AI can proceed in a more modest, situated, plural and ultimately robust manner.},
	language = {English},
	journal = {Journal of Risk Research},
	author = {White, J.M. and Lidskog, R.},
	year = {2021},
	keywords = {artificial intelligence, ignorance, non-knowledge, risk regulation},
}