More Accountability for Big-Data Algorithms. Nature 537(7621):449.
@article{natureMoreAccountabilityBigdata2016,
  title = {More Accountability for Big-Data Algorithms},
  author = {{Nature}},
  date = {2016-09},
  journaltitle = {Nature},
  volume = {537},
  pages = {449},
  issn = {0028-0836},
  doi = {10.1038/537449a},
  url = {https://doi.org/10.1038/537449a},
  abstract = {To avoid bias and improve transparency, algorithm designers must make data sources and profiles public.

[Excerpt] [...] Algorithms, from the simplest to the most complex, follow sets of instructions or learn to accomplish a goal. In principle, they could help to make impartial analyses and decisions by reducing human biases and prejudices. But there is growing concern that they risk doing the opposite, and will replicate and exacerbate human failings [...]. And in an era of powerful computers, machine learning and big data, these equations have taken on a life of their own. 

[Bias in, bias out]

[...] There are many sources of bias in algorithms. One is the hard-coding of rules and use of data sets that already reflect common societal spin. Put bias in and get bias out. Spurious or dubious correlations are another pitfall. [...]

[...] a strong movement for greater 'algorithmic accountability' is now under way in academia and, to their credit, parts of the tech industry such as Google and Microsoft. This has been spurred largely by the increasing pace and adoption of machine learning and other artificial-intelligence (AI) techniques. A sensible step in the direction of greater transparency would be for the designers of algorithms to make public the source of the data sets they use to train and feed them. Disclosure of the design of the algorithms themselves would open these up to scrutiny, but is almost certain to collide with companies' desire to protect their secrets (and prevent gaming). Researchers hope to find ways to audit for bias without revealing the algorithms. [...] As with the use of science metrics in research assessment, a simplistic over-reliance on algorithms is heavily flawed. It's clear that the (vastly more complex) algorithms that help to drive the rest of the world are here to stay. Indeed, ubiquitous and even more sophisticated AI algorithms are already in view. Society needs to discuss in earnest how to rid software and machines of human bugs.},
  keywords = {*imported-from-citeulike-INRMM,~INRMM-MiD:c-14143609,algorithmic-accountability,big-data,communicating-uncertainty,data-uncertainty,modelling-uncertainty,open-data,open-science,science-ethics,science-policy-interface,science-society-interface,scientific-communication,uncertainty},
  number = {7621}
}