Hawking, S., Russell, S., Tegmark, M., & Wilczek, F. (2014, May 1). Stephen Hawking: 'Transcendence Looks at the Implications of Artificial Intelligence - but Are We Taking AI Seriously Enough?'. The Independent.
Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks, says a group of leading scientists.

[Excerpt] Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. [...]

Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. [...] One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. [...] Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.
@article{hawkingStephenHawkingTranscendence2014,
  title = {Stephen {{Hawking}}: '{{Transcendence}} Looks at the Implications of Artificial Intelligence - but Are We Taking {{AI}} Seriously Enough?'},
  author = {Hawking, Stephen and Russell, Stuart and Tegmark, Max and Wilczek, Frank},
  date = {2014-05-01},
  journaltitle = {The Independent},
  issn = {0951-9467},
  url = {http://ind.pn/1i3aWlU},
  abstract = {Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks, says a group of leading scientists.

[Excerpt] Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. [...]

Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. [...] One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. [...] Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.},
  keywords = {anthropocene,anthropogenic-changes,artificial-intelligence,science-ethics,tipping-point}
}
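
Note that the entry uses biblatex-style fields (journaltitle, date), so it is best processed with biblatex and biber rather than legacy BibTeX styles. A minimal usage sketch, assuming the entry above is saved in a hypothetical references.bib:

% Minimal biblatex sketch; references.bib is a hypothetical file
% containing the entry above. Compile with pdflatex + biber.
\documentclass{article}
\usepackage[style=authoryear]{biblatex}
\addbibresource{references.bib}
\begin{document}
The risks of advanced AI were flagged early on
\autocite{hawkingStephenHawkingTranscendence2014}.
\printbibliography
\end{document}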
