Why Deep-Learning AIs Are So Easy to Fool. Heaven, D. Nature, 574:163–166, 2019.
Artificial-intelligence researchers are trying to fix the flaws of neural networks.

[Excerpt] [...] Deep-learning systems are increasingly moving out of the lab into the real world, from piloting self-driving cars to mapping crime and diagnosing disease. But pixels maliciously added to medical scans could fool a DNN into wrongly detecting cancer, one study reported this year. Another suggested that a hacker could use these weaknesses to hijack an online AI-based system so that it runs the invader’s own algorithms.

[...] But AI researchers knew that DNNs do not actually understand the world. Loosely modelled on the architecture of the brain, they are software structures made up of large numbers of digital neurons arranged in many layers.

[...] Many adversarial attacks work by making tiny tweaks to the component parts of an input — such as subtly altering the colour of pixels in an image — until this tips a DNN over into a misclassification. Kohli’s team has suggested that a robust DNN should not change its output as a result of small changes in its input, and that this property might be mathematically incorporated into the network, constraining how it learns. For the moment, however, no one has a fix on the overall problem of brittle AIs.

[...] One attempt to address this is to combine DNNs with symbolic AI, which was the dominant paradigm in AI before machine learning. With symbolic AI, machines reasoned using hard-coded rules about how the world worked, such as that it contains discrete objects and that they are related to one another in various ways.

[...] Researchers in the field say they are making progress in fixing deep learning’s flaws, but acknowledge that they’re still groping for new techniques to make the process less brittle. [...]
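[Annotation] The article does not name a specific attack algorithm, but the "tiny tweaks to the component parts of an input" it describes are well illustrated by the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015). A minimal sketch, assuming a differentiable PyTorch classifier model with an input tensor image and ground-truth label (all hypothetical names, not from the article):

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    # Move each pixel by at most epsilon in the direction that increases
    # the classification loss; a change this small is often imperceptible
    # to a human yet can flip the model's prediction.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # stay in valid pixel range

The sign of the gradient, rather than the gradient itself, is used so that every pixel moves by exactly epsilon, keeping the perturbation uniformly small.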
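[Annotation] The invariance property attributed to Kohli's team (that small input changes should not change the output) is commonly formalized as local robustness within an epsilon-ball; the notation below is a standard formulation, not taken from the article:

\forall \delta : \|\delta\|_\infty \le \varepsilon \;\implies\; \arg\max_i f_i(x + \delta) = \arg\max_i f_i(x)

where $f$ is the network's class-score function, $x$ the input, and $\varepsilon$ the perturbation budget. Training methods that enforce a bound of this kind constrain how the network learns, as the excerpt notes.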
@article{heavenWhyDeeplearningAIs2019,
  title = {Why Deep-Learning {{AIs}} Are so Easy to Fool},
  author = {Heaven, Douglas},
  date = {2019-10-09},
  journaltitle = {Nature},
  volume = {574},
  pages = {163--166},
  doi = {10.1038/d41586-019-03013-5},
  url = {https://doi.org/10.1038/d41586-019-03013-5},
  urldate = {2019-10-10},
  keywords = {~INRMM-MiD:z-W2T9RXKF,artificial-intelligence,artificial-neural-networks,brittle-artificial-intelligence,robust-modelling,semantics,uncertainty-propagation,unexpected-effect},
  langid = {english}
}
