Has Artificial Intelligence Become Alchemy? Hutson, M. Science, 360(6388):478, 2018.
@article{hutsonHasArtificialIntelligence2018,
  title = {Has Artificial Intelligence Become Alchemy?},
  author = {Hutson, Matthew},
  date = {2018-05},
  journaltitle = {Science},
  volume = {360},
  pages = {478},
  issn = {1095-9203},
  doi = {10.1126/science.360.6388.478},
  url = {https://doi.org/10.1126/science.360.6388.478},
  abstract = {Ali Rahimi, a researcher in artificial intelligence (AI) at Google in San Francisco, California, has charged that machine learning algorithms, in which computers learn through trial and error, have become a form of "alchemy." Researchers, he says, do not know why some algorithms work and others don't, nor do they have rigorous criteria for choosing one AI architecture over another. Now, in a paper presented on 30 April at the International Conference on Learning Representations in Vancouver, Canada, Rahimi and his collaborators document examples of what they see as the alchemy problem and offer prescriptions for bolstering AI's rigor. The issue is distinct from AI's reproducibility problem, in which researchers can't replicate each other's results because of inconsistent experimental and publication practices. It also differs from the "black box" or "interpretability" problem in machine learning: the difficulty of explaining how a particular AI has come to its conclusions.},
  keywords = {*imported-from-citeulike-INRMM,~INRMM-MiD:c-14580706,algorithm-engineering,artificial-intelligence,competition,computational-science-literacy,engineering,epistemology,machine-learning,mathematical-reasoning,no-free-lunch-theorem,peer-review,programming,publication-bias,publish-or-perish,reproducibility,reproducible-research,rewarding-best-research-practices,science-literacy,theory-vs-actual-implemetation},
  number = {6388}
}