Wikipedia:Peer review/Progress in artificial intelligence/archive1


Progress in artificial intelligence

This article was created in March 2008 and is built around a four-point categorisation of progress (more recently five), which could be considered original research; it was expanded from a section in the AI article (which still shows the original). Requests for citations on the AI page were posted in 2011 and eventually filled in 2012 by reference to a 2011 refereed paper from a management journal. Comparing the wording, it is obvious to me that the paper's categorisation was lifted straight out of Wikipedia. In discussions with the original author (see Category names, Super-human?) he has argued for four other post-2011 citations as a reason for keeping the original names, although two of them are direct quotations from Wikipedia.

What do we do about it? I think it very unlikely there will be any earlier citation for this categorisation but it might soon become part of the lore of AI. Is it ever valid to use a citation which is based on material in Wikipedia? (My original reason for contributing to the article was a disagreement with the way that the term super-human was used, but this is a secondary issue.)

Thanks, Chris55 (talk) 11:46, 7 February 2017 (UTC)

Comments by EthanMacdonald

I do admire the intention to create a page documenting progress in artificial intelligence. However, it is an ambitious goal.

No unified framework for reasoning about progress in AI has emerged yet. The original author has listed one or two references, but one or two references (relatively obscure ones, considering the scope of the topic and the weight they are given in the article) are hardly grounds to declare scientific consensus. I believe it is important that this lack of consensus is conveyed within the article, so that readers walk away with a balanced and neutral understanding of the subject (which they can use to form their own opinions). As such, I think it would be better for the article to critically discuss various approaches to reasoning about AI progress without settling on any one method or framework.

In my opinion, this article isn't really there yet. For example, it doesn't question the assumption that AI should be evaluated against humans. Why should I accept that premise when non-human benchmarks like MNIST, CIFAR, or ImageNet are abundantly common in top machine learning conferences (e.g., NIPS, ICML)? I'm not convinced. Personally, I think the classification scheme should be removed. At most, it should be a subsection describing one particular way of reasoning about AI (proposed by one or two people), out of a wide array of possible options.

Perhaps, rather than categorizing notable achievements under the current headings, it would be better to describe various approaches to measuring or reasoning about AI progress (in all its many varieties), and then illustrate those approaches with notable achievements, methods, and benchmarks instead:

  • Turing Tests & Comparisons to Human Intelligence
  • Formal Methods
  • Validation Error
  • Reconstruction Error
  • Game Theoretic Approaches
  • Regret
  • Sample Complexity
  • Computational Efficiency
  • Human Interpretability
  • Data/Task Benchmarks
  • Interdisciplinary Studies
  • Industry Adoption & Applications
  • AI Ethics, Law, & Public Policy

Another approach might be to split the article up into categories based on AI tasks and/or various applications of AI in society. In that case, something like:

Tasks:

  • Computer Vision
  • Natural Language Processing
  • Games
  • Robotics
  • Etc.

Applications:

  • Recommendation Engines
  • Medicine
  • Personal Assistants
  • Law
  • Military & Policing
  • Biology
  • Physics
  • Engineering
  • Transportation
  • Etc.

EthanMacdonald (talk) 09:53, 12 February 2017 (UTC)