From the June 2016 issue of HealthCare Business News magazine
By Eliot Siegel
The crushing defeat of planet Earth’s two best human Jeopardy! players by IBM Watson’s DeepQA system brought the term “artificial intelligence (AI)” back into the common vernacular in 2011. Recently there has been a renaissance of academic and commercial activity in this area, and a spike in startups, as well as larger, established companies, claiming major breakthroughs in the technology, including such “trash-talking” claims as, “the days of the protoplasm known as the radiologist are numbered.”
Merriam-Webster defines AI as a “branch of computer science dealing with the simulation of intelligent behavior in computers.” One of the major problems with this definition is the difficulty of describing intelligence itself. IQ tests primarily evaluate parameters such as speed, memory, arithmetic, and language understanding, all tasks that can already be accomplished by a computer program.
In fact, “AI” programs have demonstrated that they can achieve IQ scores comparable to the “graduate student” level. There are also compelling examples of computer art, with software such as “Aaron” by Harold Cohen, and computers composing music, such as Melomics 109. But can AI programs exhibit true creativity, common sense and reasoning, whether they are “conscious” or not? It is also important to make the distinction between so-called “strong” AI and “weak,” also known as “narrow,” AI.
Weak AI is defined by Wikipedia as “non-sentient computer intelligence or AI that is focused on one narrow task.” IBM’s Deep Blue chess program that beat Garry Kasparov in 1997 was a high-profile example of weak AI. Apple’s “Siri” is another, but more sophisticated, example of weak AI. Strong AI has been defined as a system with “consciousness, sentience, and mind” or “artificial general intelligence.” This is depicted in science fiction by movies such as “Ex Machina,” “A.I. Artificial Intelligence,” “Her,” and, of course, “2001: A Space Odyssey.”
Technology luminaries such as Elon Musk (artificial intelligence is our “greatest existential threat”), Bill Gates (“I am in the camp that is concerned”), Stephen Hawking and others have signed a letter warning that AI could potentially be more dangerous than nuclear weapons, yet ironically, some of these same technology experts are investing heavily in the development of advanced AI. By far the most exciting real-world developments have, in my opinion, been examples of what I would refer to as emerging “moderate AI.”
One example of this is the program, created by the Google DeepMind team and described in the journal Nature, which learned to play Atari 2600 games using a “deep Q-network” trained with “end-to-end reinforcement learning,” essentially by watching its score as a human player would. It was able to “teach itself” not only how to play, but to master each of 49 games at a level that was comparable to or better than a human professional games tester. This ability to create software that is capable of “learning to excel at a diverse array of challenging tasks” has fascinating and important implications.
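For readers curious what “learning from the score” looks like in practice, the sketch below shows the Q-learning update that underlies DeepMind’s approach, in its simplest tabular form on a made-up five-state “walk to the goal” task (the task, state count, and learning parameters here are illustrative assumptions, not DeepMind’s; the actual DQN replaces the lookup table with a deep neural network fed raw screen pixels).

```python
import random

# Illustrative tabular Q-learning on a tiny, hypothetical 1-D task:
# the agent starts at state 0 and earns a reward of 1 for reaching
# state 4. The Nature DQN uses the same core update, but with a deep
# network instead of this table, and Atari frames instead of states.

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # assumed learning parameters

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: clamp movement to the track; reward 1 at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore at random (how the agent "teaches itself")
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward the observed
        # reward plus the discounted value of the best next action
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy should walk right toward the goal
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Note that no one tells the agent how to behave: it only ever sees the reward signal, just as the Atari agent only saw the game score, and a sensible policy emerges from the updates alone.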