Has ‘moderate’ artificial intelligence in diagnostic imaging arrived?

June 16, 2016
By Eliot Siegel

The crushing defeat of planet Earth’s two best human Jeopardy! players by IBM Watson’s DeepQA system brought the term “artificial intelligence (AI)” back into the common vernacular in 2011. Recently there has been a renaissance of academic and commercial activity in this area, and a spike in startups as well as larger, established companies claiming major technological breakthroughs, including such “trash-talking” claims as, “the days of the protoplasm known as radiologist are numbered.”

Merriam-Webster defines AI as a “branch of computer science dealing with the simulation of intelligent behavior in computers.” One of the major problems with this definition is the difficulty of defining intelligence itself. IQ tests primarily evaluate parameters such as speed, memory, arithmetic, and language understanding, tasks that can already be accomplished by a computer program.

In fact, “AI” programs have demonstrated that they can achieve IQ scores comparable to the “graduate student” level. There are also compelling examples of computer art, with software such as “Aaron” by Harold Cohen, and of computers composing music, such as Melomics 109. But can AI programs exhibit true creativity, common sense, and reasoning, whether or not they are “conscious”? It is also important to distinguish between so-called “strong” AI and “weak,” also known as “narrow,” AI.

Weak AI is defined by Wikipedia as “non-sentient computer intelligence or AI that is focused on one narrow task.” IBM’s Deep Blue chess program, which beat Garry Kasparov in 1997, was a high-profile example of weak AI. Apple’s “Siri” is another, more sophisticated, example. Strong AI has been defined as a system with “consciousness, sentience, and mind,” or “artificial general intelligence.” This is depicted in science fiction by movies such as “Ex Machina,” “A.I. Artificial Intelligence,” “Her,” and, of course, “2001: A Space Odyssey.”

Technology luminaries such as Elon Musk (artificial intelligence is our “greatest existential threat”), Bill Gates (“I am in the camp that is concerned”), Stephen Hawking, and others have signed a letter stating that AI could be more dangerous than nuclear weapons, yet ironically, some of these same experts are investing heavily in the development of advanced AI. By far the most exciting real-world developments have, in my opinion, been examples of what I would call emerging “moderate AI.”

One example of this is the program, created by the Google DeepMind team and described in the journal Nature, that learned to play Atari 2600 games using a “deep Q-network” trained with “end-to-end reinforcement learning,” essentially by watching its score as a human player would. It was able to “teach itself” not only how to play, but how to master each of 49 games at a level comparable to or better than a human professional games tester. This ability to create software that is capable of “learning to excel at a diverse array of challenging tasks” has fascinating and important implications.
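For readers curious about what “Q-learning” means at its core, the sketch below is a minimal, hypothetical illustration in Python: an agent in a toy one-dimensional “game” learns which action to take purely by watching its score, with no instructions about the rules. This is only the tabular ancestor of DeepMind’s approach; the actual deep Q-network replaced the lookup table with a convolutional neural network reading raw screen pixels, plus refinements such as experience replay not shown here. The environment and all names are invented for illustration.

```python
import random
from collections import defaultdict

# Toy, hypothetical "game": walk a 1-D chain; only reaching the right end scores.
N_STATES = 10           # positions 0..9; position 9 is the goal
ACTIONS = [-1, +1]      # step left or step right

def step(state, action):
    """Advance one time step; return (next_state, reward, done)."""
    next_state = max(0, min(N_STATES - 1, state + action))
    if next_state == N_STATES - 1:
        return next_state, 1.0, True    # the "score" appears only at the goal
    return next_state, 0.0, False

Q = defaultdict(float)                  # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Core Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best action available afterward.
        best_next = 0.0 if done else max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy should step right from every position.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The entire learning signal here is the score; what made the Nature result striking was scaling that same idea up to raw video input across dozens of games.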

Several startups and larger companies now claim an achievement analogous to the deep Q-network: the ability to use large numbers of imaging studies to “learn” how to interpret them for diagnosis. I am very skeptical of these claims and am not aware of any company that comes close to this capability. In fact, no one has responded to my challenge that I will go anywhere to wash the car of anyone who can defeat a fifth grader at simply finding the adrenal gland on CT. The task of image interpretation is far more complex than recognizing a score on a TV screen with a joystick controller and a red button. What I have seen these companies do instead is reinvent or rediscover “weak AI” approaches, statistical and “machine learning” algorithms that have been well described and developed in the scientific literature for decades.
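To make concrete what those long-established approaches look like, here is a minimal, hypothetical sketch of the classic computer-aided-detection recipe: hand-engineered features for each candidate finding fed into a generic statistical classifier. The feature names, data, and labeling rule are synthetic and invented for illustration; this sketches the category of technique, not any particular company’s method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical hand-engineered features for candidate lesions, e.g.
# [mean_intensity, volume_mm3, sphericity]; labels mark true findings.
rng = np.random.RandomState(0)
X = rng.normal(size=(500, 3))                     # 500 candidates, 3 features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)   # synthetic "ground truth" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A generic statistical classifier: the workhorse of classical "weak AI" CAD.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

The hard part, finding and describing the candidates in the first place, is exactly what such pipelines leave to human-designed feature extraction, which is why this is weak AI rather than the end-to-end learning described above.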

AI does not need to be limited to making findings on a diagnostic imaging study. There are many areas in which our interoperability, graphical user interfaces, and decision support systems can benefit tremendously from weak AI, to say nothing of “moderate” AI. This will be our area of greatest opportunity and potential in the next several years, and I believe it will change the practice of radiology many years before we start talking about revolutionary, rather than evolutionary, advances in diagnostic image interpretation. I have been asked to present on these topics this year at two industry events:

• SIIM Closing Session: “Peering into the Future through the Looking Glass of Artificial Intelligence,” Portland, Oregon, July 2016.
• RSNA 2016 Controversy Session: “Elementary, My Dear Watson: Will Machines Replace Radiologists?” and “The Promise of Machine Learning (and pattern recognition) in Radiology.”

About the author: Eliot Siegel is a professor at the University of Maryland School of Medicine, Department of Diagnostic Radiology and Nuclear Medicine. He also works for the VA Maryland Healthcare System in Baltimore.