For decades, scientists have been awed by the power of computers, and citizens have feared it. In 1965, Herbert Simon, winner of the Nobel Prize in economics and also of the Turing Award (considered "the Nobel Prize of computing"), predicted that "machines will be able, within 20 years, to do any job a man can do." His misplaced faith in computers is not unique. Fifty-seven years later, we are still waiting for computers to become our slaves and masters.
Companies have spent hundreds of billions of dollars on AI moonshots that crashed and burned. "Dr. Watson" was supposed to revolutionize health care and "eradicate cancer." Eight years and $15 billion later, with no demonstrable success, IBM fired Dr. Watson.
In 2016, Turing Award winner Geoffrey Hinton said, "We should stop training radiologists now. It is quite obvious that within five years, deep learning will outperform radiologists." Six years later, the number of radiologists has increased, not decreased. Researchers have spent billions of dollars developing thousands of X-ray image recognition algorithms that are still not as good as human radiologists.
What about those self-driving vehicles, promised by many, including Elon Musk in his 2016 boast: "I really consider autonomous driving a solved problem. I think we are probably less than two years away." Six years later, arguably the most advanced autonomous vehicles are Waymos in San Francisco, which operate only between 10 p.m. and 6 a.m. on the roads less traveled, and still have accidents and cause traffic jams. They fall far short of operating successfully in downtown midday traffic at the required skill level of 99.9999%.
The list goes on. Zillow's house-flipping venture lost billions of dollars trying to revolutionize home buying before shutting down. Carvana's audacious used-car gamble is still losing billions.
We’ve argued for years that we should develop AI that makes people more productive instead of trying to replace them. Computers have prodigious memories, perform lightning-fast, error-free calculations, and are tireless, but humans have the real-world experience, common sense, wisdom, and critical thinking skills that computers lack. Together they can do more than either could do alone.
Effective augmentation finally seems to be happening with medical images. A large-scale study just published in The Lancet Digital Health is the first to directly compare AI cancer screening used on its own versus used to assist humans. The software comes from a German startup, Vara, whose AI is already used in more than 25% of breast cancer screening centers in Germany.
Researchers from Vara, the University of Essen and Memorial Sloan Kettering Cancer Center trained the algorithm on more than 367,000 mammograms and then tested it on 82,851 mammograms that had been withheld for this purpose.
In the first strategy, the algorithm analyzed the 82,851 mammograms on its own. In the second strategy, the algorithm sorted the mammograms into three groups: clearly cancer, clearly no cancer, and uncertain. The uncertain mammograms were then sent to board-certified radiologists, who received no information about the AI diagnosis.
Doctors and AI working together proved better than either working alone. AI pre-screening reduced the number of images doctors had to review by 37%, while cutting false positive and false negative rates by approximately one-third compared to AI alone, and by 14% to 20% compared to doctors alone. Less work and better results!
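The triage strategy described above amounts to a simple two-threshold rule: confident predictions are decided by the algorithm, and everything in between goes to a human. A minimal sketch, assuming hypothetical score thresholds (Vara's actual model and cutoffs are not described in this article):

```python
# Illustrative sketch of the two-threshold triage rule described above.
# The 0.1 / 0.9 thresholds are hypothetical, chosen only for illustration;
# they are not the values used in the Vara study.

def triage(cancer_score: float, low: float = 0.1, high: float = 0.9) -> str:
    """Route a mammogram based on the AI's cancer score in [0, 1].

    Clearly negative and clearly positive cases are decided by the AI;
    uncertain cases are referred to a radiologist, who sees no AI output.
    """
    if cancer_score <= low:
        return "clearly no cancer"
    if cancer_score >= high:
        return "clearly cancer"
    return "refer to radiologist"

# Example: three mammograms with different AI confidence scores
scores = [0.03, 0.50, 0.97]
print([triage(s) for s in scores])
# → ['clearly no cancer', 'refer to radiologist', 'clearly cancer']
```

The design choice is what makes augmentation work: the algorithm handles the easy cases it is confident about, shrinking the radiologists' workload, while humans concentrate on exactly the images where the AI is unsure.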
As machine learning improves, AI X-ray analysis will undoubtedly become more efficient and accurate. There will come a time when AI can be trusted to work on its own. However, that time is likely to be decades in the future and attempts to jump straight to that point are dangerous.
We are convinced that the productivity of many workers can be improved by similar augmentation strategies — not to mention that many of the tasks at which computers excel are a terrible chore for humans; for example, legal research, inventory control, and statistical calculations. But far too many attempts to replace humans entirely have not only been a huge waste of resources, they have also undermined the credibility of AI research. The last thing we need is another AI winter, where funding dries up, resources are diverted, and the enormous potential of these technologies is put on hold. We are optimistic that the cumulative failures of moonshots and successes of augmentation strategies will change the way we think about AI.
Funk is a freelance technology consultant who previously taught at the National University of Singapore, Hitotsubashi and Kobe universities in Japan, and Penn State, where he taught courses on the economics of new technologies. Smith is the author of “The AI Delusion” and co-author (with Jay Cordes) of “The 9 Pitfalls of Data Science” and “The Phantom Pattern Problem.”