Artificial Intelligence: Great Potential or Hitting Its Limits?
Interest in high technology in today’s world is as cyclical as the stock market or the changing seasons. High expectations, fueled by advertising and scientific enthusiasm, are often followed by a wave of skepticism. According to some experts, artificial intelligence (AI) technology is currently experiencing a downturn. Skeptics claim that the field’s potential has been exaggerated and that the grand promises, which once seemed within reach, have not been fulfilled. “After several years of hype, many people think the AI project has failed,” says The Economist columnist Tim Cross. So, what problems with AI are worrying experts? Is the development of artificial intelligence truly limited by fundamental barriers, or is this just a pause before the next technological breakthrough?
It would be as if the world had created a second China, made up not of billions of people and millions of factories but of algorithms and humming computers. Consulting firm PwC predicts that by 2030, artificial intelligence will add $16 trillion to the global economy. For comparison, the entire output of the world’s second-largest economy, from banking and biotech to retail and construction, amounted to only $13 trillion in 2018.
PwC’s estimate is not unique. McKinsey analysts offer a similar figure of $13 trillion. Others prefer to assess the impact qualitatively rather than quantitatively. Sundar Pichai, CEO of Google, has described AI as “more profound than fire or electricity.” Other forecasts predict equally significant changes, but with less optimism: smart computers capable of doing the work of radiologists, truck drivers, or warehouse workers could trigger a wave of unemployment.
However, doubts have recently arisen about whether today’s AI technology is truly transforming the world as much as it seems. AI faces certain limitations and cannot deliver on some of its most ambitious promises.
There’s no doubt that AI—more specifically, machine learning, one of its subfields—has made significant progress. Computers have surpassed humans at some tasks on which the two once competed. The academic buzz began in the early 2010s, when new machine learning methods led to rapid improvements in tasks like image recognition and language processing. From there, these advances spread to business, especially among internet giants. With vast computing resources and huge amounts of data, these companies were well-positioned to adopt the technology. Modern AI methods are now used in search engines and voice assistants, suggest email replies, power facial recognition systems for unlocking smartphones and monitoring national borders, and support algorithms that identify unwanted posts on social media.
Perhaps the most striking demonstration of this technology’s potential came in 2016, when a system developed by DeepMind—a London-based AI company owned by Alphabet, Google’s parent company—defeated one of the world’s best players in the ancient Asian board game Go. Tens of millions watched the match. The breakthrough happened years, even decades, earlier than AI experts had expected.
As Pichai’s comparison to electricity and fire suggests, machine learning is a general-purpose technology capable of impacting the entire economy. The technology excels at recognizing patterns in data, which is useful everywhere. Ornithologists use it to classify bird songs; astronomers hunt for planets in the flicker of starlight; banks assess credit risks and prevent fraud. In the Netherlands, authorities use AI to monitor social benefits. In China, AI-powered facial recognition systems let shoppers buy groceries and help manage the repressive mass surveillance system built in Xinjiang, a region with a predominantly Muslim population.
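To make “recognizing patterns in data” concrete, here is a minimal, purely illustrative Python sketch: a classifier that learns to flag suspicious bank transactions from a handful of made-up labelled examples. The features, numbers, and library choice (scikit-learn) are assumptions for the sake of illustration, not a description of any real fraud-detection system.

    # A toy example, not any bank's actual system: learn to flag suspicious
    # transactions from labelled examples of normal and fraudulent activity.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per past transaction:
    # [amount in dollars, hour of day, foreign (1) or domestic (0)]
    X = np.array([
        [12.50,   9, 0],
        [8.99,   14, 0],
        [2400.0,  3, 1],
        [15.75,  11, 0],
        [1900.0,  2, 1],
        [42.00,  18, 0],
    ])
    y = np.array([0, 0, 1, 0, 1, 0])  # 1 = confirmed fraud, 0 = legitimate

    model = LogisticRegression(max_iter=1000).fit(X, y)  # the pattern-finding step

    # Score a new, unseen transaction: the model outputs a fraud probability
    # based solely on the statistical patterns found in the examples above.
    new_transaction = np.array([[2100.0, 4, 1]])
    print(model.predict_proba(new_transaction)[0, 1])

The same recipe, with richer features and vastly more examples, is what sits behind the credit-risk, birdsong, and starlight applications mentioned above.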
AI advocates say that further transformations—good or bad—are still ahead. In 2016, Geoffrey Hinton, a computer scientist who made fundamental contributions to AI, remarked that “it’s obvious we should stop training radiologists,” arguing that computers would soon do the same work, only cheaper and faster. Meanwhile, developers of self-driving cars predict that robotaxis will revolutionize transportation. Eric Schmidt, former chairman of Google, hopes AI will accelerate research and help scientists keep up with the flood of data.
In January 2020, a group of researchers published a paper in the journal Cell describing an AI system that predicted the antibacterial properties of a chemical compound by analyzing its molecular structure. Of 100 candidate molecules selected by the system for further analysis, one turned out to be a powerful new antibiotic. The COVID-19 pandemic sparked intense interest in such medical AI applications. BlueDot, an AI company, claims it detected signs of the new virus in reports from Chinese hospitals as early as December 2019. Researchers scrambled to apply AI to everything from drug discovery to interpreting medical images and predicting how the virus might evolve.
This is not the first wave of AI hype. The field began in the mid-1950s, when scientists hoped it would take only a few years—or at most a couple of decades—to create human-level intelligence. That early optimism faded by the 1970s. A second wave began in the 1980s, but again, the grandest promises went unfulfilled. When reality failed to match the hype, the boom gave way to painful downturns known as “AI winters.” Research funding dried up, and the field’s reputation suffered.
Modern AI technologies have been much more successful. Billions of people use them every day, often without noticing, in apps on their smartphones. Still, despite this success, the fact remains: many of the boldest claims about AI have once again not come true. Confidence is wavering as researchers begin to wonder if the technology has hit a ceiling. Self-driving cars have become more capable but remain just short of being safe enough for everyday streets. Attempts to integrate AI into medical diagnostics are also taking longer than expected: despite Dr. Hinton’s prediction, there is still a global shortage of radiologists.
Reviewing the state of medical AI in 2019, Eric Topol, a cardiologist and tech enthusiast, wrote that “the state of AI hype far exceeds the state of the science, especially when it comes to validation and readiness for patient care.” Despite a flood of new ideas, the fight against COVID-19 has mostly relied on old tools already at hand. Contact tracing was done with electronic bracelets and phone calls. Clinical trials focused on existing drugs. Plastic screens and paint on sidewalks enforced simple distancing rules.
The same consultants who predict AI’s world-changing impact also report that managers in real companies are finding the technology hard to implement and that enthusiasm for it is cooling. Svetlana Sicular of research firm Gartner says 2020 could be the year AI enters the “trough of disillusionment” in the hype cycle. Investors are waking up and looking for the exit before a downturn. A study of European AI startups by venture fund MMC found that 40% of them appeared not to use AI at all. “I think there’s definitely a strong element of ‘investment marketing’ here,” one analyst admits diplomatically.
While modern AI methods are powerful technological tools, they also have limitations, can be problematic, and are difficult to use. Those hoping to harness AI’s potential face two main groups of problems.
The first group involves practical issues. The machine learning revolution was built on three things: improved algorithms, more powerful computers to run them, and, thanks to the gradual digitization of society, more data to learn from. But data isn’t always available. For example, it’s hard to use AI to track COVID-19 transmission without a complete database of everyone’s movements. Even when data does exist, it may contain hidden assumptions that can mislead careless users. The latest AI systems’ appetite for computing power remains expensive to satisfy. And large organizations always take time to integrate new technologies: think of the rollout of electricity in the 20th century or cloud computing in the 21st. None of this negates AI’s potential, but it does slow its adoption.
The second group of problems is deeper and concerns the algorithms themselves. Machine learning uses thousands or millions of examples to train a software model whose structure is loosely inspired by the brain’s neural architecture. The resulting systems can perform some tasks, like image or speech recognition, far more reliably than systems programmed with hand-crafted rules, but they are not “intelligent” in the way most people understand the term. They are powerful pattern-recognition tools, yet they lack many cognitive abilities that biological brains take for granted. They struggle with logical reasoning, with generalizing the rules they discover, and with the broad skill that researchers, for lack of a better term, call “common sense.” The result is an artificial idiot savant that excels at narrowly defined tasks but can make major mistakes when faced with unexpected input.
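To make that training process concrete, here is a minimal, illustrative Python sketch, assuming scikit-learn and its small bundled dataset of handwritten digits; it is not the method behind any particular product. A small neural network adjusts its internal weights over hundreds of labelled examples until it can recognize digits it has never seen.

    # Illustrative only: a tiny neural network learns from labelled examples.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)  # 1,797 labelled 8x8 images of digits
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A loosely brain-inspired model with a single hidden layer of 64 units.
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)  # improve the model example by example

    print("accuracy on unseen digits:", model.score(X_test, y_test))
    # The score is high on this narrow task, but the model has no idea what a
    # digit is: handed an image of anything else, it will still confidently
    # pick one of the ten labels it knows. That is the idiot-savant problem.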
Without a new technological breakthrough, these shortcomings impose fundamental limits on AI’s capabilities. Self-driving cars, which must navigate an ever-changing world, are already delayed and may never fully arrive. Language-based systems like chatbots and personal assistants are built on statistical approaches that yield only a superficial understanding, disconnected from reality. This limits their usefulness. Existential fears that smart computers will make radiologists or truck drivers obsolete, or, as some alarmists suggest, threaten humanity’s survival, seem exaggerated. Predictions that AI will add the equivalent of another China’s GDP to the world economy look implausible.
Today’s “AI summer” is different from previous ones. It’s brighter and warmer because the technology is more widely used. Another full-blown winter is unlikely. But the autumn breeze is picking up.