Some of the most advanced artificial intelligence (AI) systems in the world, at least the ones the public hears about, are famous for beating human players at chess or poker. Other algorithms are known for their ability to learn to recognize cats or their inability to recognize people with darker skin.
But are current AI systems more than toys? Sure, their ability to play games or identify animals is impressive, but does that help create useful AI systems? To answer this, we need to take a step back and ask ourselves about the purposes of AI.
AI tries to predict the future by analyzing the past
The basic idea behind AI is simple: analyze patterns from the past to make accurate predictions about the future.
This idea underpins every algorithm, from Google showing you ads of what it predicts you want to buy, to predicting whether an image of a face is you or your neighbor. AI is also being used to predict whether patients have cancer or not by analyzing medical records and scans.
Pluribus, the poker-playing bot, beat some of the best poker players in the world in 2019 by predicting which moves would give it the best chance of winning each hand.
Making predictions requires incredible amounts of data and the ability to process it quickly. Pluribus, for example, filters data from billions of hands of cards in milliseconds. It assembles patterns to predict the best possible hand to play, always drawing on its historical data to accomplish the task at hand, never understanding what it means to look ahead.
Pluribus, AlphaGo, Amazon Rekognition – there are plenty of algorithms that are incredibly good at their job, some so good they can beat human experts.
All of these examples show how powerful AI can be at making predictions. The question is: What task do you want it to be good at?
Human intelligence is general, artificial intelligence is narrow
AI systems can really only do one job. Pluribus, for example, is so task-specific that it can't even play another card game like blackjack, let alone drive a car or plan for world domination.
This is very different from human intelligence. One of our main characteristics is that we can generalize. We become skilled at many things throughout life – learning everything from how to walk to playing cards or writing articles. We may specialize in a few of these skills, or even make a career out of them, but we remain capable of learning and accomplishing other tasks in our lives.
Moreover, we can also transfer skills, using knowledge of one thing to gain skills in another. AI systems basically don't work that way. They learn by endless repetition, or at least until the energy bill becomes too high, improving the accuracy of their predictions through billions of iterations and massive computational power.
If developers want AI to be as versatile as human intelligence, then AI must begin to have more generalizable and transferable intelligence.
General artificial intelligence
But the narrowness of AI is changing. What is about to revolutionize computing is artificial general intelligence (AGI). Just like humans, AGIs will be able to perform many different tasks, each of them at an expert level.
AGIs like this have yet to be developed, but according to Irina Higgins, a researcher at Google subsidiary DeepMind, we're not far off.
“10-15 years ago people thought AGI was a crazy chimera. They thought it was 1,500 years from now, maybe never. But this is happening in our lifetime,” Higgins told DW.
More modest plans involve using AGI to help us solve really big scientific problems, like space exploration or curing cancer.
But the more you read about the potential of AGI, the more the narrative shifts from science to science fiction – think of beings of silicon, plastic and metal calling themselves human, or supercomputers managing citywide bureaucracies.
Transformative AI expands artificial intelligence
While AGI leans more towards science fiction, developments in the field of transformative AI belong firmly in the non-fiction category.
“Even though AI is very, very task-specific, people are expanding the tasks that a computer can perform,” Eng Lim Goh, chief technology officer at Hewlett Packard Enterprise, told DW.
One of the first transformative AI systems already in use is the large language model (LLM).
"LLMs started by automatically correcting misspelled words in texts. Then they were trained to auto-complete sentences. And now, because they've processed so much text data, they can have a conversation with you," he said, referring to chatbots.
The capabilities of LLMs have expanded from there. Systems can now respond not only to text but also to images.
“But keep in mind that these systems are still very narrow when you compare them to someone’s work. LLMs cannot understand the human meaning of texts and images. They cannot creatively use texts and images as humans can,” Goh said.
Some readers' minds might now wander to AI "art" – algorithms like DALL-E 2 that generate images from text prompts.
But is it art? Is this proof that machines can create? It’s open to philosophical debate, but according to many observers, AI does not create art but merely imitates it.
To misquote Ludwig Wittgenstein: "My words make sense, your AI's don't."
Edited by: Carla Bleiker