Artificial selection

The field of artificial intelligence is infected with hype. Here’s how not to be fooled

The star of the show at Tesla’s annual “AI Day” (for “artificial intelligence”) on September 30 was a humanoid robot introduced by Tesla CEO Elon Musk as “Optimus.”

The robot could walk, gingerly, and perform a few rote mechanical tasks such as waving its arms and dangling a watering can over planters. The demo was enthusiastically received by the hundreds of engineers in the audience, many of whom hoped to land a job at Tesla.

“It means a future of abundance,” Musk proclaimed from the stage. “A future where there is no poverty… This is truly a fundamental transformation of civilization as we know it.”

We still don’t have a learning paradigm that allows machines to learn how the world works, as human and many non-human babies do.

Yann LeCun, AI researcher

Robotics experts watching from a distance were less impressed. “Not mind-blowing” was the sober judgment of Christian Hubicki of Florida State University.

Some artificial intelligence experts were even less charitable. “The event was a complete mess,” Ben Shneiderman of the University of Maryland told me. Among other shortcomings, Musk failed to articulate a cohesive use case for the robot — i.e., what would it do?

For Shneiderman and others in the AI field, the Tesla demo embodied some of the worst qualities of AI hype: its reduction of AI to humanoid characters, its exorbitant promises, its promotion by self-interested entrepreneurs, and its suggestion that AI systems or devices can operate autonomously, without human assistance, to achieve results beyond human capabilities.

“When news articles indiscriminately repeat public relations statements, misuse images of robots, attribute agency to AI tools, or downplay their limitations, they mislead and misinform readers about the potential and limitations of AI,” Sayash Kapoor and Arvind Narayanan wrote in a checklist of AI reporting pitfalls that went live the same day as the Tesla demo.

“When we talk about AI,” Kapoor says, “we tend to say things like ‘AI does X’ – ‘artificial intelligence grades your homework,’ for example. We don’t talk about any other technology this way – we don’t say ‘the truck is driving down the road’ or ‘a telescope is looking at a star.’ It’s illuminating to ask why we treat AI as different from other tools, when it is only one tool among others for performing a task.”

This is not how AI is typically depicted in the media or, indeed, in announcements by researchers and companies engaged in the field. There, systems are depicted as having learned to read, grade homework, or diagnose disease at least as well as, if not better than, humans.

Kapoor thinks the reason some researchers may try to hide the human ingenuity behind their AI systems is that it’s easier to attract investors and publicity with claims of AI breakthroughs – in the same way that “dot-com” was a marketing draw around the year 2000, or “crypto” is today.

What is generally omitted in much AI reporting is that the machines’ successes apply only in limited cases, or that the evidence for their achievements is dubious. A few years ago, the education world was rocked by a study claiming to show that machine and human ratings of a selection of student essays were similar.

The claim was challenged by scholars who questioned its methodology and results, but not before headlines appeared in national newspapers such as: “Essay grading software gives professors a break.” One of the study’s main critics, Les Perelman of MIT, later built a system he called the Basic Automatic B.S. Essay Language Generator, or Babel, with which he demonstrated that machine scoring couldn’t tell the difference between gibberish and persuasive writing.

“The Emperor has no clothes,” Perelman told the Chronicle of Higher Education at the time. “OK, maybe in 200 years the Emperor will have clothes… But right now the Emperor doesn’t.”

A more recent claim was that AI systems “could be as good as medical specialists at diagnosing disease,” as a CNN article put it in 2019. The diagnostic system in question, according to the article, employed “algorithms, big data, and computing power to mimic human intelligence.”

Those are buzzwords that gave the false impression that the system “mimicked human intelligence,” Kapoor observed. The article also failed to specify that the AI system’s purported success had been observed only in a very narrow range of diseases.

The AI hype is not only a danger to laypeople’s understanding of the field; it threatens to undermine the field itself. One of the keys to human-computer interaction is trust, but if people come to see that a field has over-promised and under-delivered, the road to public acceptance will only lengthen.

The oversimplification of AI achievements conjures up familiar science fiction scenarios: futurescapes in which machines conquer the world, reducing humans to enslaved drones, or leaving them with nothing to do but laze.

A lingering fear is that AI-powered automation, supposedly cheaper and more efficient than humans, will put millions of people out of work. This concern was prompted in part by a 2013 Oxford University paper estimating that “future computerization” jeopardizes 47% of employment in the United States.

Shneiderman dismissed that prediction in his book “Human Centered AI,” published in January. “Automation is eliminating some jobs, as it has…since at least the days when Gutenberg’s printing presses put scribes out of work,” he writes. “However, automation generally reduces costs and increases quality… Expanded production, wider distribution channels and new products lead to increased employment.”

Technological innovations can make older professions obsolete, according to a 2020 MIT report on the future of work, while also “bringing new professions to life, generating demand for new forms of expertise, and creating rewarding work opportunities.”

A common feature of AI hype is drawing a straight line from an existing achievement to a limitless future in which every problem on the path to further advancement is magically resolved, and thus “human-level AI” is just around the corner.

Yet “we still don’t have a learning paradigm that allows machines to learn how the world works, as human and many non-human babies do,” Yann LeCun, chief AI scientist at Meta Platforms (formerly Facebook) and professor of computer science at NYU, observed recently on Facebook. “The solution is not just around the corner. We have a number of obstacles to remove, and we don’t know how.”

So how can readers and consumers avoid being duped by the AI hype?

Beware of the “sleight-of-hand trick that asks readers to believe that something that takes the form of a human artifact is equivalent to that artifact,” advises Emily Bender, a computational linguistics expert at the University of Washington. This includes claims that AI systems have written non-fiction, composed software, or produced sophisticated legal documents.

The system may have reproduced those forms, but it has no access to the multitude of facts that non-fiction requires, or to the specifications that make software function or a document legally valid.

Among the 18 pitfalls of AI reporting cited by Kapoor and Narayanan are the anthropomorphization of AI tools through images of humanoid robots (including, unfortunately, the illustration accompanying this article) and descriptions that invoke human intellectual qualities such as “learning” or “seeing,” when these are usually simulations of human behavior, far removed from reality.

Readers should beware of expressions such as “AI magic,” or references to “superhuman” qualities, language that “implies that an AI tool does something remarkable,” they write. “It hides how mundane the tasks are.”

Shneiderman advises journalists and editors to take care to “clarify human initiative and control… Instead of suggesting that computers take actions on their own initiative, clarify that humans program computers to take those actions.”

It is also important to be aware of the source of any exaggerated claims for AI. “When an article contains only or mostly quotes from company spokespersons or researchers who have built an AI tool,” Kapoor and Narayanan advise, “it is likely to be overly optimistic about the tool’s potential benefits.”

The best defense is healthy skepticism. Artificial intelligence has advanced over the past few decades, but it is still in its infancy, and claims for its applications in the modern world, let alone in the future, are inevitably incomplete.

In other words, no one knows where AI is heading. It’s theoretically possible that, as Musk asserted, humanoid robots could eventually bring about “a fundamental transformation of civilization as we know it.” But no one really knows when or if this utopia will arrive. Until then, the road will be pockmarked with hype.

As Bender advised readers of a particularly breathless article about a supposed advance in AI: “Resist the urge to be impressed.”

This story originally appeared in the Los Angeles Times.