Artificially active

The Vancouver Art Gallery delves into artificial intelligence with The Imitation Game

Artificial intelligence is often taken for granted these days. Whenever you Google something or watch recommended videos on YouTube, your snout is firmly planted in the AI trough. And then there are your brainiac pals Siri and Alexa.

“Hey, Alexa, what’s the best way to learn about artificial intelligence these days?”

While we wait for a response from Amazon’s trusty virtual assistant, may I suggest The Imitation Game: Visual Culture in the Age of Artificial Intelligence. This is a new exhibition at the Vancouver Art Gallery that examines the development of AI, from the 1950s to the present day, through a historical lens.

Co-curated by VAG Senior Curator Bruce Grenville and computer animation and video game pioneer Glenn Entis, the in-depth exhibition took three years to prepare.

“It came out of a conversation we were having about the incredible presence given to deep learning and machine learning in many areas of animation,” Grenville says over the phone from the gallery. “There has been an extraordinary amount of energy and research to think about the uses of artificial intelligence and its impact on animation, but also on video games and how they can be produced.

“And one of our currents of activity here at the gallery is to examine how film and video, architecture, fashion, graphic design, industrial design and urban design are all part of a vast visual culture that we move through and encounter a little differently, maybe, or as an extension of some of the ways that we engage with visual art. So this is an opportunity to touch on a subject that we could look at in a broader way and follow how it’s really very present in all of our lives. That was kind of the starting point.”

The Imitation Game looks at the work of a number of artists, designers and architects. It features two major works by Sougwen Chung and Scott Eaton, as well as works by *airegan, Stafford Beer, BIG, Ben Bogart, Gui Bonsiepe, Muriel Cooper, DeepDream, Stephanie Dinkins, Epic Games, Amber Frid-Jimenez, Neri Oxman, Patrick Pennefather and WETA. The exhibition begins with an interactive introduction inviting visitors to actively identify various areas of cultural production influenced by AI.

Amber Frid-Jimenez, After Ballet Mécanique, 2018, 2-channel video (still).

“It was actually one of the pieces that we produced in conjunction with the Digital Media Centre,” Grenville notes, “so we worked with teams of graduate students who actually helped us design this software. It was the start of COVID when we started designing this, so we actually wanted touchless interaction. So really, the idea was to say, ‘Okay, this is the very entrance to the exhibition, and artificial intelligence, that’s something I’ve heard of but I’m not really sure how it’s used. But maybe I know something about architecture, maybe I know something about video games, maybe I know something about film history.’

“So you point to these 10 categories of visual culture – video games, architecture, fashion design, graphic design, industrial design, urban design – so you point to one of them, and you can point to ‘film’, then when you hover over it, it opens up to five different examples of what’s in the series, so it could be 2001: A Space Odyssey or Blade Runner or World on a Wire.”

After the exhibition’s introduction – which Grenville likens to “opening the door to your curiosity” about artificial intelligence – visitors encounter one of its main categories, Objects of Wonder, which charts the history of AI and the critical advances the technology has made over the years.

“So there are 20 wonder objects,” says Grenville, “that range from 1949 to 2022, and they sort of chart the history of artificial intelligence over that time period, each focusing on one specific object. Like [mathematician and philosopher] Norbert Wiener created this cybernetic creature, which he called a ‘Moth’, in 1949. So there’s a section that looks at this idea of using animals – well, animal-machines – and thinking about cybernetics, this idea of communication as feedback, early thinking about neuroscience and how neuroscience was starting to imagine this idea of a thinking machine.

“So it’s one of the first. Alan Turing’s [Computing Machinery and Intelligence] text is another object of wonder. This idea of an ‘imitation game’ comes from Turing, as does the very idea of computational thinking. So there’s a wide variety of examples, but it’s the 20 wonder objects, right down to emotion recognition – we have another interactive piece where people can explore how emotion-recognition software is being used today in different contexts.”

Neri Oxman and the Mediated Matter group at MIT, Golden Bee Cube, Synthetic Apiary II, 2020, beeswax, acrylic, gold particles, gold powder.

While the stated goal of The Imitation Game is to investigate the extraordinary uses of artificial intelligence in the production of modern and contemporary visual culture across the globe, the exhibition also delves into the inherent abuses of AI, including racial bias.

“It’s interesting,” Grenville muses. “Artificial intelligence is pretty much unregulated. You know, if you think about the regulatory bodies that govern television or radio or any kind of telecommunications, there’s no equivalent for artificial intelligence, which really goes unchecked. And so what happens is that, sometimes with the best intentions, sometimes not with the best intentions, choices are made about how artificial intelligence is used.

“And what happens is sometimes the decisions that are made on, like, ‘How do you test this? What does a face look like? How is that face mapped?’ – those kinds of things are based on both intentions and biases. And so what became very clear even soon after facial recognition software became very prominent is that there were huge questions around it. You know, where do these images come from? These are some of the complications of artificial intelligence. Like, if you’re training a model to recognize a human face, how is it trained? What dataset is it trained on? If it doesn’t accurately identify people of color, why is that happening?”

One of the most intriguing – and potentially terrifying – aspects of the AI universe is “deepfakes”, which use machine learning and artificial intelligence techniques to generate visual and audio content designed to deceive. Take, for example, the panic that could ensue, especially today, if a bad actor created a perfectly believable deepfake of Vladimir Putin claiming he had just sent nuclear missiles into Manhattan.

Epic Games, MetaHuman Creator, 2022, video (excerpt).

This prospect doesn’t faze Grenville much, however.

“What’s scary about that?” he asks. “Do you think people would make decisions based on someone posting a video that says we’re sending atomic bombs at you? I think a deepfake is one of those things we imagine could be something awful, but it’s more, like, scary-monster stuff rather than fact-based.

“I think bias is a much scarier issue than deepfakes,” he adds. “Racial bias in artificial intelligence is a big problem, and unless that’s fixed, we’ve got serious problems. You know, a Putin deepfake, I don’t think that’s going to be a problem.”

The Imitation Game: Visual Culture in the Age of Artificial Intelligence runs from March 5 to October 23 at the Vancouver Art Gallery.