On a recent visit to the Museum of the Moving Image in Astoria, I strolled directly from a 21st-century Queens sidewalk into a living room straight out of the 1960s, outfitted with patterned wallpaper, two armchairs and a large console TV. Onscreen, the venerable CBS anchor Walter Cronkite says solemnly, “If all goes well, Apollo 11 astronauts are going to lift off from pad 39A out there . . . on the voyage man always has dreamed about. Next stop for them: the moon.”

It’s typical news coverage; we see excited crowds, astronauts waving, 3-2-1 blastoff. But then, something happens. There’s a flurry of quick cuts, the kind we’re used to seeing in dramatic movies. There’s static. And then a special report breaks in: President Richard Nixon is sitting at his desk in front of an American flag.

“The fates have ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace,” Nixon says.

Or so it seems. The living-room set is part of an elaborate presentation housing an unsettling short film by Francesca Panetta and Halsey Burgund, positing an alternate history of the Apollo 11 moon mission. Titled In Event of Moon Disaster and produced by the MIT Center for Advanced Virtuality, the film won an Emmy last year.

The film now serves as the centerpiece of Deepfake: Unstable Evidence on Screen, a new exhibit at the Museum of the Moving Image that explores and contextualizes deepfakes: synthetic media videos in which a real-life person appears to say or do something they haven’t actually said or done, fashioned with the use of artificial intelligence.

It really does seem to be Nixon in the film: the little shakes and nods of his head, the voice. Except, of course, it isn’t. It’s a simulation, created by merging archival video with the recorded performance of an actor.

“We use Nixon’s resignation speech as the original video that then gets manipulated,” said co-director Panetta. “The emotion in Nixon’s face, all of the original body language, the page turning: all of that really is real. But we have overlaid it, manipulated it, with another very emotional speech”: specifically, an actual script written for the president by William Safire, in case the astronauts died before they could return to Earth.

[Image: Entrance to the new exhibition “Deepfake: Unstable Evidence on Screen” at the Museum of the Moving Image. Photo: Thanassi Karageorgiou / Museum of the Moving Image]

Moon Disaster is a perfect example of a deepfake: You can’t tell by watching that what you’re seeing isn’t real. Co-curator Barbara Miller, deputy director for curatorial affairs at the museum, said that’s why they built an exhibit around it.

“If something like this is possible, then all history seems to be up for grabs, so, you know, it’s a warning,” she said. Even so, she added, “we want people to understand that it’s really about questions of application, about context, about use, about intention.”

“The technology itself is agnostic,” Moon Disaster co-director Burgund said. In displaying it, the curators and directors share a dual mission: to show people what’s possible, so they’re not fooled; and to give them the tools to spot deepfakes.

How can you spot a deepfake?

And for the moment, they can be spotted. Most deepfake creators don’t use the painstaking method the Moon Disaster directors used. Instead, they rely on inexpensive, widely available software.

During my visit, Miller and her co-curator, Joshua Glick, a professor at Hendrix College in Arkansas and an MIT fellow, pointed to two rows of screens that face each other in one gallery. One video appears to show the late Beatles singer John Lennon saying he hates music, but loves podcasts.

“There's a kind of sheen or shine to the cheeks and forehead, the sort of dark area between the chin and neck,” Glick said. “There's a little bit of blur . . . and a little bit of jitteriness, as well.” Often, he noted, the subject doesn’t blink, and the teeth don’t look quite right.

The deepfakes in this gallery are made just for fun, created by satirists and fans. In one fanciful video from the United Kingdom’s Channel 4, Queen Elizabeth seems to go from delivering a Christmas message to busting out some dance moves.

In another, the Flemish Socialist Party of Belgium created a video lampooning President Donald Trump’s decision to withdraw from the Paris Climate Agreement.

Are deepfakes all bad?

In fact, there are plenty of positive and even serious uses for this kind of work. David France, the director of the documentary Welcome to Chechnya, used synthetic media to disguise the identities of activists fighting LGBTQ persecution. The Deep Nostalgia app allows families to create animated videos of loved ones from photographs. One activist group even used artificial intelligence to digitally resurrect Joaquin Oliver, a victim of the 2018 school shooting in Parkland, Florida, so he could advocate for gun control. “Yo, it’s me, it’s Guac,” he says, looking into the camera. “I’ve been gone for two years, and nothing’s changed, bro.”

But there are bad actors, as well. One of the most nefarious uses of deepfakes is in pornography, where there’s high demand for celebrity faces on other bodies. And although it hasn’t yet come to pass, observers have worried that deepfakes could be used to sway an election.

Right now, deepfakes scare lots of people because they’re new, and the possible negative repercussions seem enormous. In actuality, they’re just the latest step in a long history of technology used to manipulate audiences, one that stretches back to the beginning of film. To illustrate this point, the exhibit includes a gallery of screens that show the kinds of manipulation we’re more familiar with: editing, costumes, screens, misdirection.

[Image: Part of the “Deepfake” exhibition shows visitors how to spot fake media featuring real people. Photo: Thanassi Karageorgiou / Museum of the Moving Image]

For example, an 1899 short from the Edison Manufacturing Company purports to show the raising of the American flag over Havana during the Spanish-American War. Today, we immediately notice the painted backdrop, the staged set. We’re not fooled. But when film was a novelty, audiences were less savvy.

Another screen shows Rodney King being beaten by police in 1991. We see it; we believe it. But the footage is juxtaposed with a police expert giving testimony in a courtroom, describing with words and gestures something completely different: the officers doing a fine job. Believing that misdirection, jurors acquitted all four officers of assault, and three of the four of using excessive force. But when what the jurors were made to believe in the courtroom conflicted with what people outside the courtroom saw on video, protests rocked Los Angeles.

“I think it gets to this idea of how do we agree or disagree if something's true or not,” Miller said, “which is the moment we’re in now.”

Seeking consensus in a feed full of deepfakes

Such concerns have led to a robust conversation among creators and media observers about ethics. “What can you edit, what can you stage, how do you need to indicate that to people, what does consent mean in this case?” said Moon Disaster co-director Panetta. “I think there is a desire to come up with a rule book really, really fast, because it’s really, really scary. But I also think it will be quite hard to have absolutes in the beginning, because the technology is developing very fast, and you don’t know what all the uses are going to be.”

Swiftly developing technology means our social-media feeds will be increasingly flooded with fakes, and, as time goes on, many of the tells will vanish, making the results seem ever more real. But the curators and filmmakers involved in this exhibition say audiences already have the tools to evaluate them, namely the questions we’d ask of any media: Where does this come from? Who made it? Who published it? And why are they releasing it now?

“We don’t want to make people fearful that everything we see is fake,” Burgund said. “We want people to be able to determine the difference between what’s real and what’s fake.”

Deepfake: Unstable Evidence on Screen

On view through May 15, 2022 at the Museum of the Moving Image