Society awakened to the delight, threat, and sheer weirdness of realistic images and other media dreamed up by computers.

What happened: So-called deepfakes became both more convincing and easier to make, stoking a surge of fascination and anxiety that shows every sign of intensifying in the coming year.

Driving the story: Two years ago, most deepfakes were pixelated and difficult to make. Now they’re slicker than ever and improving at a quick clip.

  • Late 2018 brought standout models like BigGAN, which generates images of the object classes in ImageNet, and StyleGAN, which produces photorealistic faces with variations in pose, hairstyle, and clothing. In early 2019, researchers also developed a network that creates realistic talking-head models from a single photo, raising doubts about whether people actually said the things you watched them say.
  • The technology found positive uses such as making English football star David Beckham appear to deliver an anti-malaria message in nine languages. Chinese tech giant Momo released Zao, an app that maps users’ faces onto characters in scenes from popular movies.
  • Yet deepfakes also showed their dark side. Scammers bilked a UK energy company of hundreds of thousands of dollars using fake audio of the CEO’s voice. The technology was implicated in political scandals in Malaysia and Gabon.
  • A report by Deeptrace Labs, which sells deepfake detection software, found that 96 percent of deepfake videos online were non-consensual porn, mostly faces of female celebrities mapped onto the bodies of adult-film performers.

The reaction: Facebook, beset by a fake video of CEO Mark Zuckerberg appearing to gloat about his power over the social network’s members, announced a $10 million contest to automate deepfake detection. Meanwhile, China enacted restrictions on spreading falsified media. In the U.S., the state of California passed a similar law, and the House of Representatives considered national anti-deepfake legislation.

Where things stand: Detecting and controlling deepfakes is shaping up to be a high-tech game of cat and mouse. Although today’s fakes bear telltale artifacts, they’ll be indistinguishable from real images within a year, according to USC computer science professor Hao Li.
