AI’s ability to produce synthetic pictures that fool humans into believing they’re real has spurred a race to build neural networks that can tell the difference. Recent research achieved encouraging results.

What’s new: Sheng-Yu Wang and Oliver Wang teamed up with researchers from UC Berkeley and Adobe to demonstrate that a typical discriminator (the component in a generative adversarial network, or GAN, that judges whether output is real or synthetic) can recognize fakes generated by a variety of image generators.

Key insight: The researchers trained the discriminator on a dataset made up of images created by diverse GANs. Even two training examples from an unrelated generator improved the discriminator’s ability to recognize fake images.
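
As a rough illustration (not the authors’ code), a real-vs-fake detector can be framed as a binary image classifier. The article describes reusing a GAN’s discriminator; the sketch below substitutes a generic pretrained backbone, and the architecture, loss, and learning rate are assumptions made for the example.

```python
# Minimal sketch of a real-vs-fake classifier, assuming PyTorch and
# torchvision >= 0.13. The backbone and hyperparameters are illustrative,
# not the authors' published setup.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone with a single-logit head: logit > 0 means "synthetic".
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (B, 3, H, W) float tensor; labels: (B,) floats, 1 = fake, 0 = real."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```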

How it works: The researchers compared the performance of ProGAN’s discriminator when trained on ProGAN output and on their own dataset.

  • The training set comprised 18,000 real images and 18,000 ProGAN images from the 20 object categories in the LSUN dataset, along with augmented versions of those images. The validation set consisted of 100 real and synthetic images per category. The researchers also created the ForenSynths test dataset, which consists of real and synthetic images from 11 GANs.
  • Blur and compression were applied to the training data, though testing was not performed on augmented images (a sketch of such an augmentation step appears after this list).
  • Augmentation improved performance on the whole, though some GANs evaded detection better than others.
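
For the blur and compression augmentations in the list above, a hypothetical preprocessing step might look like the sketch below; the probabilities, blur strength, and JPEG quality range are placeholders, not the paper’s published settings.

```python
# Illustrative blur + JPEG-compression augmentation using Pillow.
# All parameter ranges here are assumptions for the sketch.
import io
import random
from PIL import Image, ImageFilter

def augment(img: Image.Image, blur_prob: float = 0.5, jpeg_prob: float = 0.5) -> Image.Image:
    # Randomly soften the image with Gaussian blur.
    if random.random() < blur_prob:
        img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.0, 3.0)))
    # Randomly re-encode as JPEG at a reduced quality to simulate compression artifacts.
    if random.random() < jpeg_prob:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=random.randint(30, 95))
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
    return img
```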

Results: ProGAN’s discriminator distinguished real from fake images 80 percent of the time. Accuracy rose to 82.3 percent after adding two training examples from another generator (and allowing the discriminator to adjust its confidence threshold), and to 88.6 percent with many such examples. The researchers also compared the real images used to train the generators with 2,000 fake images from each one. They found no discernible pattern in a frequency-domain representation of the real images, but distinctive patterns in the output of every generator. These subtle patterns, they conjecture, enabled the discriminator to generalize to the output of unrelated generators.
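
The frequency analysis described above can be approximated by averaging the Fourier spectra of many images and inspecting the result for periodic peaks. The sketch below assumes a NumPy workflow on grayscale inputs; the researchers’ exact preprocessing may differ.

```python
# Rough sketch of an average-spectrum comparison between real and generated images.
# Assumes NumPy; input is a stack of grayscale images scaled to [0, 1].
import numpy as np

def average_spectrum(images: np.ndarray) -> np.ndarray:
    """images: (N, H, W) array. Returns the mean log-magnitude Fourier
    spectrum, shifted so low frequencies sit at the center."""
    spectra = [np.log(np.abs(np.fft.fftshift(np.fft.fft2(img))) + 1e-8) for img in images]
    return np.mean(spectra, axis=0)

# Visualizing average_spectrum(real_images) next to average_spectrum(fake_images),
# e.g. with matplotlib's imshow, would show whether the generated images carry
# periodic peaks that the real images lack.
```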

Yes, but: The authors’ approach does a fairly good job of spotting run-of-the-mill GAN output. But a determined malefactor could release only the generated images that evade detection.

Why it matters: Prior research didn’t envision that a single discriminator could learn to recognize fakes from diverse, unrelated generators. Current generators apparently leave common traces — a hopeful prospect for developing more capable fake detectors. Of course, that could change tomorrow.

We’re thinking: Your move, fakers.
