Smaller Models, Bigger Biases

Compressed face recognition models have stronger bias.

Image recognition examples

Compression methods like parameter pruning and quantization can shrink neural networks for use in devices like smartphones with little impact on overall accuracy. But do compressed models perform less well for underrepresented groups of people? Yes, according to new research: compression also exacerbates a network’s bias.

What’s new: A Google team led by Sara Hooker and Nyalleng Moorosi explored the impact of compression on image recognition models’ ability to perform accurately across various human groups. The authors also proposed a way to rank individual examples by how difficult they are to classify.

Key insight: In earlier work, members of the team showed that compressed image recognition models, although they maintained their overall accuracy, had trouble identifying classes that were rare in their training data. To learn whether that shortcoming translates into bias against underrepresented groups of people, the researchers trained models to recognize a particular class (people with blond hair), compressed them, and measured how their accuracy differed across demographic groups. Comparing compressed and uncompressed models in this way showed how compression affects performance on underrepresented groups.

How it works: The authors trained a set of ResNet-18s on CelebA, a dataset of celebrity faces, to classify whether the person pictured has blond hair. (CelebA is notorious for producing biased models.) Then they compressed the models using various combinations of pruning and quantization.
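
As a rough illustration (the paper’s exact settings and code aren’t given, so the details below are assumptions), the compression step might look like the following PyTorch sketch: it prunes 95 percent of a trained blond/not-blond ResNet-18’s weights by global magnitude and applies post-training dynamic quantization. Training and data loading are omitted.

```python
# Illustrative sketch (not the authors' code): compress a trained
# blond/not-blond ResNet-18 by magnitude pruning and post-training
# dynamic quantization. The 95 percent pruning level matches the
# Results below; other settings are assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

model = models.resnet18(num_classes=2)   # assume weights already trained on CelebA

# Prune 95 percent of conv/linear weights by global L1 magnitude.
prunable = [(m, "weight") for m in model.modules()
            if isinstance(m, (nn.Conv2d, nn.Linear))]
prune.global_unstructured(prunable,
                          pruning_method=prune.L1Unstructured,
                          amount=0.95)
for module, name in prunable:            # bake the zeroed weights in
    prune.remove(module, name)

# One simple quantization option: post-training dynamic quantization
# of the linear layers to 8-bit integers.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear},
                                                dtype=torch.qint8)
```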

  • Using both compressed and uncompressed models, they predicted blond/not-blond labels for the CelebA test set. They compared the performance of uncompressed and compressed models in classifying pictures of young people, old people, men, women, young men, old men, young women, and old women. This gave them a measure of how compression affected model bias against these groups.
  • To rank examples by how difficult they were to classify, the authors took, for each example, the difference between the numbers of “blond” predictions made by the uncompressed and compressed models and added the difference between their numbers of “not blond” predictions. The sum served as a score of how consistently the models labeled that example.
  • To make it easier to study various combinations of image and model, the researchers used a variable threshold to identify the least consistently labeled examples by percentage (designated “CIE” in the gallery above). A sketch of this scoring and thresholding follows this list.
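
Below is a hedged sketch (not the authors’ code) of how the consistency score and percentage threshold described above could be computed, assuming 0/1 “blond” predictions from a population of uncompressed models and a population of compressed models are already in hand; the function and variable names are illustrative.

```python
# Hedged sketch (not the authors' code) of the consistency score and
# percentage threshold described above. Assumes 0/1 "blond" predictions
# (1 = blond) from a population of uncompressed models and a population
# of compressed models; names and shapes are illustrative.
import numpy as np

def consistency_scores(uncompressed_preds, compressed_preds):
    """Score each example by disagreement between the two model populations.

    Each argument has shape (n_models, n_examples). The score adds the
    difference in "blond" counts to the difference in "not blond" counts;
    a higher score means a less consistently labeled example.
    """
    blond_diff = np.abs(uncompressed_preds.sum(axis=0)
                        - compressed_preds.sum(axis=0))
    not_blond_diff = np.abs((1 - uncompressed_preds).sum(axis=0)
                            - (1 - compressed_preds).sum(axis=0))
    return blond_diff + not_blond_diff

def least_consistent(scores, fraction=0.01):
    """Indices of the `fraction` least consistently labeled examples
    (the CIE examples); `fraction` is the variable threshold."""
    k = max(1, int(round(len(scores) * fraction)))
    return np.argsort(scores)[-k:]
```

Accuracy on the examples flagged by least_consistent can then be compared with accuracy on the full test set, which is the comparison reported under Results below.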

Results: Pruning 95 percent of model parameters boosted the false-positive “blond” rate for women (who made up 14 percent of the dataset) by an average of 6.32 percent, but it increased that rate for men (less than 1 percent of the dataset) by 49.54 percent. (The authors didn’t report corresponding results for models compressed by quantization.) Furthermore, the ranking method succeeded in identifying the examples that were most difficult to classify. A 95-percent pruned model was 93.39 percent accurate over the entire dataset but only 43.43 percent accurate on the 1 percent of least consistently labeled examples. An unpruned model had much the same trouble: it was 94.76 percent accurate over the entire dataset but 55.35 percent accurate on the 1 percent of least consistently labeled examples.

Why it matters: Model compression is an important part of practical deployments: Shipping a 10MB neural network for a mobile device is much more acceptable than shipping a 100MB model. But if compression exacerbates biases, we must systematically audit and address those issues.

We’re thinking: This work is a reminder that it’s not enough to optimize overall classification accuracy. We need to make sure our models also perform well on various slices of the data.
