Fairness East and West

Looking at issues of AI bias and fairness in India.

[Image: Scale of justice symbol over a map of India]

Western governments and institutions struggling to formulate principles of algorithmic fairness tend to focus on issues like race and gender. A new study of AI in India found a different set of key issues.

What’s new: Researchers at Google interviewed dozens of activists, academic experts, and legal authorities about the ways AI is deployed in India, especially with respect to marginalized groups. In part, their goal was to demonstrate how Western notions of bias and power don’t always apply directly to other cultures.

What they found: The report highlights three ways in which issues surrounding AI in India differ from those in Western countries and may call for different approaches to achieve fairness.

  • Dataset bias: Half the Indian population lacks access to the internet, especially women and rural residents, so datasets compiled from online sources often exclude large swathes of society (see the sketch after this list). Fixing the problem goes beyond data engineering. It requires a comprehensive approach that includes bringing marginalized communities into the digital realm.
  • Civil rights: Many Indian citizens are subjected to intrusive AI, unwillingly or unwittingly. For example, some cities use AI to track the operational efficiency of sanitation workers, many of whom come from lower-caste groups. To address perceived abuses, Westerners typically appeal to courts, journalists, or activists. Many Indians, though, perceive such avenues to be largely unavailable.
  • Technocracy: India is eager to modernize, which motivates politicians and journalists to embrace AI initiatives uncritically. Compared with Western countries, India has fewer people in positions of power who are qualified to assess such initiatives, a prerequisite to making their fairness a priority.
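
To make the dataset-bias point concrete, here is a minimal sketch of the kind of representativeness audit the finding implies: compare each group's share of a web-collected dataset against a population baseline. The group names and all figures below are hypothetical placeholders, not numbers from the study or from any census.

```python
# Minimal sketch of a dataset-representativeness check.
# All figures are hypothetical placeholders, not numbers from
# the study or from any census.

population_share = {"urban": 0.35, "rural": 0.65}  # assumed baseline shares
dataset_share = {"urban": 0.78, "rural": 0.22}     # assumed web-scraped shares

# Report how far each group's share in the dataset deviates from the baseline.
for group, baseline in population_share.items():
    gap = dataset_share[group] - baseline
    print(f"{group}: dataset {dataset_share[group]:.0%} vs. "
          f"population {baseline:.0%} (gap {gap:+.0%})")
```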

Behind the news: Other groups have sought to highlight the outsized influence that Western notions of ethics have on AI worldwide.

  • The IEEE Standards Association recently investigated how applying Buddhist, Ubuntu, and Shinto-inspired ethics could improve responsible AI.
  • A 2019 study looked at how responsible AI guidelines should accommodate the massive influx of people who are newly online, many of whom live in countries like Brazil, India, and Nigeria.
  • A report published last year examined the social implications of AI in China and Korea.

Why it matters: Most research into AI fairness comes from a U.S.-centric perspective rooted in laws such as the Civil Rights Act of 1964, which outlaws discrimination based on race, color, religion, sex, and national origin. Guidelines based on a single country’s experience are bound to fall short elsewhere and may even be harmful.

We’re thinking: Many former colonies struggle with legal and educational systems imposed by Western powers. It’s important to avoid repeating similar patterns with AI systems.
