Governments Lay Down the Law


Legislators worldwide wrote new laws — some proposed, some enacted — to rein in societal impacts of automation.
What happened: Authorities at all levels ratcheted up regulatory pressure as AI’s potential impact on privacy, fairness, safety, and international competition became ever more apparent.
Driving the story: AI-related laws tend to reflect the values of the world’s varied political orders, each striking its own balance between social equity and individual liberty.

  • The European Union drafted rules that would ban or restrict machine learning applications based on categories of risk. Real-time facial recognition and social credit systems would be forbidden. Systems that control vital infrastructure, aid law enforcement, or identify people based on biometrics would need to come with detailed documentation, demonstrate their safety, and remain under ongoing human oversight. Issued in April, the draft rules must undergo a legislative process including amendments and likely won’t take effect for at least another 12 months.
  • Beginning next year, China’s internet regulator will enforce laws governing recommendation algorithms and other AI systems that it deems disruptive to social order. Targets include systems that spread disinformation, promote addictive behavior, or harm national security. Companies must gain approval before deploying algorithms that might affect public sentiment, and those that defy the rules face a ban.
  • The U.S. administration proposed an AI Bill of Rights that would protect citizens against systems that infringe on privacy and civil rights. The government will collect public comments on the proposal until January 15. Below the federal level, a number of U.S. cities and states restricted facial recognition systems, and New York City passed a law that requires hiring algorithms to be audited for bias.
  • The United Nations’ High Commissioner for Human Rights called on member states to suspend certain uses of AI, including those that infringe on human rights, limit access to essential services, or exploit private data.

Behind the news: The AI community may be approaching a consensus on regulation. A recent survey of 534 machine learning researchers found that 68 percent believed that deployments should put greater emphasis on trustworthiness and reliability. The respondents generally had greater trust in international institutions such as the European Union or United Nations than in national governments.
Where things stand: Outside China, most AI-related regulations are pending approval. But the patchwork of proposals suggests a future in which AI practitioners must adapt their work to a variety of national regimes.
