The European Union proposed sweeping restrictions on AI technologies and applications.

What’s new: The executive arm of the 27-nation EU published draft rules that aim to regulate, and in some cases ban, a range of AI systems. The proposal is the first by a major international body to advance broad controls on the technology.

What it says: The 100-plus-page document divides AI systems into three tiers based on their level of risk. Its definition of AI includes machine learning approaches, logic-based approaches such as expert systems, and statistical methods.

  • The rules would forbid systems deemed to pose an “unacceptable” risk. These include real-time face recognition, algorithms that manipulate people via subliminal cues, and those that evaluate a person’s trustworthiness based on behavior or identity.
  • The “high risk” category includes systems that identify people; control traffic, water supplies, and other infrastructure; govern hiring, firing, or access to essential services; and support law enforcement. Such systems would have to demonstrate proof of safety, be trained using high-quality data, and come with detailed documentation. Chatbots and other generative systems would have to let users know they were interacting with a machine.
  • For lower-risk applications, the proposal calls for voluntary codes of conduct around issues like environmental sustainability, accessibility for the disabled, and diversity among technology developers.
  • Companies that violate the rules could pay fines of up to 6 percent of their annual revenue.

Yes, but: Some business-minded critics said these rules would hinder innovation. Meanwhile, human rights advocates said the draft leaves loopholes for applications that are nominally prohibited. For example, face recognition is prohibited only if it’s conducted in real time; it could still be used on video captured in the past.

Behind the news: Governments worldwide are moving to regulate AI. The U.S. Federal Trade Commission last week signaled its intent to take legal action against companies that make biased systems. A number of other countries including Australia, China, Great Britain, and India have enacted laws aimed at reining in big tech companies.

Why it matters: The EU’s AI proposal is the spiritual successor to its 2018 privacy law, the General Data Protection Regulation (GDPR). That law sparked a global trend as Brazil, China, India, and other countries proposed or enacted laws to protect user data. The new plan could have a similar impact.

We’re thinking: Despite its flaws, the GDPR drew a line in the sand and advanced the conversation about uses of personal data. While this new set of rules is bound to provoke criticism (some of it valid, no doubt), we welcome moves to promote regulation around AI and look forward to a spirited, global discussion.
