Most U.S. state agencies use AI without limits or oversight. An investigative report examined why efforts to rein them in have made little headway.

What’s new: Since 2018, nearly every proposed bill aimed at studying or controlling how state agencies use automated decision systems, or ADS, has failed to become law, according to The Markup, a nonprofit investigative tech-journalism site. Insiders blame Big Tech.

Why it hasn’t happened: Reporters interviewed lawmakers and lobbyists about dozens of stalled bills. They found that bureaucracy and lobbying have played major roles in blocking legislation.

  • Bureaucratic roadblocks: Lawmakers reported difficulty learning from government agencies which AI tools they use and how they use them. This is partly due to the agencies’ lack of cooperation and partly because lawmakers don’t understand the technology well enough to probe the full range of potential uses.
  • Industry resistance: Tech companies and their lobbyists have stymied bills by arguing that their provisions are overly broad and would affect non-AI systems like traffic-light cameras, DNA tests, and gunshot analysis. In California, an alliance of 26 tech groups derailed a bill that would have required contractors to submit an impact report when bidding on government work. The groups argued that the legislation would limit participation, discourage innovation, and raise costs for taxpayers.

Behind the news: Although U.S. states place few limits on their own agencies’ use of AI, several restrict how private companies use it.

  • Last year, New York City passed a law that requires private employers to audit automated hiring systems for gender and racial bias before putting them to use. The law, which takes effect in 2023, also requires employers to notify candidates when hiring decisions are automated and to offer an alternative process.
  • A Colorado law set to take effect in 2023 will ban insurance companies from using algorithms that discriminate against potential customers based on factors including age, race, and religion. The law also establishes a framework for evaluating whether an insurance algorithm is biased.
  • Last year, Illinois required Facebook to pay $650 million to state residents under a 2008 law that limits how companies can obtain and use personal information, in this case image data used by Facebook’s now-defunct face recognition feature.

Why it matters: China, the European Union, and the United Kingdom have announced laws designed to rein in AI’s influence in business, society, and other domains. The lack of such limits in the U.S. makes it an outlier. On one hand, this leaves authorities free to experiment and perhaps discover productive use cases. On the other, it invites abuse of, or simply poor quality control over, a technology that has great potential for both good and ill.

We’re thinking: Regulation done badly is a drag on progress. Done right, though, it can prevent harm, level the playing field for innovators, and ensure that benefits are widespread. The AI community should push back against special interests that block socially beneficial regulation, even when we ourselves stand to profit.
