Is Ethical AI an Oxymoron? Survey finds tech pros are pessimistic about ethical AI.

Many people, both inside and outside the tech industry, believe that AI will serve mostly to boost profits and monitor people, without regard for negative consequences.

What’s new: A survey by Pew Research Center and Elon University asked 602 software developers, business leaders, policymakers, researchers, and activists: “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?” Sixty-eight percent said no.

What they found: Respondents provided brief written explanations of their views. Some of the more interesting responses came from the pessimists:

  • Ethical principles need to be backed up by engineering, wrote Ben Shneiderman, computer scientist at the University of Maryland. For example, AI systems could come with data recorders, like the black boxes used in aviation, for forensic specialists to examine in the event of a mishap. (A minimal sketch of this idea appears after this list.)
  • Because most applications are developed by private corporations, Gary Bolles of Singularity University argued, ensuring that AI benefits humankind will require restructuring the financial system to remove incentives that encourage companies to ignore ethical considerations.
  • The ethical outcomes of most systems will be too indirect to manage, according to futurist Jamais Cascio. For instance, stock-trading algorithms can’t be designed to mitigate the social impacts of buying shares in a certain company.
  • Many respondents pointed out the lack of consensus on which values ethical AI should uphold. For instance, should AI seek to maximize human agency or to mitigate human error?
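Shneiderman's proposal doesn't specify an implementation, but a minimal sketch of the black-box idea, written as an append-only audit log of individual model decisions, might look like the following. Every name, field, and file format here is a hypothetical illustration, not anything drawn from the survey:

```python
import json
import time
from pathlib import Path

# Hypothetical example: an append-only "flight recorder" for model decisions.
LOG_PATH = Path("decision_log.jsonl")

def record_decision(model_version, inputs, output, log_path=LOG_PATH):
    """Append one audit record so forensic reviewers can reconstruct a decision later."""
    entry = {
        "timestamp": time.time(),        # when the decision was made
        "model_version": model_version,  # which model produced it
        "inputs": inputs,                # what the model saw
        "output": output,                # what it decided
    }
    with log_path.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Usage: log a (made-up) loan decision for later review.
record_decision("credit-model-v3", {"income": 52000, "credit_score": 710}, "approved")
```

Like an aviation black box, the log is written at decision time and consulted only after a mishap; an append-only format also makes records harder to alter after the fact.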

Yes, but: Some respondents expressed a rosier view. Michael Wollowski, a professor of computer science at Rose-Hulman Institute of Technology, said, “Since the big tech companies (except for Facebook) by and large want to do good (well, their employees by and large want to work for companies that do good), they will develop their systems in a way that they abide by ethical codes. I very much doubt that the big tech companies are interested (or are able to find young guns [who are interested]) in maintaining an unethical version of their systems.”

Behind the news: Many efforts to establish ethical AI guidelines are underway. The U.S. military adopted its own code early last year, and the EU issued guidelines in 2019. The UN is considering rules as well. In the private sector, major companies including Microsoft and Google have implemented their own guidelines (although the latter’s reputation has been tarnished by the departures of several high-profile ethics researchers).

Why it matters: Those who fund, develop, and deploy AI also shape its role in society. Understanding their ideas can help ensure that this technology makes things better, not worse.

We’re thinking: Often we aren’t the ones who decide how technology will be used, but we can decide what we will and won’t build. If you’re asked to work on a system that seems likely to have a negative social impact, please speak up and consider walking away.
