
Scrutiny of Facebook intensified after a whistleblower leaked internal research showing the company has known that its ongoing drive to engage users has harmed individuals and society at large.

What’s new: Former Facebook product manager Frances Haugen, in appearances on television and before the U.S. Congress, described how the company’s algorithms reward divisiveness and damage some users’ mental health, and how its policies allow prominent users to skirt its rules.

Whistle blown: Haugen, who worked on a team that aimed to combat expressions of hate, violence, and misinformation, revealed her identity this week after passing Facebook documents to The Wall Street Journal and the U.S. Securities and Exchange Commission (SEC), which oversees public companies. The revelations prompted a Senate hearing in which legislators questioned Facebook’s global head of safety and called for regulating the company. The documents revealed that:

  • Media companies and political organizations began prioritizing divisive and inflammatory content after Facebook revised its recommendation algorithm in 2018 to promote interaction among friends and family. They told the company they didn’t wish to promote such content but feared they wouldn’t reach users otherwise. Similarly, anti-vaccination activists gamed the system, undermining CEO Mark Zuckerberg’s own goal of promoting awareness of Covid vaccines.
  • Facebook subsidiary Instagram found that its app exacerbated feelings of inadequacy and depression in young people. Of teenage girls who used the app, 32 percent reported feeling worse about their bodies afterward, and 6 percent of U.S. teen users said the app caused them to consider suicide. In the wake of the revelations, Instagram suspended plans for a service tailored to kids.
  • Facebook exempts millions of so-called VIP users from its rules, letting them post calls for violence and disinformation that its own fact-checkers have deemed false.

Facebook’s response: The company said that press coverage of the documents had minimized its successes at blocking harmful content, pointing out that vaccine hesitancy among Facebook users had declined by 50 percent since January. Instagram said that building a service for kids is “the right thing to do,” especially since many younger users lie about their age to gain access, which is limited to those 13 and older. Nonetheless, it has paused plans to build such a service while it works to persuade parents and policymakers that it’s a good idea.

Behind the news: Facebook has aimed to counter adverse effects of its recommendation algorithms with ever more sophisticated content-moderation algorithms. It has developed AI systems to detect hate speech, harmful memes, and misinformation. Yet it hasn’t addressed longstanding complaints that it torpedoes any program that has a negative impact on user engagement — including the unit Haugen worked for, which the company dissolved after the 2020 election.

Why it matters: Algorithms that optimize engagement are a key driver of profit for social networks — yet, as the leaked documents show, they can have severe consequences. The resulting harms undermine public trust in AI, and they build support for laws that would limit social media platforms and possibly recommendation algorithms in general.

We’re thinking: Facebook has been under fire for years. Despite the company’s testimony in several congressional hearings, apologies, and promises to do better, little has changed. An investigation by the SEC could break the logjam. Meanwhile, if you work in AI, we urge you to consider whether your employment, net-net, improves society and, if not, to begin transitioning into a role that does.
