Facebook’s management blocked the architect of its recommendation algorithms from mitigating their negative social impact, MIT Technology Review reported.

What’s new: The social network focused on reining in algorithmic bias against particular groups of users at the expense of efforts to reduce disinformation and hate speech, according to an in-depth profile of Joaquin Quiñonero Candela, who designed Facebook’s early recommenders and now leads its Responsible AI team.

The story: The article traces Quiñonero’s effort to balance the team’s mission to build trustworthy technology with management’s overriding priorities: boosting user engagement and avoiding accusations that the company favored one political faction over another.

  • Quiñonero joined Facebook in 2012 to lead an effort to build models that matched advertisements with receptive users. That effort successfully boosted user engagement with ads, so he designed similar systems to fill news feeds with highly engaging posts, comments, and groups.
  • His team went on to build a machine learning development platform, FBLearner Flow, that was instrumental in helping Facebook scale up its AI efforts. It enabled the company to build, deploy, and monitor over a million models for tasks like image recognition and content moderation, while models tuned to maximize engagement inadvertently amplified disinformation and hate speech.
  • In 2018, Quiñonero took charge of Responsible AI to investigate and resolve such issues. The team developed models that attenuated the flow of disinformation and hate speech, but because those models also diminished engagement, management redirected and deprioritized the work.
  • Facebook’s leadership, under pressure from critics who charged that the network favored left-wing over right-wing political views, directed the team to focus on mitigating bias. The new direction diverted attention away from staunching extremist content and toward tools like Fairness Flow, which measures models’ relative accuracy when analyzing data from different user demographics (a simplified sketch of that kind of per-group comparison follows this list).
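
The article describes Fairness Flow only at a high level. As a rough illustration of the general idea rather than Facebook’s actual tool, the sketch below (the function names and toy data are invented for this example) compares a classifier’s accuracy across demographic groups and reports each group’s accuracy relative to the best-served group.

```python
from collections import defaultdict


def accuracy_by_group(y_true, y_pred, groups):
    """Compute a model's accuracy separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for label, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(label == pred)
    return {g: correct[g] / total[g] for g in total}


def relative_accuracy(per_group):
    """Express each group's accuracy relative to the best-served group."""
    best = max(per_group.values())
    return {g: acc / best for g, acc in per_group.items()}


if __name__ == "__main__":
    # Toy labels and predictions from a hypothetical content classifier,
    # with each example tagged by a made-up demographic group.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    per_group = accuracy_by_group(y_true, y_pred, groups)
    print(per_group)                      # {'A': 0.75, 'B': 0.5}
    print(relative_accuracy(per_group))   # {'A': 1.0, 'B': 0.666...}
```

A gap between groups in such a comparison flags uneven performance; it says nothing about whether a model amplifies disinformation or hate speech, which is the distinction the article draws.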

The response: Facebook denied that it interfered with moves to reduce disinformation and hate speech. It also denied that politics motivated its focus on mitigating bias.

  • Facebook chief AI scientist Yann LeCun said the article mischaracterized how Facebook ranks content, Quiñonero’s role, and how his group operates.
  • The article made little mention of the company’s efforts to reduce the spread of divisive content, detect hateful memes, and remove hate speech. AI flagged around 95 percent of the hate speech removed from the network between last July and September, according to the company.
  • Facebook publicly supports regulations that would govern social media, including rules that would limit the spread of disinformation.

Why it matters: Facebook, like many AI companies, is struggling to balance business priorities with its social impact. Teams like Responsible AI are crucial to achieving that balance, and business leaders need to give them authority to set technical priorities and limits.

We’re thinking: The powers of AI can put machine learning engineers in the difficult position of mediating between business priorities and ethical imperatives. We urge business leaders to empower employees who try to do the responsible thing rather than throttling their work, even if it negatively impacts the bottom line.
