Facing a tsunami of user-generated disinformation, YouTube is scrambling to stop its recommendation algorithm from promoting videos that spread potentially dangerous falsehoods.

What’s new: The streaming giant developed a classifier to spot conspiracy theories, medical misinformation, and other content that may cause public harm. Wired detailed the effort.

How it works: In a bid to boost total viewership to one billion hours a day, YouTube years ago tweaked its recommendation algorithm to favor videos that racked up high engagement metrics such as long watch times and lots of comments. Those changes inadvertently rewarded videos that expressed extreme, inflammatory, and often misleading perspectives. Since then, the company has largely automated the detection and deletion of videos that violate its rules by promoting violence or pornography. But potentially harmful clips that don’t break those rules, like conspiracy theories, posed a tougher challenge.

  • Reviewers watched videos and answered a questionnaire that asked whether each video contained various types of offensive or borderline content, including conspiracy theories, urban legends, and contradictions of scientific consensus. Doctors reviewed the factual claims in videos with medical content.
  • YouTube’s engineers turned the answers into labels and used the dataset to train a classifier (a rough sketch of this kind of pipeline appears after this list). The model learned to recognize problematic clips based on features including titles, comments, and videos viewed before or after.
  • Given a new video, the classifier assigns a score that represents how extreme it is. The recommendation algorithm then adds this score to other weights when deciding whether to include the video in a given user’s queue.
  • The system reduced overall watch time of conspiracy videos and similar content by 70 percent last year, the company said.
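
The report doesn’t include code, but a minimal sketch of the kind of pipeline described above — reviewer answers turned into labels, a classifier trained on video features, and its score folded into ranking — might look like the following. Everything here (the toy data, the penalty weight, the choice of tf-idf plus logistic regression) is a hypothetical illustration, not YouTube’s actual system.

```python
# Minimal sketch of the pipeline described above. All data, features, and
# weights are hypothetical illustrations, not YouTube's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1. Reviewers' questionnaire answers become labels
#    (1 = borderline/harmful, 0 = acceptable).
video_text = [
    "flat earth PROOF they don't want you to see",
    "miracle cure doctors are hiding from you",
    "how to bake sourdough bread at home",
    "review of the latest smartphone camera",
]
labels = [1, 1, 0, 0]

# 2. Train a classifier on text features (the real system reportedly also
#    uses signals such as comments and videos viewed before or after).
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(video_text, labels)

# 3. Score a new video and fold that score into ranking alongside
#    engagement signals, down-weighting likely borderline content.
def ranking_score(engagement: float, text: str, penalty: float = 5.0) -> float:
    borderline_prob = classifier.predict_proba([text])[0, 1]
    return engagement - penalty * borderline_prob

print(ranking_score(3.2, "secret truth about vaccines revealed"))
print(ranking_score(3.2, "beginner guitar lesson: chords and strumming"))
```

The key design point mirrors the description above: the classifier doesn’t delete anything. Its score is simply one more term the recommender weighs against engagement signals when deciding what to put in a user’s queue.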

Behind the news: YouTube’s recommendation algorithm has a problematic history.

  • In 2019, researchers found that it recommended videos of children wearing swimsuits to users who had just viewed sexually suggestive content about adults.
  • Last September, a trio of YouTubers demonstrated that the company’s system wouldn’t sell advertising on non-explicit videos with words like gay or lesbian in the title, depriving their creators of revenue.
  • After the U.S. Justice Department released the findings of its investigation into President Trump’s alleged collusion with Russia during his election campaign, former YouTube engineer Guillaume Chaslot found that the site’s recommendation algorithm favored videos about the investigation from Russia Today, a news site funded by the Russian government.

Why it matters: YouTube is the world’s biggest video streaming service by far, and the titles it recommends inform — or misinform — millions of people.

We’re thinking: There’s danger in any company taking on the role of arbiter of truth and social benefit, but that doesn’t mean it shouldn’t moderate the content it delivers. As the world faces multiple crises from Covid-19 to climate change, it’s more important than ever for major internet companies to stanch the flow of bad information.
