Whistleblowers Say TikTok and Meta Put Safety at Risk in Algorithm Race

The Chronify

TikTok and Meta are facing fresh scrutiny after whistleblowers told the BBC that both companies made decisions that exposed users to more harmful content while competing to build more engaging recommendation systems. The allegations are featured in the BBC documentary Inside the Rage Machine, which examines how platform design choices amplified outrage, division and abuse as social media firms fought for user attention.

According to the BBC’s reporting, former insiders said internal research showed that anger and outrage drove engagement, yet company leaders still pushed products and policy choices that allowed more harmful material onto user feeds. One Meta engineer said managers wanted more “borderline” harmful content in feeds to compete with TikTok, while a former TikTok employee alleged that some cases involving politicians were prioritized over reports of harmful content affecting children. These claims have not been independently verified by other outlets, but they add to a wider body of reporting and regulatory concern around recommender systems and child safety.

The BBC documentary also highlights concerns about Instagram Reels, which Meta launched in 2020 to respond to TikTok’s rapid rise. Former Meta researcher Matt Motyl told the BBC that Reels was rolled out without enough safeguards, and internal research shared with the broadcaster found higher rates of bullying, harassment, hate speech, and violence or incitement in Reels comments than elsewhere on Instagram. The Guardian’s television coverage of the documentary separately confirmed Motyl’s participation and described the program as an examination of how profit incentives shaped platform algorithms.

The new allegations land as regulators in the United Kingdom step up pressure on major platforms over harmful and addictive recommendation systems. Reuters reported on March 12 that Ofcom and the Information Commissioner’s Office ordered Meta, TikTok, Snap and YouTube to strengthen protections for children and enforce their minimum age rules more effectively by April 30. The move forms part of broader implementation of the UK’s Online Safety Act, which specifically requires platforms to assess how algorithms expose users to illegal or harmful material and to mitigate those risks.

Ofcom has said algorithms are a major pathway through which children encounter harmful content online. The regulator’s recent guidance and the UK government’s own explainer on the Online Safety Act both make clear that platforms must consider how recommendation systems, feed ranking and other design features shape exposure to harm. This regulatory backdrop gives added weight to the whistleblower claims, even though the companies have not publicly accepted the specific accusations aired in the BBC program.

The broader debate over platform safety is not new. Earlier whistleblower cases and investigative reporting had already raised concerns that major social media companies understood how engagement-driven systems could amplify harmful material. The latest claims against TikTok and Meta suggest those concerns remain unresolved, especially as platforms continue to balance user growth, political pressure and child safety under tougher regulatory oversight.

© 2025 Chronify. Chronify is not responsible for the content of external sites.