BBC Investigation Reveals Social Media Platforms Prioritized Engagement Over User Safety
Insiders and whistleblowers from major social media companies, including TikTok and Meta (owner of Facebook and Instagram), have revealed practices that allowed harmful content to spread on users’ feeds, according to an investigation by the BBC.
Sources said that internal research showed controversial content boosts engagement, prompting management decisions that allowed material ranging from violence and sexual exploitation to terrorist incitement to circulate. The goal was to increase the time users spent on the platforms.
A former Meta engineer disclosed that executives instructed teams to allow “borderline harmful content,” including hate speech and conspiracy theories, in order to compete with TikTok amid concerns over declining stock prices. Meanwhile, a former TikTok employee said that complaints about content harmful to children were deprioritized in favor of politically sensitive cases, reflecting the platform’s interest in maintaining strong relations with political authorities rather than prioritizing user protection.
Internal Meta documents showed that Instagram Reels, launched in 2020 to compete with TikTok, experienced higher levels of bullying, harassment, hate speech, and violent content than standard posts. Requests for additional safety staff to protect children and ensure election integrity were reportedly denied.
Former TikTok engineers noted that weekly recommendation algorithm updates led to increased exposure to “borderline” content, defined internally as legally permissible yet harmful material, including offensive, racist, sexual, and conspiratorial content. Whistleblowers also criticized inadequate reporting systems, leaving minors exposed to violent and harmful material. One teenager reported experiencing algorithm-driven radicalization starting at age 14.
Both companies responded: Meta denied deliberately manipulating harmful content for financial gain, while TikTok labeled the allegations “false,” emphasizing investments in content moderation and parental control tools.
The BBC investigation highlights the challenge social media platforms face in balancing user engagement with safety, showing that algorithm-driven competition may sometimes place profitability above protection.