Amplified by Design: TikTok's Engagement Logic, Misogynistic Content, and the Limits of Platform Governance
April 2026 · Linnaeus University — Global Challenges in New Media and Management
A critical analysis of how TikTok's algorithmic design structurally amplifies misogynistic content, and why content moderation alone — from self-regulation to the EU Digital Services Act — fails to address the underlying problem.
Abstract
As of 2023, TikTok has surpassed one billion monthly active users, making it one of the most powerful communication infrastructures in the world. Behind its entertainment-based design operates a surveillance machine — a “dual accumulation model” that monetises user attention through advertising and algorithmically promotes whatever content maximises engagement time. This paper argues that the rise of misogynistic content on TikTok is not incidental but structural: the platform’s engagement logic rewards misogyny for its capacity to generate emotional response, and existing governance frameworks have failed to challenge this dynamic.
Presentation overview
The accompanying presentation summarises the paper’s argument across three core claims:
The platform is the problem
TikTok’s engagement-driven business model algorithmically rewards misogynistic content for its capacity to generate interaction. This is a structural outcome, not a moderation failure. The algorithm is indifferent to whether content is endorsed or condemned — it only measures interaction. Misogynistic content is doubly profitable: it generates both support and outrage, both of which register as engagement.
Governance has failed
Three levels of governance have been tried — and all three focus on content removal rather than the algorithmic logic that makes harmful content valuable to circulate:
- Platform self-regulation — Community standards and account bans produce only a “waterbed effect”: banned users migrate to Telegram or BitChute, while the algorithm that amplified them remains untouched.
- Advertiser-driven governance — Standards like GARM protect brand safety, not democratic values. They target illegal content only, leaving harmful-but-legal manosphere material within the brand safety floor.
- EU Digital Services Act — The most ambitious attempt, but still content-focused. It restricts speech only where “proportionate” and fails to address why harmful content circulates at scale in the first place.
Structural regulation is necessary
Effective governance requires three shifts:
- Content → algorithmic governance — Mandatory independent auditing of TikTok’s recommendation logic, with enforceable obligations to demonstrate that engagement-maximisation does not structurally advantage harmful content.
- Market values → human rights — Risk assessments measured against democratic communication rights, not legal thresholds. The relevant question is whether the platform’s design is compatible with democratic human flourishing.
- Incidental → structural harm — Gender-transformative governance that asks whether platform design systematically excludes women from democratic participation, rather than treating online misogyny as a contingent problem to be moderated away.