Digital Services Act: Is Article 14(4) Protecting Free Speech or Just Window Dressing?
Major social media platforms wield significant power over what we see and say online. Their content moderation policies—often shaped behind closed doors—have a direct bearing on our freedom of expression. In the European Union (EU), the Digital Services Act (DSA) seeks to shine a brighter light on these private content moderation decisions and anchor them in a framework that respects fundamental rights.
Within this ambitious legislative text lies Article 14(4), a provision meant to ensure that platforms’ enforcement of their terms and conditions respects users’ freedom of expression. But does Article 14(4) live up to its promise, or is it a “paper tiger” offering little real protection?
In this blog post, we draw on recent debates surrounding Article 14(4) of the Digital Services Act to explore whether this provision can meaningfully shield legitimate speech online or if it simply pays lip service to user rights.
1. The Rising Importance of Content Moderation
From Facebook and X (formerly Twitter) to TikTok, which recently came under scrutiny over its potential to upend electoral processes, platforms today operate as quasi-governors of speech. They set the rules (through their Terms and Conditions) and decide which comments or posts to remove, demote, or shadow ban. As a result, private companies, rather than courts or public bodies, routinely determine whether certain content stays up or gets taken down. This is particularly significant for the moderation of “lawful but awful” content: material that is not illegal per se but that platforms may nonetheless restrict, a key area of tension for freedom of expression.
Why It Matters: Freedom of expression is not absolute; it must often be balanced with competing concerns like preventing hate speech, disinformation, or a company’s freedom to conduct business. But when corporate policies become the default “law,” there is always a risk that legal speech gets overly restricted.
2. What Is Article 14(4)?
The Digital Services Act is the EU’s new, horizontal legislation governing online intermediaries and platforms. Article 14(4) specifically requires that platforms apply and enforce their Terms and Conditions “with due regard to the rights and legitimate interests of all parties involved, including the applicable fundamental rights of the recipients.”
A New Normative Baseline: Before the DSA, platforms had considerable freedom in policing speech. Article 14(4) aims to curb that autonomy by mandating explicit consideration of fundamental rights—above all, freedom of expression—in every content moderation decision.
Key Question: Does Article 14(4) grant users a direct right to invoke freedom of expression if their posts are removed? Or is it merely a symbolic reference to the EU Charter of Fundamental Rights that remains vague and toothless in practice?
3. Strengths & Potential Promise
a. Embedding Fundamental Rights
By insisting on “due regard” for user rights, the DSA underscores that private enforcement cannot ignore constitutional freedoms. This idea nudges platforms toward more transparent, accountable, and proportionate moderation.
Takeaway: Even if it does not give a straightforward, direct claim to users, this insertion of rights-based language can shape future litigation and how courts interpret platforms’ obligations.
b. Procedural Protections
The DSA’s procedural safeguards (Articles 17, 20, and 21) bolster Article 14(4), making its broad mandate operational. For instance:
Article 17 requires platforms to provide a meaningful “statement of reasons” when removing or restricting user posts.
Article 20 sets up an internal complaint-handling system, giving users a faster “second look” option before they resort to external remedies.
Article 21 outlines an out-of-court dispute settlement mechanism, allowing users to challenge platforms’ decisions without immediately heading to court.
Takeaway: If effectively implemented, these processes can reinforce the principle that content moderation must be balanced, justified, and open to challenge.
c. Systemic Risk Assessment
Articles 34 and 35 introduce systemic risk analysis: platforms must identify how their policies (including T&Cs) impact fundamental rights (like free expression) and propose “reasonable and proportionate” mitigation strategies.
Takeaway: This acknowledges that content moderation, especially at scale, has sweeping implications for democratic debate and fundamental rights. If done right, risk assessments could push platforms to proactively protect—rather than simply restrict—lawful speech.
4. Challenges & Criticisms
a. Vague Language, Uncertain Enforcement
Article 14(4)’s “due regard” is a broad term that gives little concrete guidance. Platforms may comply superficially, adding a rote reference to rights, without altering their moderation policies in meaningful ways.
Courts and regulators may also differ in how they interpret “due regard,” causing inconsistent outcomes across EU Member States.
b. Heavy Reliance on User Complaints
The DSA’s emphasis on internal complaint systems and out-of-court settlement procedures places the onus on users to defend their speech. Many lack the time, resources, or legal know-how to fight wrongful takedowns.
Risk: Over time, platforms could adopt conservative, risk-averse moderation policies, potentially leading to over-removal of borderline lawful (but “controversial”) content.
c. Potential for Overreach
The European Commission’s expanded role in overseeing content moderation practices, especially under the systemic risk framework, raises concern that the EU executive might push platforms to adopt stricter speech controls in the name of mitigating “risks.”
This could tip the balance away from freedom of expression if not carefully monitored.
d. Unclear Auditing Standards
Audits under Article 37 are meant to keep platforms accountable for how they manage risk. Yet there is no standardized checklist to evaluate how well fundamental rights are protected.
Risk: Audits could devolve into box-ticking exercises or rely too heavily on the platform’s self-reported data, missing larger structural problems in content moderation systems.
5. Conclusion: Is Article 14(4) a Game-Changer—or Just PR?
On paper, Article 14(4) is groundbreaking. It cements fundamental rights considerations into the legal bedrock of online content governance. Coupled with the DSA’s procedural tools and risk assessment framework, it signals a new era where tech giants must justify moderation decisions more transparently and safeguard freedom of expression.
Still, real-world impact hinges on enforcement and interpretation:
Will platforms meaningfully integrate freedom of expression checks? Or will they only pay lip service to human rights?
Will courts and regulators press for robust user protections? Or will they accept minimal compliance measures as good enough?
Early signs suggest a mixed picture. The DSA’s layered approach—procedural safeguards for user redress, systemic risk assessments, and fundamental rights references—is commendable. Yet many details remain unsettled, and the inherent tension between “lawful but awful” content and platform business interests still lingers.
Our Take:
Article 14(4) is neither a mere façade nor an ironclad guarantee. It is best viewed as a catalyst—a provision that, properly enforced and interpreted, can move platforms toward greater respect for free expression. Its vagueness reflects the inherent complexity of regulating speech, especially with regard to sensitive content where competing rights must be balanced.
How We Can Help
We offer practical advice at the intersection of the DSA and the AI Act. Whether you’re integrating AI-powered moderation tools or navigating the DSA’s systemic risk assessments, our team helps you comply with both regimes—balancing innovation with fundamental rights.