How Online Disinformation Laws Risk Undermining Democracy
Newsletter Issue 87: Disinformation laws risk weakening democracy when legal and political power over digital speech undermines free expression, accountability, and open public debate.
Disinformation laws are often drafted with a confidence that is not matched by their democratic legitimacy. Legal systems appear willing to trade precision, restraint, and procedural safeguards for speed and symbolic reassurance, even when the subject matter sits at the core of political expression. This trend deserves sustained criticism, not because false information is harmless, but because law that intervenes too broadly in public discourse alters how democratic power is exercised, delegated, and justified.
The Legal Campaign Against Disinformation and Its Democratic Risks
The current generation of laws meant to counter falsehood in the digital world carries a real risk of weakening the very democratic values they claim to defend. It is easy to accept the impulse behind these laws. False information can corrode trust, deepen division, and influence the course of public decision making in ways that are harmful to societies.
Nonetheless, the responses that legal systems are adopting raise questions about freedom of speech, accountability, and the role of government and private technology companies in controlling what citizens are allowed to know and say.
Governments are passing statutes and regulators are enforcing rules with significant consequences. Individuals are being asked to adapt to legal frameworks that touch on fundamental liberties while technology platforms are being pressed into roles that shape political discourse.
The objective of this newsletter is to examine the tension between the well-understood harms of online disinformation and the rights that underpin democratic governance.
This newsletter will trace legislative frameworks, highlight relevant legal developments globally, and critically assess how these laws are playing out as real world tools that interact with technology and civil liberties.
The Rise of Disinformation Regulation
In recent years, the response to disinformation has included a mix of statutory mandates and regulatory expectations. In the European Union, the Digital Services Act (Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022) is central to how online content is regulated.
It imposes new responsibilities on very large online platforms and search engines, aimed at making content moderation more transparent and, under certain conditions, curbing harmful content including disinformation.
The European Commission has emphasised that this is part of broader efforts to protect democratic processes and strengthen public debate.
In the United Kingdom, the Online Safety Act 2023 creates an expansive set of obligations on providers of internet services to manage harmful content.
Civil liberties organisations have argued that the Act grants broad powers to regulate speech and encourages platforms to take down content pre-emptively to avoid liability.
Critics such as the Open Rights Group and Article 19 have warned that such mechanisms risk impinging on freedom of expression and access to information.
Beyond Europe, legal frameworks are emerging that criminalise or civilly sanction the communication of false information. In the Indian state of Karnataka, a draft bill proposes jail terms for spreading so-called fake news, defined in terms vague enough to reach anti-feminist content and the promotion of superstition. Free speech advocates warn that such legal language could sweep in a vast range of expression and be applied inconsistently.
Singapore’s Protection from Online Falsehoods and Manipulation Act was passed in 2019 to counter deliberate online falsehoods while attempting to exclude opinion and satire, yet concerns remain about the broad discretion it grants authorities and its potential chilling effect on legitimate discourse.
Ethiopia’s Hate Speech and Disinformation Prevention and Suppression Proclamation makes social media users and intermediaries criminally liable for posts deemed to disrupt public order, showing how legal tools can extend to severe penalties for online speech. Human rights organisations have criticised this law as infringing on fundamental expression rights.
These legislative efforts reflect the global urgency with which false information online is treated. However, the very design of these laws often places enormous discretionary power in the hands of regulators or platform operators to decide what content is permissible.
Democracies are rethinking how they legislate free speech in an environment where digital systems mediate so much of public discourse, but many of the legislative responses are bending longstanding protections in ways that deserve careful scrutiny.
The Legal and Constitutional Debate
The core tension at the heart of these laws is the balance between preventing harm and protecting freedom of expression. Democratic constitutions and human rights frameworks often guarantee robust rights to free speech.
In the United States, the First Amendment has been interpreted to protect even false statements in many contexts unless they harm specific individuals through defamation or similar legal channels.
Attempts at regulating deepfakes and election-related speech have encountered immediate legal challenge on constitutional grounds. In 2025, X Corp filed a federal lawsuit to overturn Minnesota’s ban on political deepfakes, arguing that the law penalises election-related expression and violates First Amendment rights.
For a thoughtful discussion of the negative impact of deepfakes on our courts and society, please see our earlier newsletter on the subject.
In Murthy v. Missouri, the United States Supreme Court addressed the federal government’s ability to communicate with social media companies about content moderation. The Court did not rule on the merits; it held that the plaintiffs lacked standing, leaving the government free to continue flagging misinformation to platforms. Even that outcome sparked controversy about where the line is drawn between legitimate action against falsehoods and government influence over the decisions of private platforms.
In Europe, freedom of expression is safeguarded under Article 10 of the European Convention on Human Rights and the EU Charter of Fundamental Rights. However, the EU does not explicitly define disinformation in statutory text, and enforcement under the Digital Services Act is shared between regulators and the platforms themselves, which are expected to curtail disinformation alongside illegal content.
Scholars have raised serious questions about whether these approaches sufficiently protect free expression rights and whether they empower private companies to act as arbiters of political speech without due process.1
The German Network Enforcement Act, passed in 2017, obliges platforms to remove illegal content within strict timeframes or face steep fines. The law has been widely criticised for creating incentives for platforms to err on the side of removal to avoid penalties, which essentially pushes companies to make judgments about speech that ought to be matters for courts. Critics argue that this dynamic can lead to disproportionate censorship of legitimate expression.
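Seen as arithmetic, the over-removal incentive is straightforward. Below is a minimal sketch in Python of a hypothetical expected-cost model; the figures are illustrative assumptions, not values from the NetzDG or any platform. A cost-minimising moderator removes content whenever the expected fine for keeping it exceeds the expected cost of wrongly removing lawful speech.

```python
# Hypothetical expected-cost model of a platform's removal decision.
# All figures are illustrative assumptions, not values from the NetzDG.

def removal_threshold(fine_if_kept: float, cost_if_removed: float) -> float:
    """Probability of illegality above which removal minimises expected cost.

    Removing is cheaper when p * fine_if_kept > (1 - p) * cost_if_removed,
    i.e. when p > cost_if_removed / (fine_if_kept + cost_if_removed).
    """
    return cost_if_removed / (fine_if_kept + cost_if_removed)

# Symmetric stakes: a court weighing harm against speech acts near 50%.
print(removal_threshold(fine_if_kept=1.0, cost_if_removed=1.0))  # 0.5

# Asymmetric stakes: a large statutory fine against a negligible private
# cost of deleting lawful speech pushes the threshold towards zero.
print(removal_threshold(fine_if_kept=500_000.0, cost_if_removed=100.0))  # ~0.0002
```

The asymmetry does the work: the statute fixes a heavy penalty for leaving illegal content up, while wrongful removal costs the platform almost nothing, so the rational removal threshold collapses and lawful expression is swept away with the unlawful.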
France’s Avia Law, aimed at combating hateful and illegal content, saw major portions struck down by the Constitutional Council because they infringed too broadly on freedom of expression. Yet, the law’s initial passage highlighted how national legislatures are grappling with the challenge of regulating online speech.
These legal debates involve core principles that structure democratic participation. The ability to hear opposing views, to criticise government action, and to engage in public debate is foundational to self-governance.
When legal frameworks alter incentives for speech or create mechanisms that lead to over-removal of content, there is a collateral effect on democratic engagement.
Democracy Beyond False Statements
It is vital to acknowledge that disinformation causes real harm. False narratives and fake news about elections, public health, or civic processes can mislead citizens and erode trust in institutions.
Empirical studies have shown that online propaganda and misinformation contribute to polarisation and societal division.2
At the same time, it is essential to differentiate the regulation of harmful and criminal speech from broad preventive laws that apply across wide categories of expression.
Democracies have long recognised that some kinds of speech, such as incitement to violence or defamation, can be subject to legal limits.
Nonetheless, new laws aimed at digital disinformation often lack clear definitions that distinguish between intentional manipulation and honest error or opinion.
Legal frameworks that do not ensure procedural safeguards or judicial oversight risk granting regulatory bodies or private entities the power to remove speech without sufficient justification.
If laws are designed in ways that blur these fundamental distinctions, then democracy itself suffers. Citizens are less exposed to the full range of perspectives and have diminished ability to challenge prevailing narratives. A society that cannot tolerate disagreement and dissent risks becoming less resilient, less informed, and ultimately less democratic.
Private Actors as Public Regulators
A critical concern in the modern legal regime on disinformation is the role of private technology companies. Laws like the EU digital services framework and the UK’s online safety model rely heavily on what platforms decide and how they enforce rules.
Platforms are required to act to avoid penalties. This effectively deputises private companies to enforce public norms about speech that historically were the domain of public law and independent courts.
Platforms have commercial incentives and policies that are not always transparent or consistent across jurisdictions. A decision to remove content or to label it as false is made within a corporate governance structure not accountable to the public in the same manner as a democratic legislature or a judicial body.
This dynamic raises a question about legitimacy. Democracies rely on transparent and accountable institutions to define and enforce the legal bounds of expression. When private platforms operate in lieu of those institutions on matters that affect political speech, the democratic quality of that enforcement is uncertain at best.
Legislative frameworks that push more power to platforms without robust oversight risk enabling forms of censorship that are neither subject to clear legal standards nor to democratic accountability.
Citizens often do not know why a piece of content is taken down. They have limited rights to appeal, and they encounter different outcomes depending on where they are located or how a platform interprets vague legal duties.
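What meaningful transparency would require can be made concrete. The sketch below is hypothetical: the field names are invented for illustration and are not drawn from any platform’s API or from the DSA’s statement-of-reasons schema, but each field corresponds to information users currently struggle to obtain.

```python
# Hypothetical record of a single moderation decision. Field names are
# illustrative only, not taken from any real platform API or legal schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModerationDecision:
    content_id: str            # the post the decision applies to
    action: str                # e.g. "removal", "label", "demotion"
    legal_basis: str           # statute or policy clause relied on
    human_reviewed: bool       # False if the decision was fully automated
    appeal_deadline: datetime  # when the user's right to contest lapses
    explanation: str           # the reason actually shown to the user

decision = ModerationDecision(
    content_id="post-8841",
    action="removal",
    legal_basis="platform policy 4.2 / national disinformation statute",
    human_reviewed=False,
    appeal_deadline=datetime(2025, 12, 1, tzinfo=timezone.utc),
    explanation="Classified as election misinformation by an automated system.",
)
```

Read against the paragraph above, each complaint maps to a field: users rarely learn the legal basis invoked, whether a human reviewed the decision at all, or how long they have to appeal.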
The International Ripple Effect
Legal approaches in one jurisdiction influence others. The European regulatory model, because of its size and market power, affects global platform behaviour.
US officials have openly criticised the EU Digital Services Act on grounds that it enables censorship beyond European borders and undermines free speech.
These critiques have become part of international diplomatic friction, with officials from one democracy publicly challenging another democracy’s approach to regulating digital speech.
At times, national courts take positions that reshape legal obligations. The Brazilian Supreme Court’s 2025 ruling that social media platforms can be held liable for user posts without judicial orders puts pressure on platforms to act quickly to remove harmful content or face legal risks. Such rulings create asymmetric incentives for moderation that may encourage broad content removal to mitigate liability, again with potential impacts on democratic debate.
Other countries, including in South Asia and Africa, are adopting laws that grant governments sweeping control over content under the pretext of security or misinformation control. These laws often intersect with broader patterns of suppression and have severe consequences for dissenting voices and civic freedoms.
The international proliferation of disinformation laws creates a patchwork of standards that technology companies must navigate.
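To make the patchwork concrete, the sketch below shows the kind of jurisdiction-by-jurisdiction branching a compliance system ends up encoding. The entries are deliberate caricatures of the regimes discussed above, not accurate statements of any statute, and the identifiers are hypothetical.

```python
# Deliberately crude caricatures of divergent national regimes; none of
# these entries is an accurate summary of what the real statutes require.
POLICY_BY_JURISDICTION = {
    "US": "leave up unless defamatory or otherwise unlawful",
    "DE": "remove manifestly illegal content within 24 hours",
    "SG": "append a government-ordered correction notice",
    "BR": "remove promptly or risk liability without a court order",
}

def handle(post_id: str, jurisdiction: str) -> str:
    """Return the compliance action for one post in one country."""
    action = POLICY_BY_JURISDICTION.get(jurisdiction, "no specific duty")
    return f"{post_id} in {jurisdiction}: {action}"

# The same post triggers four different compliance paths:
for country in POLICY_BY_JURISDICTION:
    print(handle("post-8841", country))
```

Because each branch carries different legal risk, the cheapest engineering response is often to apply the strictest rule everywhere, which is one mechanism by which the most restrictive national models spread globally.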
In the absence of clear, globally accepted norms that respect free expression while addressing harm, there is a risk that legal models that prioritise control over speech will be adopted more widely, with negative consequences for democratic engagement.
What Democracy Requires
Democracy depends on open public discourse where ideas can compete and be tested against evidence and argument. The impulse to regulate falsehood is understandable, yet legal solutions must be calibrated carefully to respect foundational rights. Laws that are vague in definition, broad in scope, or lack adequate procedural protections risk being repurposed to silence dissent or to restrict controversial opinions that are essential to democratic deliberation.
Legal frameworks to counter disinformation should ideally operate with clear definitions, predictable procedures, and robust safeguards that protect legitimate political speech. They must uphold independent oversight and judicial review so that when speech is restricted, there is accountability and transparency in the process.
The current wave of disinformation regulation reflects a genuine concern about the impact of false information, but there is a real danger that in addressing this issue through broad legal authority, democracies may erode the very principles of free expression and accountability that make them resilient.
Links in this article reference public legal developments and reporting from credible sources. Readers are invited to examine the original materials themselves to further understand the content and context of the legislative and judicial activities discussed.
What are your thoughts about disinformation and fake news undermining democratic practices?
1. Husovec, M. (2024). The Digital Services Act’s red line: what the Commission can and cannot do about disinformation. Journal of Media Law, 16(1), 47–56. https://doi.org/10.1080/17577632.2024.2362483. See also Eskens, S. (2025). The role of the Regulation on the transparency and targeting of political advertising and European Media Freedom Act in the EU’s anti-disinformation strategy. Computer Law & Security Review, 58. https://doi.org/10.1016/j.clsr.2025.106185
2. Gupta, M., Dennehy, D., Parra, C. M., Mäntymäki, M., & Dwivedi, Y. K. (2023). Fake news believability: The effects of political beliefs and espoused cultural values. Information & Management, 60(2), 103745. https://www.sciencedirect.com/science/article/pii/S0378720622001537. See also Lana-Blond, R., & García-Saiz, M. (2025). The Fake News and Polarization Landscape: Scoping Review of Fake News Research and its Link with Attitude Polarization. The Spanish Journal of Psychology, 28, e19. https://doi.org/10.1017/SJP.2025.10010




