Newsletter Issue 87: Disinformation laws risk weakening democracy when legal and political powers over online speech undermine free expression, accountability, and open public debate.
This is a superb mapping of the legal and institutional risks around current disinformation laws: the delegation of speech control to platforms, the vagueness of statutory terms, and the weak procedural safeguards around removal and appeals leave me very disappointed in elected officials.
The last decade of empirical work on people's relationship to information shows that belief and sharing are driven less by the sheer availability of false content and more by well‑documented cognitive and social mechanisms: confirmation bias, motivated reasoning, illusory truth through repetition, affective polarisation, emotional arousal, and in‑group identity signalling, to name a few. Once those are in play, people are not simply passive recipients of lies; they actively co‑produce and defend them because the content serves identity, belonging, and emotional needs as much as informational ones.
Yet these laws, which annoy me, miss that entire vector, which means they do not solve the problem.
Thank you for the article. I oddly feel… seen? Yeah, I'll go with that word.
Thank you for this generous and thoughtful engagement. You are right to locate the problem at the institutional level rather than the informational one. Law is being asked to correct social and cognitive dynamics that it does not fully understand and is poorly equipped to manage. Delegating speech control to platforms while neglecting procedural safeguards creates a structural deficit that no volume of enforcement can repair. I would be keen to check out the further empirical literature you suggested, which I think is crucial because it reminds us that belief formation is active, not passive. Regulation that ignores those dynamics risks symbolic compliance rather than substantive democratic resilience.
I would suggest these as a start. They are on the dense side, but well worth the effort.
Pennycook & Rand, “The Psychology of Fake News” (Trends in Cognitive Sciences, 2021) gives a concise overview of why people believe and share misinformation, covering analytical thinking, motivated reasoning, and social identity dynamics.
Zhou, “Processing of misinformation as motivational and cognitive…” (Frontiers in Psychology, 2024) frames misinformation processing as a mix of motivational and cognitive biases, emphasising that people are not passive recipients of content.
“Cognitive Drivers of Misinformation Belief and Sharing on Social Media” (2025) tests analytical thinking, news literacy, and conspiratorial thinking across US, UK, and Hong Kong samples.
Where these are thinner is on the legal side, as I don't know how to bridge the gap for law. That is something that needs to be done, but the way it is currently being done seems badly aligned with the psychology. My own bias here is that lawmakers, even when rightly elected, often design laws to appear effective. Why? Because visible action carries political currency for re‑election and for deflecting criticism. That bias makes me sceptical of symbolic, optics‑driven regulation, so I have to be cautious about over‑reading intent.
This is so helpful, thanks. The literature you cite makes the gap very clear, and I agree that law has not yet internalised these cognitive findings in any serious way.
Political speeches aside, while I agree that platforms play a vital role in regulating disinformation, and that this role is to a large extent incentivised, we are nonetheless seeing a new model, especially in Africa and particularly in Nigeria, where disinformation has been officially criminalised.
Not only that: more recently, the courts treat disinformation as a privacy issue, penalising platforms for processing inaccurate personal information about data subjects through algorithmic promotion of such content.
This ought not to be so, because it further deepens the already unbridled role of platforms as public regulators, as you rightly mentioned.
I’m just curious to hear your thoughts on the last issue I mentioned.
I think that last point is where things become genuinely complicated, and I agree with your points entirely. Reframing disinformation as a privacy or data protection issue does not just expand regulatory reach; it changes the logic of enforcement. Once inaccurate speech is treated as unlawful data processing, platforms start acting defensively as compliance actors rather than as hosts (or keepers) of public debate. In a context like Nigeria, where disinformation has been criminalised and where the 2021 Twitter ban showed how quickly executive power can be exercised over platforms, that combination seems especially worrying, don't you think? Especially considering how little space is left for proportionality or public interest reasoning. It's a sensitive balance to attain.
I agree. The Twitter ban was an overreach. It shows how executive power can be used to curtail freedom of speech. Imagine the right to freedom of speech of over 200 million people being limited by a small number of actors.
This brings up another element: how can freedom of expression be protected when disinformation laws operate like this, through automated moderation systems? Very interesting article.
Appreciate your comment. The tension you mentioned is exactly where the democratic risk concentrates. Once enforcement moves from courts to platforms, liability logic naturally starts to replace constitutional reasoning. Emergency powers then magnify this problem, because temporary exceptions tend to normalise discretionary control over political speech long after the crisis passes.
Thank you. Extremely informative.
Thanks for reading, David. Glad you enjoyed it.