Why Governments Are Tempted to Replace Judges With Automated AI Systems
Newsletter Issue 88: Governments consider AI judges to reduce cost and delay, yet risk undermining fairness, transparency, accountability, and public trust in justice.
Governmental interest in automated AI judges is driven less by justice and more by administrative convenience, fiscal pressure, and institutional control. This newsletter exposes how AI promises speed, uniformity, and predictability while threatening transparency, accountability, and human judgment. The analysis challenges the assumption that efficiency improves justice, warning that delegating adjudication to AI risks transforming courts into instruments of bureaucratic optimisation rather than guardians of legal legitimacy.
Newsletter Issue 88
The Administrative Fantasy of Justice Without Human Intervention
I might raise some eyebrows by saying this, but some governments appear eager to place your fate in the hands of an algorithm. In fact, in China, millions of legal cases have already been decided by “Internet courts” staffed with non-human judges operating entirely online.
These AI-driven cases handle everything from e-commerce disputes to loan defaults, and remarkably, 98% of their rulings have been accepted without appeal.
It might be startling to learn that a court with no human judge present can ask questions, weigh evidence, and issue verdicts via a holographic avatar, as demonstrated in Hangzhou’s Internet Court.
The system even runs 24 hours a day, 7 days a week – a round-the-clock justice delivery that no human judge could ever match.
When I first heard about this, I felt equal parts fascinated and uneasy. I want to talk to you about why some governments are so tempted by the idea of AI judges, and what it means for the future of justice.
The Allure of Automated Justice
I have spent enough time around the legal system to know that efficiency is the magic word that every government loves to hear. Courtrooms around the world face crushing case backlogs and slow proceedings. The notion of an AI “robot judge” promises lightning-fast case processing, potentially clearing dockets that human judges struggle to manage.
Some time ago, media reports breathlessly announced that Estonia was building exactly such a robot judge to handle small claims disputes and help clear its backlog. Officials supposedly wanted an algorithm to analyze documents and reach decisions on claims under €7,000, which a human judge would then simply review.
The project was touted as one of the most advanced examples of AI in the judiciary: justice by artificial intelligence. What exactly was the promise? Quicker resolutions, lower costs, and no more waiting months for a court date.
We later learned that this story was more hype than reality. In 2022, the Estonian Ministry of Justice felt compelled to clarify that it had never had any project, or even the ambition, to replace human judges with AI.
That did not stop the idea from capturing imaginations. And frankly, it is easy to see why the idea is seductive.
Automating routine court decisions could free up human judges for the hardest cases and eliminate mundane tasks.
In theory, an AI judge might churn through straightforward disputes with detached precision. One enthusiastic commentator described the potential “cost savings” as “astronomical”, noting that an AI system could issue a preliminary decision in under a month, and that unlike a human it “is not hungry or tired”.
Governments hear things like that and start picturing leaner budgets and ultra-efficient courts.
Consistency and objectivity are another part of the allure. Ask any lawyer and they will tell you that different judges can reach very different outcomes on similar facts.
Humans are fallible and sometimes inconsistent, but an AI algorithm can be replicated and standardized nationwide. In China, the ruling Communist Party has explicitly embraced “Smart Courts” as a way to improve consistency in judicial outcomes and increase public trust in the courts.
From the perspective of central authorities, if every courtroom uses the same AI system, you eliminate rogue judges who might deviate from policy and you ensure each defendant is subject to the same decision criteria everywhere.
The temptation here is control: AI systems can be centrally updated or monitored, giving governments more oversight over the judicial process.
In an authoritarian system, that consistency also conveniently enables centralized oversight and intervention in cases when needed.
Even in democracies, one can see the appeal of a tool that might render justice more predictable – no more “judge lottery” where your fate depends on who happens to preside.
There is also a public perception angle. Many people have stories of judicial errors, biases, or high-profile miscarriages of justice. An AI, on the surface, carries none of the human baggage; it does not have bad days, it does not hold grudges, it does not get star-struck or prejudiced.
Some technocrat policymakers genuinely believe that a well-designed AI could be more impartial than a human judge. For example, in traffic or parking fine disputes, an algorithm might decide purely on data (Was the car parked too long? Did the camera catch the license plate?) without being swayed by a defendant’s attitude or appearance.
We have already seen glimpses of this: a UK-developed chatbot lawyer app famously overturned 160,000 parking tickets by automatically generating appeals, exploiting the system’s own rules.
If an AI can beat humans at following legal rules for parking fines, why not let it decide them in the first place?
Also, let us not forget speed and accessibility. A digital judge could in theory handle cases via video or online submissions, delivering rulings in hours rather than weeks.
During the COVID-19 pandemic, courts around the world turned to Zoom and online systems out of necessity. This accelerated the acceptance of online dispute resolution.
Tech-forward nations like China took it further. Their Internet courts allow citizens to file cases online, attend hearings by video chat, and receive judgments electronically.
Such systems can dramatically lower the barrier to accessing justice, especially for people in remote areas or with small claims not worth a costly legal battle.
Proponents say AI judges are a natural next step to make the justice system more user-friendly. No intimidating courthouse, just an app on your phone that issues a ruling.
In Beijing’s Internet Court, the average case reportedly concludes in just 40 days, with hearings under an hour on average.
Imagine a civil claim being filed, argued, and resolved in a matter of weeks entirely through an online platform. That is the vision.
Reality Check: Experiments in AI Judging
We should acknowledge that so far, fully automated judges remain mostly experimental.
China is the clear frontrunner. Since 2017, its specialized Internet Courts have been operating in cities like Hangzhou, Beijing, and Guangzhou.
The “judges” in these courts are effectively AI-powered decision systems with a virtual judge interface. They handle a surge of cases tied to the digital economy (online commerce disputes, copyright for online content, etc.), and by all accounts the volume is immense.
In just a six-month period in 2019, users conducted over 3.1 million legal activities through the Hangzhou Internet Court platform.
One striking statistic from Beijing is that over a million people have used the system, yet appeals are extraordinarily rare; nearly all parties accept the AI’s verdict.
The platform even presents a life-like avatar of a judge, a 3D-modelled person in judge’s robes, who can ask the litigants questions on screen. Chinese reporters have sat in on these virtual hearings and watched a holographic judge guide the proceedings.
Other countries have flirted with AI judging in narrower contexts. You might have heard of proposals to use AI for small claims and disputes.
While Estonia’s “robot judge” turned out to be a myth, the very idea sparked global discussion.
There is a kernel of reality in these ideas: Online dispute resolution (ODR) systems are growing, and they often involve automated decision-making for low-stakes issues.
For instance, Canada has experimented with ODR for simple civil claims and certain tribunals, where an algorithm or guided system helps the parties reach a resolution or even renders a proposed outcome.
These systems stop short of a final binding judgment by AI, but they inch close. If an AI mediator proposes an outcome and both sides accept, effectively the AI “decided” the case.
In the United States, meanwhile, we see a different path: using AI to assist judges rather than replace them, at least for now. American courtrooms in several states have started integrating algorithmic risk assessment tools to guide decisions on things like bail and sentencing.
These programs, such as the well-known COMPAS system, generate scores predicting a defendant’s likelihood of reoffending. Judges then consider those scores when deciding whether someone gets bail or how long a sentence to give.
In theory it is just advisory, but in practice it can heavily sway outcomes. This approach has been contentious – I will get to the controversies in a moment – but it shows how even in a justice system that values jury trials and human judges, some decision-making is being handed to algorithms.
In fact, the Wisconsin Supreme Court confronted this issue in a notable case, State v. Loomis, 881 N.W.2d 749 (Wis. 2016), in which a defendant argued it was unfair that an opaque algorithm helped determine his prison sentence.
The court upheld the sentence but cautioned that such risk scores should be only a supplementary factor and must not replace a judge’s own judgment.
The message from the ruling was essentially that you can use the AI’s recommendation, but the human judge remains responsible for the decision.
That is a critical point. So far, even the most ambitious experiments leave a human with ultimate responsibility. For instance, China allows appeals to a human judge if someone disagrees with the AI outcome.
The vision of completely replacing judges in all cases remains more of a thought experiment than a reality in 2026. However, every year, the line nudges forward.
Just recently, I saw news of two U.S. judges who admitted their staff had used ChatGPT-style AI tools to draft portions of judicial orders, with disastrously erroneous results that slipped through. This caused quite a stir, prompting calls for clearer rules on how judges and clerks may use AI.
Indeed, it is a reminder that even if we are not handing the gavel to an online judge today, AI is already seeping into the judicial process in bits and pieces, for better or worse.
Blind Justice or Biased Algorithms?
Before we all warm up to the supposedly inevitable, we need to talk frankly about the serious concerns and limitations of AI as a judge.
The promise of impartiality from an AI is not a given at all. Yes, an AI comes without human emotions, but it inherits the biases of its data and programming.
That risk is not hypothetical. In the United States, the use of COMPAS and similar sentencing algorithms has become a cautionary tale. Investigative journalists discovered that COMPAS was far more likely to falsely label Black defendants as high-risk future criminals compared to white defendants. Think about that: a supposedly objective tool was roughly doubling the rate of false “future crime” flags for Black individuals.
Meanwhile it was more often underestimating risk for white defendants, incorrectly giving them low risk scores.
The bias was not deliberate in the sense of someone programming racism openly; it emerged from patterns in the underlying data and how the algorithm was developed.
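The audit logic behind such findings is simple to sketch. The snippet below computes the metric at the heart of that reporting, the false positive rate per group: among people who did not reoffend, what share were nevertheless flagged high-risk? The numbers here are synthetic, chosen only to illustrate the shape of a disparity, not drawn from the actual COMPAS dataset.

```python
# Illustrative audit sketch with synthetic numbers, not real COMPAS data.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Two hypothetical groups of 100 non-reoffenders each, with different
# counts of false "high risk" flags from the model.
group_a = (
    [{"high_risk": True, "reoffended": False}] * 45   # falsely flagged
    + [{"high_risk": False, "reoffended": False}] * 55
)
group_b = (
    [{"high_risk": True, "reoffended": False}] * 23
    + [{"high_risk": False, "reoffended": False}] * 77
)

fpr_a = false_positive_rate(group_a)
fpr_b = false_positive_rate(group_b)
print(f"Group A FPR: {fpr_a:.0%}, Group B FPR: {fpr_b:.0%}")
# A disparity like this can emerge even when the model never sees a
# protected attribute, because correlated features in the training
# data act as proxies for it.
```

The point of the sketch is that no line of this code mentions race; the disparity lives entirely in how the labels were distributed by the upstream model.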
From the perspective of a defendant who is denied bail because a computer said so, the distinction does not matter; it simply feels like algorithmic prejudice.
This is the nightmare scenario for automated justice: that we might replace human bias with a hidden, harder-to-challenge bias embedded in code.
Another glaring issue is transparency and the right to explanation. In a courtroom, a human judge is expected to explain their reasoning in a judgment. How do you question an algorithm’s reasoning?
Many AI models, especially advanced machine learning ones, operate as a “black box”. They crunch inputs and produce outputs, and not even the developers can easily explain the exact decision path.
This was at the core of Mr. Loomis’s complaint in the Wisconsin case: he had no way to challenge the COMPAS algorithm because its inner workings were proprietary and opaque.
It is fundamentally troubling to imagine standing in court, hearing “the computer says you are high risk, so you get a harsher sentence,” and not being able to assess that computer’s logic.
Due process in many countries includes the right to review and contest evidence, but does an algorithm’s output count as evidence? If so, can the defendant demand access to the source code?
These legal questions are largely unresolved. There is a growing movement insisting that “algorithmic accountability” and transparency must accompany any use of AI in government decisions affecting rights.
Without transparency, an AI judge would be an inscrutable authority, which undermines a fundamental principle of justice. Decisions must not only be fair, but be seen to be fair and reasoned.
We also have to consider empathy and moral judgment, qualities that we expect, or at least hope, human judges will possess.
A judge in a courtroom can look a defendant in the eye, can consider pleas for mercy, can weigh intangible factors like remorse. Can an AI do that? Not really, not yet.
An algorithm scores what it is given to score. It might be great at parsing legal texts and past cases, but justice is more than black letter law application.
There are fears that an automated judge would be ultra-strict; after all, a computer will not bend a rule or overlook a nuance unless explicitly programmed to.
It has been argued that mercy, context, and human intuition are indispensable to justice, and these are things no AI can replicate. I find it hard to imagine an automated system exercising equitable discretion or interpreting a law in light of evolving social values.
At best, it will do what it’s told; at worst, it might rigidly enforce even unjust laws without question.
A chilling thought is that an authoritarian government could program an AI judge to enforce oppressive laws to the letter, with flawless efficiency and no dissent. Human judges can resist immoral orders; AI might not have that capacity.
The accountability problem looms large too. If a human judge makes a grievous error or is biased, there are mechanisms (imperfect as they are) to appeal, to seek review, even to impeach or discipline the judge.
If an AI judge makes an error, say, convicting an innocent person due to a glitch or bias, who is responsible? The company that developed the software? The government officials who deployed it? It becomes a blame-shifting fiasco.
Also, how would such an error even be discovered? If all judges were algorithms applying the same model, a systemic flaw could go unnoticed for a long time, affecting thousands of cases before anyone realized. That scenario should keep us up at night.
Collision Course with the Law
You might be wondering whether governments can even legally hand over judging to AI. In many places, the idea runs into legal roadblocks, and rightly so.
For example, in Europe there are strong legal guarantees of a fair trial and human accountability. The European Union’s data protection law (GDPR) explicitly gives people the right not to be subject to a decision based solely on automated processing if it has significant effects on them.
A judgment in court is arguably the most significant decision one could face.
This means that under current EU law, you can insist on a human review of an automated decision.
A fully AI judge with no human involvement would be at variance with that right, unless EU countries carve out an exception in law. Indeed, the EU is actively working on regulations for AI.
The EU Artificial Intelligence Act classifies any AI used in judicial decision-making as “high-risk”, recognizing the profound impact on rule of law and individual rights.
EU lawmakers have even inserted a clear principle into the latest draft: AI may support judges but must not replace them; the final decision must always be made by a human.
In other words, European policy is drawing a red line that no matter how advanced AI gets, an actual person should be the one delivering the verdict when it comes to dispensing justice.
This aligns with the prevailing view in international human rights law that everyone is entitled to have their case heard by a “competent, independent and impartial tribunal”, phrases which presume a human judge sitting and listening.
Outside Europe, these debates are just as alive. In the United States, there is no single law like the GDPR’s Article 22, but the judiciary itself is proceeding cautiously. A U.S. federal judicial task force recently warned judges not to delegate core judicial functions to AI, especially not the act of adjudicating a case.
The concern is that even if AI can help with research or drafting, the judge must remain fully responsible and engaged in the decision.
Influential voices in the legal community are emphasizing that AI should assist, not replace human judgment.
Even China, for all its tech enthusiasm, maintains at least a fig leaf of human involvement: as noted, one can appeal an AI court’s decision to a human judge, and Chinese officials portray their Smart Courts as enhancing efficiency while still maintaining human oversight.
The official line is that technology should support the decision-making power of judges, but not supplant it.
Of course, the reality of how much human oversight is actually happening in China’s Internet Courts might be debatable, but at least on paper, they acknowledge a human backstop.
It is interesting that even governments keen on automation still feel the need to reassure the public that judges are not being entirely replaced.
That tells us something: there is a deeply ingrained cultural and constitutional attachment to the human judge as a symbol of justice.
Replacing that human figure with an AI is bound to provoke resistance. Indeed, when the idea of the robot judge in Estonia made headlines, it sparked plenty of discussion in Europe about the ethics and legality of such a move.
The pushback from the legal community was swift, and as we saw, the government backed away, calling it misleading news.
Ironically, at least one country has gone the other way, passing a law to protect human judges from being profiled and quantified by AI.
While some authorities fantasize about AI judges to streamline justice, others are enacting laws to shield the human element in judging.
A Careful Path Forward
As someone deeply fascinated by the intersection of law and technology, I understand both the temptation and the fear at play here. When talking to non-lawyers about this, I often find people initially intrigued. Who would not want faster and cheaper court cases?
Then arrives the viscerally uncomfortable thought of an AI-driven robot pronouncing a prison sentence or deciding a child custody dispute. That discomfort is healthy. It means we recognize something vital: justice is not merely computational. The rule of law is as much about process and trust as it is about outcomes.
We trust a human judge to hear us, even if they might err, because we ascribe legitimacy to that human deliberation. Transferring that trust to an AI is a huge leap.
Governments will likely continue to experiment with AI around the edges of the judicial process.
We will see more AI legal assistants that help judges research cases or draft routine parts of judgments.
We will see expanded online courts for minor issues, possibly with automated mediation and suggested resolutions.
Be that as it may, there is a long road from there to a world where you walk into a courthouse (or log into one) and face an AI judge as the final arbiter.
For the foreseeable future, fully autonomous AI judges remain a controversial proposition fraught with legal and moral peril.
Even the most advanced AI cannot yet replicate the full spectrum of human judgment, especially the qualities of compassion, wisdom, and public accountability that we expect from our judges.
I have confidence that many of the safeguards in law will hold, and that any use of AI in judging will be accompanied by the right to a human appeal or review.
International principles are already crystal clear on one thing: technology should serve humanity, not displace it in the courtroom.
So, is an AI judge coming for your day in court? The honest answer is that governments are tempted, yes, but also wary. They stand to gain efficiency and control, yet they risk losing the public’s faith in justice if they go too far.
For now, I am relieved that the consensus is to move carefully. We should demand nothing less. The day may come when AI takes its place on the bench, but if and when that happens, I suspect we will still want a human behind the gavel of justice.
But at what point does efficiency in adjudication undermine the constitutional purpose of courts?