Newsletter Issue 86: Deepfake technology destabilises courtroom trust by weakening confidence in recorded evidence and exposing structural limits of traditional evidentiary assessment.
Thanks for the detailed and interesting analysis. I think that AI manipulation of documents, audio, and video presents a big problem in and out of the courtroom. I understand that there is already some AI technology designed to detect AI manipulation, and I assume that such technology will get better, as there is certainly demand for it. Courts will probably continue using forensic expert testimony, and the experts will be using specialized detection AI. The problem will not be solved, but it can be managed. It's the usual cat-and-mouse game of tech and counter-tech.
You are absolutely right that detection tools are improving and that courts will likely rely on forensic experts who themselves use increasingly specialised AI systems. That is already happening in some jurisdictions. What feels important to underline, though, is that detection technology does not restore the earlier position, where reliability could be assumed once an expert was involved. Detection remains probabilistic, resource-intensive (prohibitively so in low-income countries), and unevenly accessible. Managing the problem, as you put it, is likely the realistic outcome. The deeper issue is that courts must now operate while openly acknowledging residual doubt, rather than treating expert confirmation as a definitive endpoint. I think that changes, or at least should change, how evidence is weighed, explained, and trusted, even when the process works as well as it possibly can.
Thanks for sharing this article. Do you see this problem emerging more in criminal cases or civil disputes?
That's an interesting question. To my mind, it will emerge more frequently in civil disputes, not necessarily because the consequences are lighter, but because the evidence thresholds are lower and the volume of digital material tends to be much higher. Civil cases also increasingly rely on recordings of meetings, messages, workplace interactions, and online communications, many of which are informal, poorly preserved, and detached from clear chains of custody. That environment makes contested authenticity (vis-à-vis deepfakes) easier to introduce and potentially harder to resolve. Criminal courts will also feel the impact sharply when it arises, but procedural safeguards, disclosure duties, and higher standards of proof will, I think, somewhat slow the spread.
Excellent piece! Thank you for articulating this so well. I will add that many of these issues are exacerbated in low-income countries and some middle-income ones. AI is available anywhere there's an internet connection (look at the Yahoo Boys' use of AI in Nigeria, or the RSF in Sudan), but the ability of the justice system to utilize experts is almost non-existent. Deepfakes are going to be bad in the UK, Europe, China, etc. They could lead to devastating miscarriages of justice in countries with no access to AI experts.
I couldn't have said it better! The availability of AI tools does not correlate with the capacity of courts to interrogate them, and that imbalance is far more acute in low-income and many middle-income jurisdictions, as you indicate. Where expert evidence is scarce, underfunded, or institutionally distrusted, the problem becomes intrinsically structural. The danger is not only that fabricated material enters proceedings, but that genuine evidence is discounted because courts lack the resources (not least financial) to evaluate authenticity with confidence. That combination increases the likelihood of serious error in ways that are difficult to remedy after the fact. This point deserves much more attention in comparative and international discussions of AI and justice, particularly outside the usual focus on well-resourced legal systems such as the UK and US.
I couldn't agree more! Good point about the difficulty of remedying errors after the fact, too, and well noted on the intrinsically structural nature of the problem. It's a deep and complex problem that is not going to be solved with education alone (as you note in your article).