How to determine the admissibility of AI-generated evidence in courts?
Types of AI-generated content
Predictive AI models can provide insights into future events, biometric systems aid identification, and AI transcription services convert audio into written transcripts for court evidence. These are only some examples of AI-generated evidence.
Judges face challenges in evaluating the admissibility of such evidence, with concerns about its reliability, transparency, interpretability, and bias. This challenge becomes even more salient with the use of generative AI systems, which are contributing to misinformation and disinformation at scale. One example of such AI-generated content is the image of the pope wearing a white, puffy jacket, which many viewers took to be genuine.
Key questions for judges and lawyers
Now imagine an image portraying a political leader engaging in criminal activity. In such a scenario, how would a lawyer or judge demonstrate the authenticity of the image? How can a judge determine whether the image is AI-generated rather than real? Beyond the numerous risks that affect the authenticity and reliability of evidence, the opacity of AI algorithms hampers transparency, while bias in training data can lead to discriminatory outcomes. The absence of standard guidelines on how to verify AI-generated evidence further complicates the decision-making process.
Self-driving cars present another real-world example of the challenges related to electronic evidence. For instance, there is uncertainty around how a drowsiness detector's data could be used in inquisitorial or adversarial justice systems to determine liability for an accident. How will this data be made available for criminal investigation? Would machine data based on human-machine interaction count as evidence? We must assess the accuracy and limitations of the AI system's data, determine responsibility in the event of accidents or disputes, and understand the reasoning behind the system's decisions.
Role of Judicial Operators
Judges play a crucial role in assessing the admissibility of AI-generated evidence, and they must learn to navigate the intricacies of AI to make well-informed decisions regarding its admissibility. Judges should develop an understanding of the algorithms involved, the specific data used for training, AI principles, biases, and the potential misuse of AI systems, such as deepfakes.
Recognizing the need for discussion and capacity building on this topic, UNESCO organized a webinar titled "The Admissibility Challenge: AI-Generated Evidence in the Courtroom". Held in collaboration with the Inter-American Court of Human Rights (Costa Rica), the National Judicial College (United States), Lawyers Hub (Kenya), and the Center for Communication Governance, National Law University (India), the webinar tackled the complexities surrounding the admissibility of AI-generated evidence.
The event brought together around 400 participants and a diverse panel of experts who engaged in insightful discussions on this pressing topic. The speakers included Isabela Ferrari, Federal Judge of Brazil; Prof. Sabine Gless, an expert in criminal law and criminal procedure; Judge (retd.) Paul Grimm, Professor at Duke Law; Dr. Andrew Rens, Senior Research Fellow with expertise in information communication technology; and Stephen Mason, expert on electronic signatures and electronic evidence.
Additional Resources proposed by the experts
Brian W. Esler, Digital Evidence and Electronic Signature Law Review, Volume 4, 2007.
Stephen Mason, Singapore Academy of Law Special Issue on Law and Technology, (2021) 33 SAcLJ 241-279, 4 Oct 2021.
Paul W. Grimm and Kevin F. Brady, 2018.
Northwestern Journal of Technology and Intellectual Property, Volume 19, Issue 1, 2021.
Daniel G. Brown and Molly Xu, Duke Law & Technology Review, Vol. 23, No. 1, May 2023.
Sabine Gless.
Jurimetrics, Vol. 62, Issue 3 (2022), 285-302, Oct 2022.
Stephen Mason and Daniel Kiat Boon Seng, Series Observing Law, 2017.
Open-source practitioner text for judges, lawyers and legal academics:
Stephen Mason and Daniel Seng, editors (5th edition, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study, University of London, 2021).
Open-source journal, also available via the HeinOnline subscription service.