Types of AI-generated content
Predictive AI models can offer insights into future events, biometric systems aid in identification, and AI transcription services convert audio recordings into written transcripts for use as court evidence. These are only some examples of AI-generated evidence.
Judges face challenges in evaluating the admissibility of such evidence, with concerns about its reliability, transparency, interpretability, and bias. This challenge becomes even more salient with the use of generative AI systems, which are contributing to misinformation and disinformation at scale. One example of such AI-generated content is the widely shared image of the pope wearing a white puffy jacket, which many viewers took to be genuine.
Key questions for judges and lawyers
Now, imagine an image portraying a political leader engaging in criminal activity. In such a scenario, how would a lawyer or judge demonstrate the authenticity of that image? How can a judge determine whether the image is AI-generated or real? Beyond the numerous risks that affect the authenticity and reliability of evidence, the opacity of AI algorithms hampers transparency, while bias in training data can lead to discriminatory outcomes. The absence of standard guidelines on how to verify AI-generated evidence further complicates judicial decision-making.
Self-driving cars present another real-world example of the challenges surrounding electronic evidence. For instance, there is uncertainty around how a drowsiness detector's data could be used in inquisitorial or adversarial justice systems to determine liability for an accident. How will this data be made available for criminal investigation? Would machine data based on human-machine interaction count as evidence? Courts must assess the accuracy and limitations of the AI system's data, determine responsibility in the event of accidents or disputes, and understand the reasoning behind the system's decisions.