
Image generated by ChatGPT
Artificial intelligence is no longer confined to administrative tasks or legal research. It is now entering one of the most sensitive areas of the legal system: criminal justice. From police investigations to courtroom evidence, AI is beginning to influence how cases are built, analyzed, and challenged.
These developments were at the center of a recent webinar titled AI in Criminal Cases: The Courts’ Role in Preserving Constitutional Rights. The session is part of the AI and Courts series, co-hosted by the National Center for State Courts (NCSC) and the Thomson Reuters Institute. The series brings together legal professionals, judges, and technologists to explore how AI is reshaping justice systems.
In this context, a central question arises: how can courts integrate these technologies without compromising fundamental legal principles?
Rather than focusing on abstract risks, the discussion examined how AI is already being used in practice and what this means for courts, lawyers, and individual rights. One message, however, came through clearly: AI can support the system, but it cannot replace human judgment.
AI Across the Criminal Process
AI is now present at multiple stages of criminal cases. In practice, law enforcement uses it to analyze large volumes of data, including financial records and digital evidence. Among these uses, facial recognition is one of the most visible applications.
On the prosecution side, AI helps manage complex evidence. It can process large datasets, translate materials in different languages, and organize timelines for trial. These tools have already been applied in complex cases involving multilingual evidence, helping to significantly reduce the time required for analysis.
More broadly, these technologies continue to evolve. What began as single interactions with large language models is now shifting toward systems where multiple AI “agents” operate in parallel. These agents can function as researchers, assistants, or drafting tools working together, making AI-assisted work both faster and more efficient.
In addition, AI is used to review body camera footage and identify sensitive information before disclosure, which helps manage increasingly data-heavy cases more effectively.
At the same time, this rapid adoption of AI has not always been accompanied by clear institutional policies. In some jurisdictions, concerns about accuracy have already led prosecutors to refuse cases relying on AI-generated police reports, particularly when those reports are derived from automated transcription or summarization tools.
AI Capabilities and Their Limits
AI systems continue to evolve rapidly, offering increasingly sophisticated capabilities. They can generate reports, identify patterns, and assist with legal documents.
However, these capabilities must be approached with caution, as several limitations remain:
- hallucinations and inaccurate outputs
- lack of transparency in how results are generated
- uncertainty about data sources
- risks related to data security
The issue of transparency is particularly significant. In many cases, AI systems function as “black boxes,” making it difficult to understand how a result was produced. This situation, however, is not entirely new. Similar legal tensions have emerged in the past with technologies such as breathalyzer machines, where manufacturers resisted disclosing underlying methods in order to protect trade secrets.
A comparable dynamic is now emerging with AI, creating a fundamental tension between corporate intellectual property and a defendant’s right to challenge the evidence used against them.
As a result, human verification remains essential at every stage.
These concerns are not merely theoretical, as cases of wrongful arrest linked to overreliance on facial recognition have already demonstrated the risks of treating AI outputs as conclusive.
AI must therefore be understood as a tool, not a source of truth.
Bias, Fairness, and Equal Protection
Concerns about bias remain central to the use of AI in criminal justice. Risk assessment tools such as COMPAS have shown that algorithmic systems can reproduce and reinforce existing inequalities. In particular, they may disproportionately classify certain groups as high risk, raising serious concerns about fairness.
Bias can emerge at multiple levels:
- in the training data used to develop the model
- over time as systems evolve
- in the interpretation of outputs
- in the way systems are used in practice
In other words, bias is not only a technical issue, but also a structural and human one.
As a result, without careful oversight, the use of AI risks undermining core principles of equal protection and fairness.
New Forms of Evidence in Court
Courts are increasingly required to assess new forms of evidence that traditional legal frameworks were not designed to address.
Technologies such as virtual reality, AI-enhanced video, and synthetic media are now being introduced in legal proceedings. While these tools may offer valuable insights, they also raise complex questions about admissibility, reliability, and probative value.
In this context, courts must determine:
- how AI-generated or AI-enhanced evidence should be authenticated
- what standards should govern its admissibility
- how the underlying technology should be explained
In response, more proactive approaches are beginning to emerge. Pretrial orders may require parties to disclose whether AI was used to gather or modify evidence and to explain how it was applied.
Beyond this, a deeper legal issue arises regarding the burden of proof. When evidence is suspected of being AI-generated or altered, it is not always clear which party must establish its authenticity or challenge it. This uncertainty reflects broader discussions about adapting evidentiary rules to account for machine-generated content.
At the same time, a notable contrast remains. While the evidence presented in court is becoming increasingly technological, juries continue to evaluate that evidence using traditional methods, often without access to tools that could assist their understanding.
Deepfakes and Digital Evidence
As digital technologies evolve, evidence is becoming increasingly difficult to verify. Images, videos, and messages can now be modified or generated with ease, which makes assessing their authenticity significantly more challenging. This challenge is particularly significant in cases involving self-represented individuals, where altered or fabricated materials may be submitted without any clear means of verification.
In response, more rigorous verification practices are beginning to emerge:
- requesting contextual sequences of images, such as those taken immediately before and after
- examining metadata, including Exif data and device identifiers
- verifying storage sources and file history
- relying on expert testimony to explain how content was produced or altered
For this reason, digital evidence must be carefully presented and clearly explained.
The Fourth Amendment in the Age of AI
AI is transforming how information is collected, but it does not change fundamental constitutional protections. In public spaces, individuals generally have no reasonable expectation of privacy, and as a result, law enforcement may use tools such as facial recognition or license plate readers to collect data without a warrant.
However, when it comes to private data—particularly digital devices—stricter standards apply, and courts increasingly examine how AI interacts with search warrants and probable cause requirements. Even when AI systems identify potentially illegal content, human verification and judicial authorization remain necessary.
In short, while AI may assist in identifying information, it cannot replace legal safeguards.
Confidentiality and Attorney Practice
The use of AI by legal professionals also raises important concerns.
In particular, inputting confidential information into open AI systems creates risks regarding data storage, reuse, and potential breaches of attorney-client privilege.
In this context, a key distinction is emerging between “closed” and “open” AI systems. Closed systems rely on curated and controlled data sources, while open systems generate outputs based on broader and less predictable inputs.
This distinction directly affects reliability, confidentiality, and professional responsibility.
Vendors and Institutional Constraints
The increasing reliance on private AI vendors introduces additional challenges. In practice, while many tools are marketed as reliable, their performance does not always meet expectations, and in some cases, their rapid use in high-stakes contexts may be premature.
At the same time, data governance remains uneven, as much of the protection of sensitive information depends on contractual safeguards rather than consistent legal frameworks. Beyond these issues, growing expectations around digital evidence also create practical challenges. Training investigators to detect AI-generated or manipulated content requires significant resources, and many institutions still lack the capacity to meet these demands.
The Central Role of Human Judgment
Despite these developments, one principle remains constant: AI can enhance efficiency and assist in managing complex information, but it cannot replace human judgment.
Criminal justice decisions require interpretation, context, and accountability: elements that remain, by their nature, fundamentally human responsibilities.
Moving Forward with Caution
AI is now embedded in criminal justice systems, and the challenge is no longer whether to use it, but how to do so responsibly.
In this context, several priorities emerge:
- developing clear institutional policies
- ensuring transparency in AI use
- maintaining human oversight
- addressing bias and inequality
- protecting constitutional rights and confidentiality
While AI offers powerful tools, it also introduces new risks. Ultimately, its impact will depend not only on technological progress, but on how carefully it is integrated into existing legal frameworks.
Learn More
For those interested in learning more, the full webinar recording and additional resources are available here:
📌 Webinar Recording and Resources: AI in Criminal Cases: The Courts’ Role in Preserving Constitutional Rights
📌 Presentation Resources: Resource Folder
For more details on AI applications in legal assistance, visit:
🌍 NCSC AI Initiative
This content was last updated on 04/13/2026 at 16:28.
