
Image generated by ChatGPT
Artificial intelligence is no longer just a topic of discussion for courts. It is already being used in concrete ways by judges, creating opportunities and raising important questions. One question, however, remains central: how exactly are judges using it in practice?
These developments were at the center of a recent webinar on the use of generative AI in judicial work. The session is part of the AI and Courts series, co-hosted by the National Center for State Courts and the Thomson Reuters Institute. The series brings together judges, researchers, and legal professionals to examine how AI is reshaping judicial work.
Until recently, much of the conversation around AI in courts was based on impressions and isolated examples. There was a clear need to move beyond anecdote and understand what judges are actually doing in practice.
Rather than focusing only on theory, this discussion explored how these tools are already being integrated into daily judicial work. Throughout the session, one idea consistently stood out: AI can support judges, but it cannot replace their judgment.
Moving Beyond Anecdotes
For a long time, conversations about AI in courts were based mostly on impressions and isolated examples. There was a sense that judges were experimenting with these tools, but there was very little concrete information about how they were actually using them in practice.
To address this gap, recent research has begun to look more closely at early uses of generative AI in courts, including interviews with a small group of judges across different jurisdictions identified as early adopters. The objective was simply to move beyond anecdotal discussions and better understand how these tools are being used in practice.
The interviews focused on a few practical questions:
- what benefits judges are seeing
- what risks they are concerned about
- which use cases are the most useful
- what kind of training is needed
Although limited in scope, this type of work offers a valuable snapshot of a system in transition.
What Judicial Use Looks Like in Practice
In practice, these uses range from the simple to the highly structured.
On one level, AI assists with familiar tasks. It can help summarize long transcripts, organize legal research, or clarify technical concepts before turning to traditional databases. These uses save time, but they do not fundamentally change the role of the judge.
On another level, however, more sophisticated practices are emerging.
Some judges are creating controlled AI environments by uploading procedural rules, local regulations, and internal guidelines into a system. They then impose strict instructions: answers must be grounded in those materials, sources must be cited, and uncertainty must be explicitly acknowledged. In this way, AI becomes less of a generative tool and more of a structured assistant operating within defined boundaries.
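The "controlled environment" described above can be pictured as a system prompt that restricts the model to uploaded materials. The sketch below is purely illustrative: the instruction wording and document names are hypothetical examples, not taken from any actual court's configuration.

```python
# Illustrative sketch of a "controlled environment" system prompt.
# The instruction text and material names are hypothetical examples.

GROUNDING_INSTRUCTIONS = """\
Answer ONLY from the uploaded materials listed below.
For every claim, cite the source document and section.
If the materials do not answer the question, say so explicitly
rather than guessing."""

def build_system_prompt(uploaded_materials: list[str]) -> str:
    """Combine strict grounding rules with the list of allowed sources."""
    sources = "\n".join(f"- {name}" for name in uploaded_materials)
    return f"{GROUNDING_INSTRUCTIONS}\n\nUploaded materials:\n{sources}"

prompt = build_system_prompt(
    ["Local Rules of Civil Procedure", "Chambers Style Guide"]
)
```

The point of this pattern is that the boundaries (grounding, citation, acknowledged uncertainty) are written down once and applied to every query, rather than relying on the user to repeat them each time.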
AI is also being used to prepare for highly specific judicial tasks. For example, before hearing expert testimony in a technical field, judges may use AI tools to obtain a quick “primer” on unfamiliar industry terminology. This allows them to better understand the issues at stake and to ask more precise and relevant questions during the hearing.
Beyond adjudication, AI is also being used to support administrative responsibilities. Judges participating in internal or committee meetings can record discussions, generate transcripts, and use AI systems to automatically produce meeting minutes and extract action items assigned to specific participants. These uses illustrate how AI is streamlining not only legal reasoning, but also the broader organizational work of courts.
In addition, some judges are beginning to use generative AI tools to support teaching and public speaking activities, for instance by creating presentation materials or generating visual aids. This further illustrates how these tools extend beyond adjudication into everyday professional tasks.
At the same time, AI is also reshaping how legal information is accessed and consumed. Long academic articles, for instance, can now be transformed into short audio summaries. Instead of reading fifty pages at a desk, legal professionals can engage with complex material during a commute or between hearings. The substance remains the same, but the mode of access changes significantly.
| Type of Use | What Judges Do | Purpose / Effect |
|---|---|---|
| Familiar tasks | Summarize transcripts, organize legal research, clarify technical concepts | Saves time without changing the judicial role |
| Controlled environments | Upload rules, regulations, and guidelines; impose strict instructions (sources, uncertainty, grounding) | Turns AI into a structured assistant within defined boundaries |
| Case preparation | Use AI to obtain a primer on technical terminology before hearings | Improves understanding and supports more precise questioning |
| Administrative tasks | Record meetings, generate transcripts, produce minutes, extract action items | Streamlines organizational and administrative work |
| Teaching and communication | Create presentation materials and visual aids | Supports teaching and public speaking activities |
| Access to legal information | Convert long academic articles into audio summaries | Changes how legal information is accessed and consumed |
Learning by Experimenting First
One of the most striking insights is that this learning process rarely begins in court.
Instead, it starts in low-risk environments. Judges experiment with AI while preparing presentations, organizing teaching materials, or simply exploring how prompts influence outputs. Over time, this helps build a practical understanding of both the capabilities and the limits of these systems.
One particularly useful technique has emerged from these early uses: some practitioners rely on one AI system to improve their interaction with another. For instance, a user may ask one tool to generate a highly precise and structured prompt, which is then used in another system to obtain more reliable and targeted results.
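The two-step flow above can be sketched in a few lines. Here `ask_model` is a stand-in for any chat-completion call; it is stubbed with canned responses so the flow can run without network access, and the model names and reply text are hypothetical.

```python
# Sketch of "one model writes the prompt for another" (prompt chaining).
# `ask_model` is a hypothetical stub standing in for a real LLM API call.

def ask_model(model: str, prompt: str) -> str:
    # Stubbed responses for illustration only.
    if model == "prompt-writer":
        return ("You are a court research assistant. Summarize the attached "
                "transcript in under 300 words, citing page numbers.")
    return f"[{model}] response to: {prompt[:40]}..."

# Step 1: ask one system to draft a precise, structured prompt.
structured_prompt = ask_model(
    "prompt-writer",
    "Write me a prompt for summarizing a hearing transcript.",
)

# Step 2: feed that drafted prompt to a second system for the actual task.
result = ask_model("summarizer", structured_prompt)
```

The design choice here is the separation of concerns: the first call optimizes the instructions, the second executes them, which tends to produce more targeted output than a single loosely worded request.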
This gradual approach is important, as it allows users to develop intuition before integrating AI into more sensitive contexts, where accuracy, confidentiality, and responsibility are critical.
A Space for Judges to Learn Together
As these practices develop, new forms of collective learning are also emerging.
One notable initiative is the Judicial AI Consortium (JAKE), a secure platform created specifically for judges. Designed as a “by judges, for judges” environment, it allows members to exchange experiences, discuss challenges, and share practical insights without the pressure of public scrutiny.
This type of space plays a crucial role. In a field where formal guidance is still evolving, peer discussion becomes one of the most effective ways to build knowledge and confidence.
The Human Side of AI in Courts
While efficiency is often the focus of AI discussions, a more unexpected benefit is beginning to emerge.
By taking over time-consuming administrative tasks, AI may allow judges to focus more on the human dimension of their role. Instead of being absorbed by drafting, formatting, or document review, they can dedicate more time to interacting with litigants, explaining decisions, and ensuring that individuals feel heard.
More fundamentally, this dynamic has been described as an “unlock” for access to justice: reducing the time spent on administrative drafting frees up judicial time, making it more feasible for judges to engage directly with self-represented litigants and explain decisions in a more meaningful way.
This shift is also reflected in how judicial decisions are communicated. Some early uses of AI involve translating complex, jargon-heavy rulings into plain language, sometimes at a sixth- or eighth-grade reading level. This makes decisions more accessible to self-represented litigants and the general public, helping individuals understand not only the outcome, but also the reasoning behind it.
In this sense, technology does not distance judges from people. It can, in some ways, bring them closer.
The Real Challenge Is Not What You Think
When discussing risks, it is tempting to focus on the most visible problems, such as hallucinated cases or incorrect citations.
These issues are real, but they are also relatively easy to detect and correct with existing tools. A more subtle and significant issue, however, lies in bias.
Bias does not appear as an obvious error. It shapes the structure of responses, the framing of arguments, and even the default assumptions embedded in outputs. For example, when generating images for presentations, users often need to explicitly request a “female judge with a diverse background.” Without that prompt, the system will almost always default to an older male figure.
This influence can also extend to strictly legal reasoning. For instance, when asked to generate a legal framework in employment discrimination law, an AI system may unintentionally structure its response in a way that favors the defense. This is not necessarily due to intentional design, but rather to the disproportionate availability of defense-oriented legal commentary published online. As a result, certain perspectives may be overrepresented unless users actively seek balance.
Addressing this type of bias requires active intervention. It must be anticipated, questioned, and corrected through deliberate prompting and critical reading.
At the same time, other issues also remain important, including confidentiality, data protection, and the growing difficulty of verifying digital evidence in an era of synthetic content and deepfakes. These technologies are advancing so rapidly that traditional methods of verification may struggle to keep pace.
There is also a broader question about how reliance on AI may affect the development of reasoning and writing skills among future legal professionals. In this context, judges emphasize a growing responsibility to guide and mentor younger lawyers and clerks in the responsible use of these tools, rather than simply prohibiting them.
A related long-term concern involves legal education. If students and young lawyers rely too heavily on AI tools, they may bypass the cognitive process that occurs during writing itself. Yet this process, which involves structuring arguments, articulating reasoning, and refining ideas, is essential to the development of legal analysis.
Finally, another practical issue relates to the potential displacement of certain roles within the judicial system. Administrative and support functions, in particular, may be more vulnerable to automation, raising broader questions about the future organization of court work.
Boundaries Are Already Taking Shape
In response to these challenges, practical safeguards are beginning to emerge across judicial contexts.
These include separating professional and personal uses of AI, avoiding the input of confidential information, and carefully selecting which tools are authorized. Some systems, such as DeepSeek, have already been restricted in certain contexts due to security concerns.
In addition, some chambers adopt strict account management practices, including prohibiting the use of personal accounts for professional tasks in order to avoid the commingling of sensitive data.
At the same time, AI is not used in isolation. Judges often rely on existing professional tools to verify and refine outputs. For example, citation-checking systems such as Westlaw’s Quick Check can be used to validate references and detect potential hallucinations, while drafting tools like BriefCatch assist in improving clarity and structure within legal writing environments.
Beyond these safeguards, a diverse ecosystem of tools is being used or tested. General-purpose systems like Claude or Gemini coexist with specialized legal tools, as well as emerging models designed specifically for judicial contexts, such as Learn Hand.
This reflects an important reality: AI is not a single tool, but an evolving landscape that requires constant evaluation.
Transparency Without Creating Fear
The question of transparency introduces a more complex tension.
On the one hand, openness about AI use can strengthen trust. On the other hand, overly rigid disclosure requirements may have unintended consequences. If rules become too strict, they may discourage responsible experimentation and push these practices into informal or hidden use.
In this context, some observers refer to the risk of “shadow use,” in which individuals continue to rely on AI tools but do so discreetly rather than openly, due to fear of sanctions or misunderstanding.
A more balanced approach now seems to be emerging, one that prioritizes responsible use supported by guidance and discussion rather than strict formal obligations.
Thinking Beyond Current Uses
Beyond immediate applications, AI also invites a deeper reflection on how justice systems could evolve.
Technological change rarely unfolds in predictable ways. As Richard Susskind has illustrated, humans imagined walking on the moon long before it happened. What they did not anticipate, however, was that millions of people would one day watch such an event live from their homes. This highlights a broader lesson, as the most transformative effects of technology are often the ones we fail to imagine in advance.
This perspective invites a shift in thinking. AI should not only be seen as a tool to improve existing processes, but also as an opportunity to rethink them entirely.
A particularly striking example comes from Albania, where an experiment explored the use of an AI system as a public procurement officer. The underlying idea is that an automated system cannot be influenced by corruption or bribery. Whether or not such initiatives succeed in practice, they raise important questions about how certain institutional functions could be redesigned through technology.
Where to Begin
For those who have not yet engaged with these tools, the starting point is not technical expertise, but a basic understanding of how they work.
This involves exploring available resources, observing current practices, and experimenting in low-risk contexts. Over time, familiarity develops, along with a clearer sense of both their potential and their limitations.
Several concrete resources can support this learning process, including initiatives such as the Sedona Conference, particularly Working Group 13, as well as educational materials developed by the Federal Judicial Center and pilot programs led by institutions such as the American Arbitration Association.
Throughout this process, discussion remains essential, as conversations with peers often provide insights that formal training alone cannot fully replace.
Moving Forward Without Looking Down
Adopting AI requires a delicate balance. A simple analogy captures it: walking along a narrow ledge. The instinct is to look down and focus on the risk, but doing so can make it harder to move forward. The challenge is to remain aware of the danger without becoming paralyzed by it.
The same logic applies here. Legal professionals must remain aware of the risks, including bias, errors, and confidentiality concerns, but cannot allow those risks to prevent experimentation and progress.
AI is already part of the judicial landscape. The real question is not whether it will be used, but how thoughtfully it will shape the future of justice.
Learn More
For those interested in learning more, the full webinar recording and additional resources are available here:
📌 Webinar Recording and Resources: How Judges are Using GenAI
📌 Presentation Resources: Resource Folder
For more details on AI applications in legal assistance, visit:
🌍 NCSC AI Initiative
This content was last updated on April 29, 2026 at 11:08 a.m.
