Washington Lawyer May/June 2026
By John Murph
In October 2025, two federal judges acknowledged that members of their staff had used artificial intelligence to draft court orders that contained factual errors, including misidentified individuals and incorrect quotations of state law. Responding to an inquiry by U.S. Senate Judiciary Committee Chair Chuck Grassley, U.S. District Judge Henry Wingate in Mississippi and U.S. District Judge Julien Xavier Neals in New Jersey said the drafts in the unrelated cases did not go through their chambers' typical review process before they were issued.
AI Hallucination Cases, a database compiled and maintained by French lawyer Damien Charlotin, tracks decisions in cases where generative AI produced hallucinated content. As of March 2026, Charlotin had documented 808 such cases in the United States.
Recognizing that AI is rapidly integrating into nearly every sector of society, including court systems across the country, the D.C. Courts established the Artificial Intelligence Task Force by administrative order in March 2024. The task force is charged with exploring ways to embrace AI's innovative solutions in reasonable, ethical ways while building guardrails to prevent harm to the D.C. public.
"One of the big things that we, as the court system, had to recognize is that AI is here," says D.C. Superior Court Chief Judge Milton C. Lee Jr. "The people who come to the courthouse, whether they are lawyers or [not], use it quite a bit. We needed to get ahead of the curve on this."
Shaping a Use Policy
The seeds for the task force were planted sometime between 2023 and 2024 when D.C. Court of Appeals Chief Judge Anna Blackburne-Rigsby served as president of the Conference of Chief Justices and chaired several national task forces, including some focused on AI.
The D.C. Courts' AI task force's first output, created in collaboration with the National Center for State Courts, was the AI Strategic Planning Roadmap. Released in June 2025, the roadmap includes guiding principles, a governance structure with five dedicated committees, and an internal AI Use Policy.
"We came up with a phenomenal use policy, which is a model that other courts are also using," Blackburne-Rigsby says. "I think our multidisciplinary approach, [in which] we were fortunate to get the consultation services from the National Center for State Courts and their experts, helped us get our task force going."
Released in July 2025, the policy defines generative AI and large language models (LLMs), sequestered and nonsequestered systems, and confidential court information. More importantly, the policy explicitly states that "[u]sers may not delegate decision-making responsibilities to any AI tool. Generative AI tools are intended to support (or aid) decision making and are not a substitute for judicial, legal or other professional judgment or expertise."
"When it comes down to decision-making in cases, this is not an AI effort," Lee says. "Judges are making those decisions. We have learned lessons from other places across the country [that] we have to be very careful about what AI does for us and [what] it doesn't."
A guide for court support staff lists permissible AI applications in court, including legal research, document review and analysis, transcription and translation services, virtual assistants, predictive analysis for court workloads and case outcomes, and risk assessments for recidivism.
According to the task force, AI usage at the D.C. Courts is guided by three main principles. First, AI should support the core mission of the courts by facilitating peaceful, fair, and timely case resolution. Second, AI should be leveraged to improve access to court services for all, particularly underserved communities. And third, the courts must follow the highest ethical standards when using AI, ensuring the integrity of court processes, accuracy of data, protection of confidentiality, and preservation of public trust and confidence.
"A huge focus of our AI policy centers around security, protecting confidential information, and trying to ensure that we are really engaging in best use practices," says D.C. Court of Appeals Judge John P. Howard III, co-chair of the task force.
The policy emphasizes the importance of human oversight, mindfulness regarding information confidentiality and data protection, and mandatory court-offered AI trainings. "Any use of AI is the responsibility of the user in all aspects, from entering data and instructions (prompts) to supporting or promoting AI-generated content. The user is responsible for reviewing and ensuring the accuracy and dependability of AI-derived work product," the policy states.
Streamlining Internal Operations
Currently the D.C. Courts are leveraging AI technology for administrative functions such as scheduling and task management, reducing repetitive manual work so court staff can focus on higher-value duties, processing payments more quickly and accurately, revealing patterns in the types of cases being filed, and automatically redacting private or sensitive information in documents.
D.C. Courts Executive Director Herbert Rouson Jr., a member of the task force, says the courts had already been using AI before the arrival of ChatGPT. "Within our budget and finance division, a number of years ago, we implemented Robotic Process Automation, a bot … that does the work of some of the account reconciliations," he says.
"In that respect, it has freed up staff to do some of the more valuable analytics associated with accounts," Rouson adds. "That product has then been scalable across other divisions [for tasks] such as reviewing contracts. It's now being scaled [across] our human resources division to look at ways to more effectively and efficiently deliver service to our court stakeholders."
Another AI tool the D.C. Courts have used since before 2024 is ServiceNow, which streamlines employee onboarding and offboarding in the human resources department. "ServiceNow enables staff to ask specific questions, then the tool will generate a response without necessarily having to engage with a human resource staff member," Rouson explains.
Howard mentions that the D.C. Courts launched its first LLM and an internal chatbot this year with the intent of letting "D.C. Courts users internally be able to access policies at their fingertips and have [that chatbot] work to make things more efficient." For security reasons, Howard declined to disclose the exact LLM system being used by the D.C. Courts.
Exploring the ways AI can enhance the courts' case management system is another top priority, according to Rouson. The task force is looking to achieve greater efficiency in docketing, receiving, filing, and scheduling cases. Used effectively, AI can help the D.C. Courts reduce case backlogs, according to the task force report. A redesign of the D.C. Courts' website is also forthcoming; the task force is testing a chatbot to give web users a better search and navigation experience.
Monitoring AI in the Courtroom
Advances in AI technology have unfortunately led to the creation of nefarious synthetic media such as deepfake audio and videos, which can appear deceptively real to untrained eyes and ears. If undetected, deepfakes introduced as evidence could mislead courts and compromise judicial decisions.
"There has always been a capacity to fake and alter evidence," says Blackburne-Rigsby. "AI makes it infinitely easier to do so and makes the fakes a lot better. I think a lot of the tools that trial judges, who determine what evidence comes into court, use [will require] rigorous inquiry about where the evidence came from, what the chain of custody was, how it was made. Those are the kinds of questions that judges are now going to have to be extra sure they put on the record."
Even though several software companies claim to have the capacity to detect fraudulent AI-generated content, Blackburne-Rigsby hasn't bought into those promises. "Some of that, I think, is marketing and financially motivated," she says. "Nothing is really going to be a substitute for what trial judges do and always have done very well: asking the right questions and being knowledgeable. There are legal hurdles to mount before you can get any evidence admitted. Those hurdles are there to try to ensure that the evidence is as authentic as possible."
D.C. Superior Court Judge Donald Tunnage, co-chair of the AI task force, says that the D.C. Courts have increased what is permissible in evidentiary discovery. "Now, we are getting a lot more requests for internal or digital examinations of documents," he says. "More attorneys are saying, 'I want to look at your computer and images to see how [they were] created and when.'"
Chief Judge Lee says the courts are "acutely aware of the challenges that AI brings, including the authentication of evidence and the admissibility of evidence, [which] still are judge-driven functions."
"As technology grows, we're going to have to grow with it and try to stay in front of it," Lee says. "We also have to rely on the fact that our lawyers are bound by ethical considerations that we expect them to honor."
Rouson, who was recently appointed to the Joint Technology Committee of the National Center for State Courts, notes that the D.C. Courts aren't alone in grappling with issues regarding deepfakes. "That committee is looking at the same kinds of issues that we are talking about and is developing whitepapers that study the issues across a number of jurisdictions," he says. "That helps inform the decisions we make in the D.C. Courts."
In terms of guardrails, Rouson says the D.C. Courts are staying at the forefront, with other jurisdictions such as Maryland, Virginia, New York, and Georgia reaching out to examine how the courts structured their internal use policy.
"As much as we are a national leader … we are also humble enough to know what we do not know," Rouson says. "That's why participation in some of these national organizations such as the Conference of Chief Justices helps us expand the aperture through which we are evaluating how we, too, effectively utilize artificial intelligence moving forward in a safe and effective way."
D.C. Bar staff writer John Murph has received three Luminary Awards from the National Association for Bar Professionals for his feature articles. Reach him at [email protected]. D.C. Bar member Tara Vassefi, an attorney with Solidarity Law Cooperative, contributed to this article.