Judges Acknowledge Using AI Following Criticism of Fabricated Rulings

Two federal judges have acknowledged that artificial intelligence (AI) tools used in their chambers contributed to flawed rulings. The admissions have heightened concerns about the accountability of judicial decisions and the judiciary's reliance on AI for legal research.

Judges Address AI Challenges in Rulings

US District Judges Julien Neals of New Jersey and Henry Wingate of Mississippi sent letters to the Administrative Office of the US Courts on October 20 and 21, responding to inquiries from Senate Judiciary Committee Chairman Chuck Grassley. Both judges admitted that the use of AI played a significant role in erroneous legal orders issued from their chambers.

Errors Due to AI Tools

  • Judge Neals highlighted that a law school intern misused ChatGPT for legal research, leading to a flawed order dated June 30.
  • Judge Wingate noted that his law clerk utilized an AI tool, Perplexity, which resulted in incorrect references in a temporary restraining order issued on July 20.

Neither judge adequately verified the AI-generated content. The oversight prompted both to reassess their chambers' policies on AI use.

Judicial Accountability and Oversight

Legal experts emphasize that judges bear ultimate responsibility for the validity of the citations in their rulings, regardless of the source. Stephen Gillers, a professor at New York University School of Law, remarked that judges must personally review the cases they cite, whether those cases surface through AI or traditional research methods.

Bruce Green, a law professor, questioned how often judges may be issuing unvetted drafts and pointed to the risks inherent in releasing opinions without thorough review.

Policy Changes in Judicial Chambers

In light of the recent mistakes, Judge Wingate now requires that every draft decision undergo independent review by a second law clerk before it is issued, and that all citations be supported by printed copies of the referenced cases.

Judge Neals has formalized his chambers' policy against using AI for legal research and the drafting of orders, a rule he had previously communicated only informally.

Senate Inquiry into AI Usage

Senator Chuck Grassley opened his inquiry after the judges rescinded and amended the erroneous rulings. The investigation underscores the need for clear AI guidelines within the judiciary to uphold the integrity and accuracy of legal proceedings.

Grassley stated, “The judicial branch needs to develop more decisive, meaningful, and permanent AI policies and guidelines.” He cautioned against any negligence or over-reliance on AI, emphasizing the importance of maintaining the judiciary’s commitment to factual accuracy.

Interim Recommendations for AI Use

In a related letter, Robert Conrad, the director of the Administrative Office of the US Courts, relayed interim guidance from a task force on AI use. The guidance advises against delegating core judicial functions to AI, particularly where new or complex legal issues are involved.

Moreover, it stresses the necessity for judiciary members to verify AI-generated outputs independently to ensure accountability in all judicial work. This reflects a growing recognition of AI’s potential risks when not properly managed in legal contexts.