Barrister Accused of Using AI for Hearing Preparation After Citing Nonexistent Cases

An immigration barrister, Chowdhury Rahman, has come under scrutiny after a tribunal judge found that he had improperly used artificial intelligence tools. Rahman cited cases during a tribunal hearing that were either “entirely fictitious” or “wholly irrelevant.” His apparent use of AI tools such as ChatGPT was highlighted in a recent ruling by Judge Mark Blundell.

Key Findings of the Tribunal

The case originated from the asylum claims of two sisters from Honduras, aged 29 and 35, who argued that they faced threats from a criminal gang in their home country. Rahman represented them, and when the appeal escalated to the upper tribunal, significant issues were discovered in the documentation he had submitted.

Judge’s Observations

  • Judge Blundell noted that Rahman provided 12 authorities in his appeal documentation.
  • Many of these authorities were nonexistent or irrelevant to the legal arguments presented.
  • The judge pointed out that none of the cited cases supported the legal propositions put forward by Rahman.

Concerns Over AI Usage

Blundell expressed serious concerns about the accuracy and reliability of Rahman’s submissions. He stated that the inaccuracies appeared to stem from generative artificial intelligence tools such as ChatGPT, and that the grounds of appeal appeared to have been drafted improperly through reliance on them.

Implications for the Barrister

Further complicating matters, Judge Blundell indicated that he was considering reporting Rahman to the Bar Standards Board over his conduct. He described Rahman’s attempts to conceal his use of AI as problematic and indicative of a lack of thorough research.

Final Ruling

In his judgment, published in September, Blundell firmly dismissed Rahman’s arguments on the appeal. He stated that the barrister’s explanations did not justify the submission of fictitious case citations, and underscored that the issues raised were not merely matters of drafting style but fundamental flaws in Rahman’s reliance on AI to prepare his legal documents.

Conclusion

This case raises critical questions about the ethical use of AI in legal settings. As the technology becomes more deeply embedded across professions, the legal sector must establish clear guidelines to preserve the integrity of proceedings. The actions of Chowdhury Rahman serve as a reminder of the pitfalls of improper use of artificial intelligence in high-stakes fields such as law.