A Utah attorney has been sanctioned by the state’s Court of Appeals after submitting a legal brief containing a fabricated court case citation generated by the AI chatbot ChatGPT.
According to court documents reported by ABC4, attorney Richard Bednar filed a petition citing a case titled Royer v. Nelson, which, upon review, could not be found in any legal database. The case had apparently been invented by ChatGPT and was mistakenly included as legitimate legal precedent.
In its ruling, the Utah Court of Appeals acknowledged the growing use of AI in legal research but made clear that it does not absolve attorneys of their ethical responsibilities.
The petition was jointly filed by Bednar and fellow attorney Douglas Durbano, seeking an interlocutory appeal. However, it later emerged that Durbano had played no role in preparing the document. Instead, the brief was drafted by an unlicensed law clerk—who has since been fired—using generative AI tools.
During an April hearing, Bednar admitted to the error, apologized, and accepted full responsibility. He also offered to reimburse the client and pay the opposing party’s legal costs.
As part of the sanctions, the court ordered Bednar to:
- Pay the respondent’s legal fees related to the flawed petition and court hearing,
- Refund his client for time spent on the defective filing,
- Donate $1,000 to the Utah nonprofit legal aid organization And Justice for All.
The incident adds to growing concerns over the unchecked use of AI in legal proceedings. It echoes a similar case in New York last year, in which two lawyers were penalized for including fictitious ChatGPT-generated cases in a court brief.