
Claude AI's Legal Citation Blunder Lands Anthropic in Hot Water

Anthropic's law firm Latham & Watkins has admitted that one of its attorneys used Claude AI to generate a citation in a music publishers' copyright lawsuit, producing a hallucinated reference with fabricated authors and an inaccurate title. The error occurred when attorney Ivana Dukanovic used Anthropic's own AI to format a citation for a legitimate academic article but failed to catch the mistakes during review. U.S. Magistrate Judge Susan van Keulen called the situation 'a very serious and grave issue,' highlighting growing concerns about AI reliability in legal contexts.
In an ironic twist that underscores the challenges of AI adoption in professional settings, Anthropic's own Claude AI has created problems for the company in court.

On Thursday, May 15, 2025, attorney Ivana Dukanovic from Latham & Watkins formally apologized to a Northern California federal court after using Claude to generate a legal citation that contained fabricated information. The hallucinated citation appeared in a declaration from Anthropic data scientist Olivia Chen, who was serving as an expert witness in the company's ongoing copyright battle with music publishers.

The lawsuit, filed by Universal Music Group, Concord, and ABKCO, alleges that Anthropic misused copyrighted song lyrics to train its Claude AI model. The publishers claim the AI was trained on lyrics from at least 500 songs by artists including Beyoncé, the Rolling Stones, and The Beach Boys without proper authorization.

According to court documents, Dukanovic asked Claude to format a citation for a legitimate academic journal article from The American Statistician that Chen had referenced. While Claude provided the correct publication title, year, and link, it invented nonexistent authors and an inaccurate article title. The attorney's 'manual citation check' failed to catch these errors before submission.

U.S. Magistrate Judge Susan van Keulen expressed serious concern about the incident, noting there is 'a world of difference between a missed citation and a hallucination generated by AI.' In response, Latham & Watkins has implemented 'multiple levels of additional review' to prevent similar occurrences.

This case joins a growing list of AI hallucination incidents in legal proceedings. Earlier this month, a California judge sanctioned two law firms for submitting 'bogus AI-generated research,' ordering them to pay $31,100 in legal fees. In another recent case, an attorney was fired after using ChatGPT to generate fake legal citations. Legal experts warn that AI tools may be useful for brainstorming but cannot replace traditional legal research and verification processes.

As AI adoption accelerates across professional fields, this incident serves as a cautionary tale about the technology's limitations and the critical importance of human oversight, particularly in high-stakes environments like courtrooms where accuracy and credibility are paramount.
