Former Trump attorney accuses AI of generating false references in legal filings

This incident isn't the first involving attorneys relying on AI-generated data only to discover inaccuracies
The image shows a person using Google Bard on phone. — Unsplash

Former Donald Trump attorney Michael Cohen has reportedly admitted to unintentionally providing his lawyer with erroneous legal references generated by Google Bard, an artificial intelligence (AI) chatbot.

In a recent court filing, made ahead of his expected role as a witness in Trump's impending criminal trials, Cohen acknowledged sending legal citations generated by Google Bard to his lawyer, David Schwartz, in support of his case.

“The inaccurate citations in question, alongside numerous others uncovered by Mr Cohen but unused in the motion, were the product of Google Bard. Cohen mistakenly perceived this service as an amped-up search engine, not recognising it as a generative AI tool like Chat-GPT,” stated the court filing in the case United States v Michael Cohen, as reported by Reuters.

The filing also contended that Cohen, who is not actively practising law, was merely relaying information to his attorney, and that it was the attorney's responsibility to vet the information before including it in official court documents.

“Mr Cohen is not engaged in legal practice and lacks awareness of the risks associated with using AI services for legal research, with no ethical obligation to validate the accuracy of his research,” the filing stated, calling for further scrutiny of the matter.

This incident isn't the first involving attorneys relying on AI-generated data only to discover inaccuracies.

Earlier this year, attorney Steven Schwartz of the New York law firm Levidow, Levidow & Oberman faced backlash for using AI to produce factually incorrect court references.