A federal judge in New York has ruled that information created by artificial intelligence chatbots can be used in court as evidence, raising legal concerns about how people use tools like ChatGPT, Gemini and Claude.
Attorney Rory Safir, of Safir Injury and Criminal Defense Law, said AI apps may be capable of answering complex questions, but they should not be treated like lawyers.
“Lawyers read the fine print, but a lot of folks maybe don’t,” Safir said.
The ruling came in the case of a former CEO who had already been indicted on fraud charges. Prosecutors said the defendant used the AI chatbot Claude to ask questions about the charges they would pursue and about a possible defense plan.
“You’re still not chatting with an attorney, so there’s no privilege there,” Safir said.
Because AI chatbots are publicly accessible platforms, Safir said, such research is treated the same as evidence collected from texts, emails or internet searches. That information can be used to prove intent or premeditation.
“If you wouldn’t shout it out in public, be careful with what you’re putting in there,” Safir said.
Safir said the outcome could have been different under other circumstances.
“Had it been some kind of proprietary AI, like some law firms might have, like their in-house AI or something, where there’s a reasonable expectation of confidentiality, that could have been OK,” Safir said.
He said it is not yet common practice for attorneys to direct clients to use a law firm’s own AI program.
Anyone who has used an AI chatbot to research legal questions and is facing court proceedings should disclose those searches to their attorney rather than conceal them, Safir said. Informing legal counsel immediately gives attorneys the opportunity to address the searches as part of case preparation.
Originally written by: Casey Torres
Source: AZ Family
Published on: 6 March 2026
Link to original article: AI chatbot searches can be used as evidence in court, attorney says