May 13th, 2026

Why Using AI For Your School's Legal Work Is A Terrible, Horrible, No Good, Very Bad Idea

The news and social media are so saturated with stories about artificial intelligence (AI) that another piece on the topic may seem one straw too many for the camel to bear. We are inundated daily with accounts that AI will improve efficiency, enhance productivity, replace human workers, and/or bring about the end of humanity. School districts, however, should be warned about the unique risks AI poses when relied upon for legal advice. This article highlights two serious problems with using publicly available AI to research legal issues or generate documents in litigation: so-called “hallucinations” and the lack of privilege.

The existence of AI “hallucinations” has been recognized for several years. Put simply, AI tools are built to supply what they sense the requestor wants – sometimes to a fault. If asked whether the law permits a particular action, for example, an AI will marshal whatever legal support it can find for that action, often overlooking or disregarding contrary authority even when that contrary authority reflects the legally correct answer. AI tools are also known to mistake proposed legislation for actual law. Worse still, if insufficient legal support for the proposition exists, the AI will simply make something up. This is not a rare occurrence – a growing number of lawyers have been sanctioned for submitting materials in litigation that turned out to be entirely AI-generated and cited cases that do not exist at all.

In one well-publicized case from downstate New York, Mata v. Avianca, Inc., the lawyers for a plaintiff suing an airline submitted an AI-generated brief to the court, only for the airline’s lawyers to point out that the six primary cases relied upon in the brief were all imaginary. They also noted that other cited cases, though real, did not stand for the propositions for which they were cited; the AI had taken real cases and assigned them whatever meaning suited its answer, without regard to what they actually said. The lawyers in Mata were fined $5,000 for their actions. In another case, a Texas lawyer who submitted briefs containing AI hallucinations in a federal lawsuit in Indiana was initially fined $15,000, although the fine was later reduced to $6,000. If trained lawyers can be persuaded by the seemingly plausible legal analysis AI produces, non-attorneys, who lack legal training and experience, are presumably even more vulnerable to mistaken reliance on faulty AI legal analysis.

While AI can be a powerful tool for streamlining the initial hunt for relevant legal authority, it is up to the user to verify the information it provides – a task that relies heavily on the special expertise of a lawyer. Responsible attorneys will use authoritative sources with which they are uniquely familiar to verify not only the existence but also the substance of the caselaw and arguments an AI provides.

Another defect of public AI tools, however, cannot be remedied even when exceptional care is used, as a recent court decision illustrates. In U.S. v. Heppner, another case from downstate New York, a man accused of criminal corporate fraud – and facing a grand jury indictment – used a public AI platform to generate outlines of his defense strategy and potential legal arguments. In doing so, he entered information he had learned from his attorneys, and he shared the results with them. The government sought the AI conversations in the course of the criminal proceedings, and the man and his attorneys objected on the basis of attorney-client privilege – the privilege that places communications between a party and his, her, or its counsel beyond the reach of an opponent, ensuring that a party can be forthright with the attorney without fear that the communications will later be reviewed by an adversary.

The court in that case ruled that the conversations with the AI platform were not covered by attorney-client privilege. In part, this was because the AI platform could not reasonably be said to be the man’s attorney at all. Nor could it be regarded as merely a tool within the attorney-client relationship, because the information the AI collected was not kept confidential; under the company’s policy, it could be shared with a host of third parties, including government regulators. The government thus was able to obtain the AI conversations, including all the information about the man’s defense strategy, the details he had entered about his communications with his attorneys, and any admissions he had included along the way.

In short, conversations with public AI platforms like ChatGPT can be expected to be “discoverable” – that is, subject to being obtained by an opposing party in litigation, or potentially by parties to litigation in which the school district is merely a bystander. (There are AI platforms which, unlike ChatGPT, do not share information outside the organization using them; their use within law firms is generally viewed as safe.) The information that can be gleaned from AI conversations used to generate legal advice is potentially of high value to an adversary. As a result, it is not a good idea to use public AI platforms to investigate legal issues, whether or not litigation is underway, as doing so may expose your school to liability.

Again, AI can be a helpful tool when used correctly – but using public AI tools to generate legal advice is an idea that should not be given serious consideration.


Charles "Chris" Spagnoli
