Detecting AI usage in a document
RICHARD WEINER
Technology for Lawyers
Published: January 24, 2025
With chatbot use in document creation kicking into high gear lately, there is a good chance you will read a legal document that contains some AI-generated content.
This could happen either when someone on the other side writes a brief or when, despite your best efforts, someone in your office uses ChatGPT as a shortcut to create a brief or memorandum for you.
Using generative AI to create legal documents presents a set of problems that do not really exist in other fields of endeavor: the chatbot can invent legal text, cases and principles that do not actually exist. These inventions are famously known as "hallucinations." All kinds of attorneys have gotten into all kinds of trouble with all kinds of courts by filing documents that contain them. Inaccurate citations and even ghost cases present a real problem in the law.
The real problem isn’t the product itself. The real problem is using it in the first place.
Chatbots write in English (or whatever language). The documents they produce can seem normal. But maybe they aren’t quite normal. If you know that AI was used to write a legal document, you are on alert that the document may contain hallucinatory content.
And, in fact, people who train AI chatbots, especially in law, know how to detect AI-generated text. If you know how, you can screen documents for this nonsense and alert a judge (or HR, maybe).
So here are key words that AI tends to use and that, if you see them used in odd ways, can tip you off to its involvement.
In short, look for these:
• Crucial. This is actually the big one, especially if it’s used more than once. Once you see it, you’ll get it.
• Delves
• Showcasing
• Underscores
• Intricate
• Pivotal
• Firstly
• Secondly
• In conclusion
Also, look for the overuse of certain words at the start of paragraphs, like “However,” “Moreover,” “Furthermore,” and “In addition.” A simple script, like the sketch below, can flag these terms automatically.
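For those inclined to automate the screen, here is a minimal sketch in Python that counts how often the tell-tale words and paragraph openers listed above appear in a document’s text. The word list and the idea of counting matches are taken from this column; the function name, the sample text and any threshold you might apply are illustrative assumptions, not a rule of thumb endorsed by anyone.

```python
import re
from collections import Counter

# Tell-tale words and phrases drawn from the list above; adjust to taste.
TELL_TALE_WORDS = [
    "crucial", "delves", "showcasing", "underscores", "intricate",
    "pivotal", "firstly", "secondly", "in conclusion",
    "however", "moreover", "furthermore", "in addition",
]


def flag_ai_tells(text: str) -> Counter:
    """Count how often each tell-tale word or phrase appears in the text."""
    lowered = text.lower()
    counts = Counter()
    for word in TELL_TALE_WORDS:
        # Word boundaries keep "crucially" from matching "crucial".
        hits = re.findall(r"\b" + re.escape(word) + r"\b", lowered)
        if hits:
            counts[word] = len(hits)
    return counts


if __name__ == "__main__":
    # Hypothetical sample text, for illustration only.
    sample = (
        "Firstly, this case is crucial. The opinion delves into intricate "
        "doctrine and underscores a pivotal point. In conclusion, it is crucial."
    )
    for word, n in flag_ai_tells(sample).most_common():
        print(f"{word}: {n}")
```

A high count is not proof of anything; it is simply a prompt to read the document, and especially its citations, more carefully.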
It’s not really that nobody can use generative AI. It’s more that these words, and writing that reads as if a robot wrote it, can tip you off to the possibility of hallucinated citations.
Well anyway, good luck with all of this. It takes practice, but you can do it. Or a trained editor can.
H/t to David M Lester at Atkinson Andelson Loya Ruud & Romo.