Trust but Verify: ChatGPT is a tool, but not a replacement, for human legal research

Cynthia Conlin

As an attorney, I’m always looking for ways to streamline my practice and make the most of my time. Since last year, I have been using ChatGPT, as well as Westlaw’s and LexisNexis’s AI models, to assist with various tasks, including summarizing cases, streamlining legal arguments, and proofreading. With its ability to process vast amounts of information in seconds and to proofread and rewrite text, ChatGPT has become a valuable tool in my arsenal. It is not, however, without its flaws.

Recently, I uploaded to ChatGPT multiple defamation cases that I had downloaded from Westlaw and asked it to summarize aspects of them involving certain affirmative defenses. One was Falic v. Legg Mason Wood Walker, Inc., a 2004 defamation case involving two executives who sued their employer. When ChatGPT returned its initial summary, it looked impressive:

In Falic v. Legg Mason Wood Walker, Inc., the court addressed whether statements made in an internal memorandum were protected by qualified privilege. The memorandum noted that the executives had a ‘history of restructurings, write-offs, bankruptcies, falling stock prices, dubious inter-family business transactions, and extremely generous salaries coming out of marginal or failing public entities.’ The court found that the statements were protected under qualified privilege because they were made in good faith during an internal investigation—a context where the company had a legitimate interest in discussing the executives’ conduct and performance.

At first glance, the summary seemed accurate. However, as a diligent attorney, I knew I had to verify its accuracy. When I reviewed the case myself, I discovered that ChatGPT’s summary was the opposite of the actual holding.

In Falic, the court did not, in fact, find that the statements were protected under qualified privilege. Rather, it held that qualified privilege did not apply, as the statements had been disseminated too broadly, which negated the relationship necessary to invoke such a privilege. The court’s exact words were, “Accordingly, the Court holds that Defendant’s allegedly defamatory statements are not protected by a qualified privilege.”

This discrepancy was significant and underscores a crucial point: While ChatGPT can be an incredibly helpful tool for quickly processing and summarizing information, it is not infallible. Its interpretations and summaries must always be double-checked against the original sources.

Moreover, ChatGPT should not be considered a substitute for professional legal advice. This is especially important for pro se litigants—those representing themselves in legal matters—who might be tempted to rely on ChatGPT to draft legal arguments or court filings. While the AI can provide general information and assist with research, its output can often be incorrect or misleading, which, in a legal context, can potentially lead to serious consequences.

As another example, I once asked ChatGPT to create a legal argument for a motion to dismiss based on the statute of limitations. The draft it produced seemed well-structured and cited relevant case law. However, upon closer inspection, I noticed that the AI had misinterpreted the applicable statute, incorrectly calculating the limitations period. If I had used that argument in court without reviewing and correcting it, it could have resulted in a denial or, worse, sanctions or a Bar violation.

Despite these cautionary tales (and, if you hang out in lawyer circles, you will hear many), I remain optimistic about the use of AI tools like ChatGPT in the legal field. Here are a few reasons why:

Efficiency: ChatGPT can process and summarize large volumes of textual content much faster than a human can. This can be particularly useful for preliminary research and getting a broad overview of relevant cases. The quicker I can conduct research, the more savings I pass on to my clients.

Accessibility: AI, particularly Westlaw and LexisNexis’s models, provides access to information and legal precedents that might otherwise be time-consuming to locate and review manually.

Supplementary Support: ChatGPT can serve as a valuable assistant, helping legal professionals manage their workload more effectively and focus on more complex, analytical tasks.

Each AI tool that I use (Westlaw AI, Lexis AI, and ChatGPT) has its strengths and weaknesses, but, by using all three, I can pose the same legal question to each and compare the different answers. This multi-faceted approach helps me cross-check information and ensure accuracy.

Additionally, the more context and feedback you give ChatGPT, the better it can adapt to your specific needs, refining its outputs in response to your corrections.

The key takeaway from my experience is that ChatGPT should be used as a supplementary tool rather than a sole source of information. It is invaluable for initial research and quick summaries but must be used with the understanding that its outputs need verification. As with any tool, its effectiveness is maximized when combined with human expertise and critical thinking.

In the end, while AI can significantly enhance our capabilities, it is up to the user to ensure the accuracy and reliability of the information it outputs. By always double-checking and corroborating AI-generated data, we can harness the full potential of tools like ChatGPT while mitigating the risks of errors and inaccuracies.

So, the next time you use ChatGPT for legal research, remember: trust, but verify.


Cynthia Conlin is the lead attorney at the Law Office of Cynthia Conlin, P.A., an Orlando law firm focusing on assisting businesses and individuals with litigation needs.
