You’re not hallucinating: ChatGPT can make your legal research and writing better, faster, and easier

We’re all on an AI journey—like it or not

We’re all somewhere on the legal AI learning curve. Some lawyers have already woven AI tools into their daily practice. Others are horrified by the idea. Whether you’re curious, cautious, or deeply skeptical, one thing is clear: AI isn’t going away. These tools will become part of legal practice—it’s just a matter of when and how we choose to adopt them.

If you’re new to legal AI, this post offers a few straightforward, low-risk ways to start using large language models (LLMs) to make your legal research and writing faster, better, and easier. I’ll focus on ChatGPT, since that’s the tool I use most, but others—like Claude.ai and Google Gemini—can be equally helpful.

First, a quick AI primer

This post focuses on generative AI—tools that produce content, usually in the form of text, in response to user prompts. These tools rely on predictive models trained on vast amounts of data to generate human-like language.

The best-known generative AI tools are large language models (LLMs), like ChatGPT. These systems use “natural language processing” to simulate how humans write by predicting the next word in a sentence based on the context of the words that came before it. They’re trained on massive datasets—mostly scraped from the internet—and learn patterns in language that allow them to produce impressively coherent text.

Once trained, an LLM responds to your prompt by predicting what comes next, word by word. It doesn’t think or reason in a human sense, but it mimics those processes remarkably well. Its accuracy can be improved through clearer prompts or “prompt engineering”—a fancy way of saying that the better your instructions, the better the result.

Most LLMs offer both free and paid versions. I use ChatGPT Plus (USD $20/month), which includes access to the more advanced GPT models and features memory—so it can remember my style and preferences across chats.

Proceed with caution: key risks of using AI in legal work

There are many good reasons to remain cautious about integrating generative AI into legal practice. Here are some of the big ones:

  • Inaccuracy and hallucinations – GenAI tools can “hallucinate”—that is, fabricate case law, citations, or statutes that do not exist. Even more insidiously, they may misrepresent the content of real legal authorities or provide incomplete answers.

  • Overconfidence – Accuracy concerns are compounded by the fact that ChatGPT’s legal answers often sound correct. These tools are exceptionally good at predicting fluent, confident, grammatically polished text—even when the legal reasoning is flawed or incomplete. The clean interface and instant responses can give a false sense of authority, making it easy to place too much trust in the output.

  • Information access limitations – AI tools like ChatGPT may obscure what they don’t know. For instance, they don’t have access to CanLII or other subscription-based legal databases. If you ask a legal question, you’ll get a legal-sounding answer—but it may not reflect Canadian authorities, recent jurisprudence, or even real case law.

  • Jurisdictional and temporal limitations – ChatGPT may default to American or outdated law, even when asked a question framed in Canadian terms. It may also offer summaries that reflect older jurisprudence, especially if recent cases haven’t been incorporated into the model’s training data.

  • Privacy and confidentiality concerns – Unless the AI tool is purpose-built for legal use with strong data protections (e.g., no retention, encryption, and data localization), there's no assurance that client data won't be retained or used to train models. There is also a risk that the use of AI could put privilege at risk by giving rise to potential waiver arguments.

  • High cost of legal AI tools – There are increasingly sophisticated legal-specific tools on the market (like CoCounsel, Harvey, and Lexis+ AI) that address many of the risks listed here. But they are pricey. For smaller firms, sole practitioners, or lean in-house teams, the cost may simply be too high to justify—at least for now.

  • Lack of legal judgment and critical analysis – LLMs can mimic legal analysis but lack the deeper reasoning, strategic judgment, and procedural instinct that good legal research requires. They don’t think critically, weigh conflicting authority, or understand nuance. When over-relied on, AI tools may lead to superficial research or prevent lawyers from developing creative, rigorous arguments.

  • The black box problem — One of the most challenging aspects of LLMs is their opacity. These systems operate as “black boxes”—we can describe generally how they work, but not exactly why they generate a particular response. The path from input to output isn’t visible, and there’s no explanatory chain of reasoning you can follow. This makes it difficult to assess the reliability of a given answer.

  • Loss of learning opportunities – There is also the concern that tasks traditionally delegated to students and new lawyers (like reading and summarizing huge piles of cases) will result in a loss of learning opportunities for our junior colleagues.

  • Bias — LLMs are trained on vast swaths of online text—and with that comes the biases, stereotypes, and imbalances embedded in those data sources. These models may replicate or amplify bias in subtle ways. They may also reflect dominant legal perspectives and overlook marginalized or evolving jurisprudence.

So how can we use these tools responsibly? In the next section, I’ll walk through a few low-risk, high-reward ways to start experimenting with tools like ChatGPT.

Smart and safe ways to use AI in legal research and writing

Even with all the caveats, LLMs like ChatGPT offer real value to legal writers. They won’t do all the work for you, but they can absolutely support your process. Here are a few safe, productive ways to begin.

1.        Summarizing cases

I often use ChatGPT to summarize case law. It’s not perfect—but it’s fast and helpful when used with care.

When I want a quick overview of how a statute, legal phrase, or issue has been interpreted, I’ll download the relevant cases from a trusted source (like CanLII or Lexis), upload them to ChatGPT, and provide a prompt that looks something like this:

  • Please summarize these 10 cases. Organize the summaries in chronological order. Each summary should have a section for Facts, Issues, Reasoning, and Conclusion. In the “Reasoning” section, specifically address how the court addresses [case authority/statutory provision/legal concept]. I want the heading for each summary to be the case name and citation, with the name of the judge(s) underneath.

In under ten minutes, I can generate a tailored document that gives me a solid snapshot of the law. The summaries aren’t flawless, but they’re good enough to show trends, provide context, and help me decide what to read in full.

Sometimes I use the summaries internally to orient myself before deeper research. Other times, I share them with instructing lawyers—always with a disclaimer that they’re AI-generated and unverified. Many of my instructing lawyers appreciate getting a high-level overview without paying me to read all the cases I’ve found.

I also rely on ChatGPT to help me triage long or difficult-to-read cases—you know the ones: old cases written in outdated language, or appellate decisions with multiple opinions. These quick summaries help me decide which ones are worth close reading and which I can set aside.

And when I’ve already read a case but need to refer to it in a memo, I’ll often ask ChatGPT for a quick summary to refresh my memory of the basic facts and reasoning.

2.        Brainstorming legal concepts

ChatGPT can be an effective brainstorming partner, giving you a jumping-off point for your research.

When using ChatGPT for brainstorming, always remember that it does not cite real cases reliably and may misstate legal principles. Treat its output like brainstorming notes from a colleague: helpful and useful for sparking ideas, but occasionally wrong.

Sample prompts:

  • Three employees left their company to start a competing business. They are using the company’s proprietary IP in their new business. They have also been soliciting the company’s clients. What causes of action can the company raise against the employees?

  • What causes of action might be available if a contractor gives a misleading quote?

  • Can you suggest search terms to research unjust enrichment in a joint family venture context?

  • I need to argue that a waiver of liability was not enforceable—what are some supporting points?

  • What arguments can a defendant raise in response to a breach of fiduciary duty claim?

  • What are some search terms I should try when researching waiver in the context of contract law?

And remember, never input confidential client information or details that could identify a matter.

3.        Rephrasing or clarifying your ideas

We’ve all stared at a paragraph, knowing it needs work but unsure where to start. ChatGPT can act as a second set of eyes—suggesting edits, smoothing transitions, or helping rephrase awkward ideas.

Sample prompts:

  • Here’s a rough paragraph — can you make it clearer/more persuasive?

  • Smooth out the flow of this section while keeping it formal and professional.

  • Please suggest more persuasive headings for this section of my factum.

  • Draft an introductory paragraph for this legal argument.

  • Here is my argument on causation—what’s confusing here?

Remember to scrub your writing of confidential information/details before submitting it in a prompt!

4.        Explaining legal concepts in plain language

LLMs are excellent at breaking down complex ideas into plain language. This can be helpful when writing client opinions. Since you already understand these concepts yourself, using AI to generate a plain language summary is a safe, low-risk way to save time.

Sample prompts:

  • In two sentences, explain the legal concept of “betterment” in plain language.

  • Rewrite this paragraph so it is more understandable to a non-lawyer audience.

  • Write a short, clear explanation of what “limitation period” means in civil litigation.

  • Explain “constructive dismissal” in plain language.

  • Rephrase this paragraph so a high school student could understand it.

5.        Adjusting and improving tone and style

Sometimes we’re too close to our writing to notice when the tone feels off. ChatGPT can help you revise for tone—making your message more empathetic, confident, clear, or audience-appropriate.

Sample prompts:

  • Re-word this email to sound more approachable while staying professional.

  • Adjust the tone of this client letter to be more empathetic and reassuring.

  • Revise this letter to remove the passive voice where possible.

  • Make this letter to opposing counsel more assertive while remaining polite.

Again, don’t include any confidential client information or identifying details in the prompt!

6.        Beating writer’s block

We tend to procrastinate on tasks that are a) boring, b) hard, and c) unstructured. Even I can admit that legal research is sometimes all three!

It’s understandable to stall when staring down a massive stack of cases—or one of those early 20th century House of Lords decisions with multiple opinions and no clear majority. And of course we procrastinate when we don’t know how to structure a long writing assignment, or improve our own first draft.

AI can help break the logjam. Use tools like ChatGPT to nudge yourself past the inertia by summarizing cases, generating ideas, and rephrasing your writing.

Final thoughts

ChatGPT isn’t going to draft the perfect factum or locate that golden precedent. But it can make you faster, clearer, and a little less lonely at the keyboard. And remember:

  • Never input confidential information into general purpose AI tools. Use anonymized facts or hypotheticals if seeking help on a real case.

  • Always verify legal information using a trusted source (CanLII, Westlaw, Lexis, etc.).

  • Learn to be a critical user of this technology. Do not trust the generated answers over your own thoughtful reading and analysis.

  • Keep your ethical duties front of mind. This guidance from the Law Society of BC is a good place to start.