
What is agentic AI? And why most legal AI tools aren't
January 20, 2026
Every legal AI vendor says they're "agentic" now. It's the buzzword of 2026, right up there with "intelligent" and "autonomous."
But there's a real distinction hiding underneath the marketing. It's the difference between a tool that answers your questions and a tool that does your work. Most legal AI does the former. Almost none does the latter. And the gap between those two things determines whether AI genuinely changes how you practise or just makes you marginally faster at what you were already doing.
Three generations
It helps to think in generations. Not as a ranking — each has its place — but as a way to understand what you're actually buying when someone says "AI."
Generation 1: search, but smarter. The first wave improved how solicitors find things. Instead of Boolean queries in legal databases, you could ask questions in plain English. "What case law exists on break clauses in commercial leases?" The system searches, returns results. That's genuinely useful, but the solicitor still does all the thinking. The tool finds. It doesn't reason.
Generation 2: the answer machine. Large language models — GPT-4, Claude, and their relatives — brought the ability to generate text. Summarise this contract. Draft this clause. Answer this question in fluent, confident English. Most legal AI tools available to UK solicitors today sit here.
The problem is subtle but fundamental. Each question starts from zero. Ask a generation 2 tool to review a supplier contract and you get a summary. Ask it to compare that contract with your standard terms — new task, no memory of the summary it just produced. Ask it to classify the risks — another fresh start. Three prompts, three disconnected answers. You're the one doing the coordination.
We've written about this before. The gap between a tool that sounds right and a tool that actually reasons the way a solicitor thinks is wider than most people realise.
Generation 3: agentic AI. An agentic system receives an instruction and works through it. Not one prompt, one answer — but a plan, executed step by step. Give it "review this supplier contract against our standard terms and flag the risks" and it reads the contract, identifies the clauses, retrieves your standards, compares them systematically, classifies deviations, and delivers a structured report. One instruction. One coordinated output.
The difference isn't just speed. The system is doing the work, not just answering questions about the work.
If that sounds like the difference between a calculator and an accountant — yes. Both work with numbers. One does what you tell it. The other manages the process.
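For the technically minded, the plan-then-execute pattern described above can be sketched in a few lines of Python. Everything here is illustrative: the step names and the `run_model` helper are hypothetical stand-ins, not any vendor's actual API.

```python
# Minimal sketch of a generation-3 "plan, then execute" loop.
# All names (plan steps, run_model) are hypothetical illustrations.

def review_contract(contract_text: str, standard_terms: str) -> dict:
    """One instruction in, one coordinated report out."""
    # 1. Plan: fixed here for clarity; a real agent derives the steps itself.
    plan = ["extract_clauses", "compare_to_standards", "classify_risks"]

    context: dict = {"contract": contract_text, "standards": standard_terms}
    for step in plan:
        # 2. Execute each step, feeding results back into shared context
        #    so later steps build on earlier ones (unlike generation 2,
        #    where each prompt starts from zero).
        context[step] = run_model(step, context)

    # 3. Deliver one structured output rather than three disconnected answers.
    return {"report": context["classify_risks"], "trace": plan}


def run_model(step: str, context: dict) -> str:
    # Stand-in for a call to a specialised model; returns a placeholder.
    return f"result of {step} over {len(context)} inputs"
```

The point of the sketch is the shared `context`: every step reads what earlier steps produced, which is exactly the coordination that three separate prompts leave to the solicitor.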
Why most legal AI isn't agentic
The marketing departments love "agentic." The product rarely matches. Here's what actually separates the two.
It plans before it starts. An agentic system looks at a task, determines what steps are needed, and executes them without you driving each one. A generative assistant waits for your next prompt. If you're directing the process step by step, you're the agent — the tool is just a chatbot with a nicer interface.
It uses multiple models. Legal research calls for different capabilities from document comparison, which in turn calls for different capabilities from risk classification. We use multiple specialised models because no single model is good at everything. A tool that runs one model for every task is making compromises it's not telling you about.
It remembers. An agentic system retains context — not just within a session, but across sessions. It knows what's been discussed, decided, and done on a matter. When Blokziel piloted Andri, this was the single clearest lesson: solicitors don't want to re-explain the same case every time they open the tool. Memory isn't a feature. It's a prerequisite for doing real work.
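As a rough illustration of what cross-session memory means in practice, the sketch below persists matter context to disk between sessions. The file layout and key names are invented for this example and imply nothing about how any real product stores data.

```python
import json
from pathlib import Path

# Illustrative only: matter context persisted between sessions, so the
# next session starts from what was discussed, decided, and done.
MEMORY_DIR = Path("matter_memory")  # hypothetical storage location


def load_matter(matter_id: str) -> dict:
    """Return the stored context for a matter, or an empty record."""
    path = MEMORY_DIR / f"{matter_id}.json"
    if path.exists():
        return json.loads(path.read_text())
    return {"discussed": [], "decided": [], "done": []}


def record(matter_id: str, kind: str, note: str) -> None:
    """Append a note under 'discussed', 'decided', or 'done' and save."""
    memory = load_matter(matter_id)
    memory[kind].append(note)
    MEMORY_DIR.mkdir(exist_ok=True)
    (MEMORY_DIR / f"{matter_id}.json").write_text(json.dumps(memory))
```

However it is stored, the effect is the same: opening the tool on a matter starts from the accumulated record rather than a blank slate.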
It uses tools, not just text. Agentic AI can search databases, retrieve documents, fill court forms, perform calculations. It works with the systems and sources solicitors actually use — including direct connections to government systems and procedural rules. A chatbot works with text. An agent works with your infrastructure.
It checks its own work. This is the hardest part to build and the easiest to fake. Genuine self-reflection means the system recognises when an answer isn't well-supported and goes looking for more before presenting it. Not confidence — rigour.
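The self-checking behaviour above can be sketched as a verify-and-retry loop. The `support_score` function here is a hypothetical placeholder for real evidence checking against retrieved sources.

```python
# Sketch of "check your own work": keep gathering evidence until the
# answer is well-supported or attempts run out. support_score stands in
# for a genuine check of the draft against its sources.

def answer_with_reflection(question: str, max_attempts: int = 3) -> dict:
    evidence: list[str] = []
    for attempt in range(1, max_attempts + 1):
        evidence.append(f"source-{attempt}")  # go looking for more support
        draft = f"answer to {question!r} using {len(evidence)} sources"
        if support_score(evidence) >= 0.8:    # rigour, not confidence
            return {"answer": draft, "supported": True, "attempts": attempt}
    # Present the best draft, but flag that it is not well-supported.
    return {"answer": draft, "supported": False, "attempts": max_attempts}


def support_score(evidence: list[str]) -> float:
    # Placeholder: pretend each extra source adds support.
    return min(1.0, 0.4 * len(evidence))
```

The design choice worth noting is the explicit `supported` flag: a system built this way can tell you when it failed to find enough support, which a fluent one-shot answer never does.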
The test
Three questions that cut through the noise.
Can it execute a complex task in a single instruction? Not five separate prompts that you stitch together yourself, but one instruction that results in a coordinated output. "Analyse these contracts against our standard terms and deliver a risk report." If the tool can't handle that as one task, it's generation 2.
Does it show how it thinks? An agentic system shows its plan, its steps, its sources. You can follow the reasoning and push back on specific points. If you only see a final answer — no matter how fluent — you can't verify the process. In a profession where the reasoning matters as much as the conclusion, that's a problem.
Does it get better as it knows your firm? Agentic AI learns from your standards, your templates, your previous work, your preferences. Every session builds on the last. If the tool treats you like a stranger every time you log in, it's not accumulating the kind of understanding that makes it genuinely useful.
Why the difference matters in practice
This isn't abstract. LawBeam, a UK boutique with six staff, competes with international firms using agentic AI. Their founder Roshi Sharma puts it bluntly: "One solicitor can match up with 100 solicitors somewhere else because of superior technology."
That's not because of better search results. When LawBeam needed to analyse hundreds of meeting transcripts for a product quality dispute — a hundred hours of recordings — an agentic system planned the analysis, worked through the documents, and pulled out the specific sentences they needed for their evidence. Ten minutes. A generation 2 tool would have answered questions about individual transcripts, one at a time, while the solicitors did the actual analysis themselves.
The difference is between working somewhat faster and working in a fundamentally different way.
Where this is going
The legal market is early in a shift that isn't going to reverse. Firms that adopt agentic AI now build two advantages. The obvious one is efficiency — more work in less time, with higher consistency. The less obvious one is compounding: an agentic system that grows with your firm gets more valuable the longer it's used. It accumulates understanding of your standards, your preferences, your clients' patterns.
Firms that stay with generation 1 and 2 tools won't disappear. But as the gap widens between tools that answer questions and tools that do work, staying on the wrong side of that line will cost more every year.
We've been building agentic legal AI since day one. Not because it's a trending term, but because after watching how solicitors actually work — the messy, multi-step, context-heavy reality of legal practice — we couldn't see another way to build something that's genuinely useful.
Read also: the best agentic legal AI tools in 2026 compared and how our triangle of personalisation, tools and memory works.