
Why legal AI needs to think, not just respond
Watch a child learn to walk. They don't follow instructions. They stumble, adjust, try again. Each step teaches them something about balance, about the surface beneath their feet. It's not programming—it's problem-solving.
We've spent decades assuming intelligence could be reduced to following rules. That thinking was just if-then statements. That the messy, adaptive way humans actually solve problems could be replaced by something more orderly.
For legal AI, that assumption is a dead end.
The difference between doing and being told
Imagine building a house. You could hand someone a detailed blueprint and say "follow this exactly." They might build you a house. But what happens when they discover the foundation is uneven? When they realise the electrical plans conflict with the plumbing? When they find something unexpected buried in the ground?
Someone following instructions stops and asks for new instructions. Someone with agency thinks, adapts, and solves the problem in front of them.
This is the fundamental difference between a program and an agent. Programs follow rails—efficient at getting from A to B, but only if the track exists and nothing blocks it. Agents create their own path.
What "agent" actually means
In AI, "agentic" gets thrown around like it's just another feature. It's not. It's a way of operating.
An agent doesn't just process information—it reasons about what that information means. It doesn't just execute tasks—it decides what tasks need doing. It doesn't just provide answers—it questions whether it has enough information to provide good answers.
Think about how you approach a complex case. You don't read through evidence in a fixed order, form a conclusion, and stop. You read something that sparks a question. That question leads you to look for specific evidence. That evidence reveals something unexpected, which changes your theory of the case. You're constantly thinking and doing and thinking again.
Your thoughts don't happen before your actions or after them. They happen during. This is interleaved reasoning, and it's what separates AI that can actually work a problem from fancy search.
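For the technically minded, here's a rough sketch of what an interleaved reasoning loop can look like. It's an illustration, not Andri's actual implementation: the Step structure, the next_step function standing in for the model, and the tool dispatch are all assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    thought: str   # the model's reasoning at this point in the work
    action: str    # "answer" to finish, or the name of a tool to call
    content: str   # the final answer, or the input to hand the tool

def run_agent(question: str,
              next_step: Callable[[list], Step],
              tools: dict[str, Callable[[str], str]],
              max_steps: int = 10) -> str:
    """Alternate thinking and acting until the model decides it can answer."""
    history: list = [("user", question)]
    for _ in range(max_steps):
        step = next_step(history)                  # think: decide what to do next
        history.append(("thought", step.thought))
        if step.action == "answer":                # enough information: conclude
            return step.content
        observation = tools[step.action](step.content)  # act: run the chosen tool
        history.append(("observation", observation))    # the result shapes the next thought
    return "Step limit reached without a final answer."
```

The shape is the point: the thinking happens inside the loop, not before it, so each observation feeds the next thought and an unexpected finding can redirect the whole run.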
Tools make the difference
Humans without tools can only talk. We can describe, explain, theorise. But we can't build, can't measure, can't create. Give us a hammer and we can build a house. Give us a pen and we can write a contract.
The same is true for AI. Without tools, even sophisticated models can only generate text. They can tell you what a legal brief should contain, but they can't research the cases that should be cited. They can explain contract law, but they can't draft the contract you need for your specific situation.
This is why agentic AI matters for legal work. Law isn't just knowing things—it's doing things. Researching precedents. Analysing documents. Comparing cases. Building arguments.
When we built Andri, we didn't just give it legal knowledge. We gave it the ability to act on that knowledge. To use tools. To research when it needs more information. To verify when it's uncertain. To adapt when it discovers something new.
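Mechanically, "the ability to act" can be as simple as a set of named capabilities that a loop like the earlier sketch is allowed to invoke. The tool names and stub bodies below are hypothetical stand-ins, not our real toolset:

```python
# Hypothetical tools, stubbed out for illustration. In a real system each
# would call out to a case-law database, a document pipeline, and so on.

def search_case_law(query: str) -> str:
    return f"[stub] top cases matching: {query}"

def analyse_document(path: str) -> str:
    return f"[stub] key clauses and risks found in: {path}"

TOOLS = {
    "search_case_law": search_case_law,
    "analyse_document": analyse_document,
}

# Plugged into run_agent from the earlier sketch, the model can now research
# when it needs more information instead of only generating text:
#
#   answer = run_agent("Which precedents support this clause?",
#                      next_step=my_model_step,   # hypothetical model adapter
#                      tools=TOOLS)
```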
When everything changes mid-case
Picture this: you're halfway through drafting a motion when you discover evidence that completely changes your understanding of the case. Not a small detail—something fundamental. Something that means the strategy you've been building for weeks might be wrong.
What do you do?
If you're following a script, you're stuck. The script doesn't account for this. But if you have agency, you pivot. You reconsider. You explore this new direction because that's what the case demands now.
This is what we mean by interleaved thinking. Andri doesn't just process your request and deliver a result. It thinks as it works. When it discovers something unexpected in case law, it pauses to consider what that means for your question. When it finds a gap in its knowledge, it fills that gap before continuing.
It's not following a predetermined path to an answer. It's figuring out what the right path is as it goes.
Why this matters for smaller firms
Large firms have teams of associates to research, analyse, and draft. They can throw resources at every angle of a case. Smaller firms have something more valuable: agility.
A small firm can pivot faster, think more creatively, respond more personally to each client's needs. But only with tools that match their thinking.
This is where agentic AI makes a real difference. It's not about replacing the human elements that make small firms effective—it's about amplifying them. When Andri can research while you strategise, analyse while you advocate, and adapt while you adjust your approach, you're not just keeping up with larger firms. You're operating at a level that's hard to match regardless of firm size.
The best legal work doesn't come from having the most resources. It comes from thinking clearly, adapting quickly, and not stopping until you find the right answer.
What we're building
We didn't set out to build another legal AI tool. We set out to build something that reasons the way lawyers reason. Something that doesn't just know the law, but understands how to practise it.
The future of legal AI isn't about computers that follow instructions well. It's about systems that solve problems—that think alongside you, adapt with you, and help you find solutions you might not have discovered alone.
That's agency. And that's what we've built into Andri.
Read also: why agentic reasoning is the only path to production legal AI, how our personalisation triangle works, and what agentic AI actually means in law.