Legal AI Has No Standard Playbook. That's the Most Important Thing We Heard at ILTA Evolve.
- Patrick Bryden
- May 5
- 4 min read
Updated: May 6
Just like that, ILTA Evolve has come and gone! It was only two weeks ago that we highlighted the sessions security leaders needed to attend. And they delivered! Now that we’re back from Denver and have had time to review our notes, we've come away with one big takeaway.
Every firm at ILTA Evolve was talking about AI. Not all of them were talking about the same thing.
Some were deep in tool evaluation. Others were drafting prompt governance policies. A fair share were trying to figure out how to secure AI without stifling firm growth. And some were just trying to find a place to start. The conversations were wide-ranging, and that range is the big takeaway in itself.

Everyone Is Figuring This Out in Real Time
Walk the floor at ILTA Evolve and you'd find firms at completely different points on the adoption curve. A knowledge management team is rolling out a GenAI contract tool while the security team scrambles to catch up. A managing partner is fielding client questions about AI use on active matters, despite the absence of a formal policy. An AmLaw firm is mid-rollout on a new legal AI platform, while its governance function is still working out how to respond.
This fragmentation isn't a sign that firms aren't taking AI seriously (they are!). It's a sign that AI adoption in the legal industry isn’t following a clean top-down path. It’s spreading laterally across the firm, driven by individual practice groups, enthusiastic partners, or vendors already inside the door.
And when AI adoption happens without a coordinated, board-driven strategy, it becomes piecemeal by default. Different teams solving different problems with different tools — and no one with a clear view of the whole. That's how exposure gaps form, adoption stalls, and data incidents happen.
Why the Stakes Are High Enough to Force a Decision
The client pressure is no longer theoretical. In-house counsel are actively asking outside firms how they're using AI, and in some cases, reviewing time entries to find out. According to the ACC/Everlaw GenAI Survey, 64% of in-house teams now expect to rely less on outside counsel as they build AI capabilities internally. The firms that can't demonstrate how they're using AI aren't just losing the conversation — they're at risk of losing the work.
FOMO-driven AI adoption without a framework leads to predictable outcomes that no firm wants: tool sprawl, shadow AI, and data exposure events. These are the failures that erode the trust premium law firms work so carefully to protect. All it takes is sensitive data passed into a model without anyone realizing it to cause reputational and legal damage, turning that premium into a discount.
This all stems from the lack of a consensus playbook, and that absence isn't just an inconvenience. It's an operational liability.
What Does "No Consensus" Actually Look Like?
Legal firms of all sizes are approaching the same problem from at least three distinct starting points:
Tool management: Which platforms are approved? Who can use what? How do we evaluate new vendors? This is the most common entry point, and it keeps IT teams perpetually behind the curve, because new tools emerge faster than procurement cycles can run.
Prompt governance: What are attorneys allowed to ask AI? What data can enter a prompt? How do we ensure outputs are reviewed before they reach clients? Firms taking this approach are thinking about the behavior layer, not just the tooling.
Data security: Where is sensitive data going? Who controls access? What happens when a file leaves our perimeter and enters a third-party AI workflow? This is arguably the hardest question, and the one the fewest firms have answered well.
Most firms are touching all three. Few have connected them into a coherent strategy.
The Firms Getting It Right Start With the Destination
The big takeaway from the Confidencial team wasn’t a tool recommendation. Rather, it was a mindset shift: legal firms need to separate the destination from the vehicle.
The firms making progress on AI aren't asking "which tool should we use?" first. They're asking, "What are we trying to achieve?" and building backward from there. The strategy needs to be durable, regardless of the tools in use.
In practice, this means a few things. It means getting board members AI-literate enough to help define the goal, not just approve a budget line, and making them part of designing the process rather than readers of a finished deck. It means running short pilot cycles with a clear hypothesis, expanding what works and dropping what doesn't. And it means measuring outcomes that actually matter: win rates, response speed, client retention — not usage percentages or accuracy scores that don't connect to firm performance.
The firms that will pull ahead aren't the ones with the most tools. They're the ones who know what they're solving for.
Where Confidencial Fits
Here's the problem with waiting to finalize the strategy before thinking about data security: the data is already moving.
Client documents, matter files, and privileged communications are already passing through unapproved AI workflows:
- A summer associate experimenting with a tool that sits outside the firm's governance framework
- A junior partner feeding client data into a public LLM to tidy up a draft
- A senior partner sharing deal documents with an AI tool to prep for a client negotiation
None of it is malicious, and all of it is exposure.
Confidencial protects the data itself. Encryption and access controls travel with the file — through every tool, every workflow, every hand-off. It doesn't matter which AI platform a firm lands on, or whether the strategy is still six months from finalization. The protection is already there.
For law firms navigating AI without a consensus playbook, that's the one layer that holds regardless of what else changes.
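For readers who want a concrete picture of what "protection travels with the file" means, here is a minimal, standard-library Python sketch of the general data-centric pattern: an access policy is attached to the document and sealed to it, so any tool that handles the file can verify the seal and enforce the policy before releasing the contents. This is an illustration of the concept only, not Confidencial's implementation; the HMAC seal, the `allowed_users` policy field, and the shared key are all assumptions for the sketch, and a real product would use authenticated encryption rather than a bare integrity seal.

```python
import hashlib
import hmac
import json
import os

# Key held by the protection service (illustrative; real systems manage keys per-file).
SECRET = os.urandom(32)

def seal(payload: bytes, policy: dict) -> dict:
    """Bundle the document with its access policy and seal both together,
    so the policy travels with the file and cannot be stripped or altered
    without breaking the seal."""
    body = json.dumps({"policy": policy, "data": payload.hex()}).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def open_sealed(envelope: dict, user: str) -> bytes:
    """Verify the seal, then enforce the attached policy before releasing data."""
    body = envelope["body"].encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["tag"]):
        raise PermissionError("envelope has been tampered with")
    record = json.loads(body)
    if user not in record["policy"]["allowed_users"]:
        raise PermissionError(f"{user} is not permitted by the attached policy")
    return bytes.fromhex(record["data"])

env = seal(b"privileged memo", {"allowed_users": ["partner@firm.example"]})
print(open_sealed(env, "partner@firm.example"))
```

The point of the pattern is that enforcement does not depend on which AI tool or workflow touches the file: the rules ride along with the data itself, which is why the strategy can change without the protection changing.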



