
The Three Starting Points: How Law Firms Are Actually Approaching AI Governance


Every law firm is dealing with the same problem. Not every law firm handles it the same way.

Take a straw poll across the legal industry, from the Am Law 10 to small regional boutiques, and you'll find firms at completely different points on the AI adoption curve: some mid-rollout on an enterprise platform, others still debating whether to allow ChatGPT on firm devices, and still others waiting to see how the early adopters fare when AI is rolled out at enterprise scale. The conversations don't look the same because the starting points don't.


This is the real story behind legal AI governance right now. Not the tools. Not the models. The absence of a shared approach, and how this gap is quietly costing firms that haven't noticed it yet.


Why the Starting Point Matters More Than the Strategy

The data on adoption tells a clear story. According to the 2026 Legal Industry Report from 8am, 69% of legal professionals are already using general-purpose AI tools for work, up from 29% in 2025. And while individual lawyers are moving fast, their firms are not.


54% of firms have no plan to provide training on responsible AI use, and 43% have no plan to develop a formal governance policy at all. That means the majority of AI use in law firms today is happening without oversight, without controls, and without a clear understanding of where data is going.


And just to be clear, firms aren't intentionally ignoring the problem. 


They're approaching it from different angles, in line with their risk tolerance. This fragmentation is creating its own set of risks.


The Three Starting Points

Legal firms of all sizes are approaching the same governance challenge from at least three distinct starting points. And while most are touching all three, very few firms, if any, have connected them into a coherent, holistic strategy. 


Starting Point 1: Tool Management

Tool management is the most common entry point, and the one that keeps IT teams perpetually behind the curve. Which platforms are approved? Who can use them? How do we evaluate new vendors before they're already inside the firm? It sounds manageable until you factor in the market's pace. New tools emerge faster than procurement cycles can absorb them, which means the approved list is always a snapshot of yesterday's landscape.


Half of enterprise AI leaders report their organizations still rely primarily on public tools like ChatGPT or Copilot without additional governance layers. And with law firms among the most data-sensitive verticals in the enterprise, the exposure that comes with relying on consumer-grade tools for client work isn't always apparent until something goes wrong.


On a positive note, however, general counsel are beginning to assess AI systems less on convenience and more on whether their design and governance mechanisms support defensible legal workflows. That shift reframes the question from "Can this tool increase efficiency?" to "Can this tool withstand scrutiny if challenged?" It's the right frame, though most firms aren't there yet.


What is shadow AI in a law firm? Shadow AI refers to the use of AI tools by attorneys or staff without official approval from IT or security teams. In a law firm context, this typically means associates or partners using consumer AI platforms, including ChatGPT, Claude, and Gemini, to process client documents, draft work product, or summarize privileged communications outside any governance framework. It's rarely malicious. It's almost always exposure.

Starting Point 2: Prompt Governance

While some firms are focusing on tool selection, others are working on the behavior layer: defining what attorneys are allowed to ask AI, what data can be included in a prompt, and how outputs are reviewed before they reach clients.


This is harder than it sounds. Writing a policy is one thing. Getting 400 attorneys to follow it consistently is another. 79% of legal professionals use AI tools, but 44% of law firms have not implemented formal governance policies. That's the limitation of policies: they're only effective when they exist and are followed.



Blanket bans aren't the answer. As one legal partner put it: "Lawyers will find a way to use AI anyway — but without controls, they'll do so in an uncontrolled and potentially dangerous manner." The question for prompt governance isn't whether attorneys are using AI. It's whether the firm knows exactly how it’s being used.


What should a law firm AI acceptable use policy cover? A law firm AI acceptable use policy should define which tools are approved for client-related work, what categories of data attorneys may input into prompts, how AI-generated outputs must be reviewed before use, and how violations are handled. The ABA's Formal Opinion 512 establishes the ethical baseline — lawyers must have a reasonable understanding of AI capabilities and limitations. The policy operationalizes that obligation.
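To make that less abstract, here's a minimal sketch of what operationalizing one slice of such a policy could look like: a pre-submission check that screens a prompt against an approved-tool list and a few prohibited data categories before anything leaves the firm. Every tool name, pattern, and category below is an illustrative assumption, not a standard, and a real deployment would lean on DLP classifiers rather than regexes.

```python
import re

# Illustrative, assumed data categories a firm's policy might prohibit in prompts.
# A real deployment would use DLP classifiers, not simple regexes.
PROHIBITED_PATTERNS = {
    "client_matter_number": re.compile(r"\b\d{5}-\d{4}\b"),  # hypothetical matter-number format
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "privilege_marker": re.compile(r"attorney.client privileged", re.IGNORECASE),
}

APPROVED_TOOLS = {"firm-internal-llm"}  # hypothetical approved-tool list


def check_prompt(tool: str, prompt: str) -> list[str]:
    """Return the policy violations a proposed AI prompt would trigger."""
    violations = []
    if tool not in APPROVED_TOOLS:
        violations.append(f"tool '{tool}' is not on the approved list")
    for category, pattern in PROHIBITED_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"prompt contains prohibited data category: {category}")
    return violations


for violation in check_prompt("chatgpt-consumer", "Summarize this attorney-client privileged memo"):
    print("BLOCKED:", violation)  # in practice: log it, block it, route it to review
```

The regexes aren't the point. The point is that the check runs before the prompt leaves the firm, which is the only place a prompt policy is actually enforceable.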

Starting Point 3: Data Security

Data security is the hardest starting point, and the one the fewest firms have answered well. Not because firms don't care, but because the problem is structurally different from the first two.

Tool management and prompt governance are both policy problems: you can write rules, train attorneys, and build an approved list. Data security is a technical problem that policy cannot fully solve, because the data doesn't wait for governance to catch up.


Here's what actually happens inside a firm navigating AI without a data security layer. A summer associate summarizes a deposition transcript using a tool not on the firm's approved list. A junior partner feeds client documents into a public LLM to tighten a draft before a deadline. A senior partner shares deal files with an AI tool to prep for a negotiation. None of it is malicious. All of it is exposure, and none of it shows up in an audit log until something goes wrong.


This is the shadow AI problem at its most consequential. Once a privileged document enters an external AI pipeline, the permissions it had in the firm's document management system don't travel with it. The file leaves, and the access controls stay behind. The ethical wall that existed in iManage or NetDocuments evaporates the moment the text is pasted into a prompt or vectorized into a RAG database.
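To see why the wall evaporates, consider a minimal sketch of a RAG ingestion step, with every structure and name below hypothetical: unless the ingestion code explicitly copies the document's access controls into each chunk, the vector store has nothing to filter on at retrieval time.

```python
from dataclasses import dataclass, field


@dataclass
class DmsDocument:
    """A privileged document as the DMS sees it (hypothetical structure)."""
    doc_id: str
    text: str
    allowed_groups: set[str] = field(default_factory=set)  # the ethical wall


def naive_ingest(doc: DmsDocument) -> list[dict]:
    """What typically happens: only the text is chunked and vectorized."""
    # allowed_groups never leaves the DMS, so anyone who can query the
    # vector store can now retrieve privileged content.
    return [{"text": chunk} for chunk in doc.text.split("\n\n")]


def permission_aware_ingest(doc: DmsDocument) -> list[dict]:
    """Carry the ACL into chunk metadata so retrieval can filter on it."""
    return [
        {"text": chunk, "doc_id": doc.doc_id, "allowed_groups": set(doc.allowed_groups)}
        for chunk in doc.text.split("\n\n")
    ]


def retrieve(chunks: list[dict], user_groups: set[str]) -> list[dict]:
    """Filter permission-aware chunks against the querying user's groups."""
    return [c for c in chunks if user_groups & c.get("allowed_groups", set())]
```

Most off-the-shelf pipelines look like naive_ingest. The permission-aware version isn't hard to write; it just rarely gets written when ingestion and security sit in separate workstreams.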


The client side is moving faster. 64% of in-house teams now expect to rely less on outside counsel as they build AI capabilities internally, while 60% don't know whether their outside firms are using AI on their active matters. That transparency gap is closing: outside counsel guidelines are already beginning to require verifiable AI governance, not just a policy document. The firms that can demonstrate technical controls over client data, rather than merely describe them, will have a material advantage when it does.


The data security starting point isn't the third step. For most firms, it's the one that's already overdue.


Why does legal AI governance fail at the data layer? Most legal AI governance frameworks focus on policy and tool selection — what's approved, what's prohibited. They stop short of the data layer itself. When a privileged document enters an AI pipeline, the permissions attached to it in the firm's document management system don't travel with it. The file leaves. The access controls stay behind. That's the structural gap that policy alone cannot close.

Why Most Firms Are Touching All Three, and Connecting None


The pattern that emerges across the industry is consistent: firms are working on tool management, prompt governance, and data security simultaneously, but in separate workstreams, with separate owners, and no shared framework tying them together.


Knowledge management runs a GenAI pilot. The security team finds out afterward. A managing partner fields a client question about AI use on an active matter before a formal policy exists. An associate uses a tool outside the approved list because the approved list doesn't do what they need.


The biggest surprise of 2026, according to multiple legal AI analysts, will be how many AI pilots quietly fail: not because the models don't work, but because firms underestimate governance, workflow design, and change management. The gap between "AI works" and "AI is trusted" is where firms will lose ground.


The firms pulling ahead aren't the ones with the most tools or the most ambitious pilots. They're the ones who asked "what are we trying to achieve?" before deploying anything, and built backward from the answer. Tool selection, prompt policy, and data controls all follow from a clear destination. Without one, all three remain disconnected starting points.


The Layer That Holds Regardless of Where You Start

Here's the problem with treating data security as the third step: the data is already moving.

Client documents, matter files, and privileged communications are passing through AI workflows whether or not those workflows have been approved. A junior associate uses a public LLM to tidy up a draft. A partner shares deal documents with a tool to prep for a negotiation. None of it is malicious. All of it is exposure.

The data-centric approach to legal AI security addresses this by moving protection to the file itself: encryption and access controls that travel with the document through every tool, every workflow, and every hand-off. It doesn't matter which AI platform a firm chooses or whether the governance strategy is still six months from finalization. Protection that travels with the file works regardless of where a firm starts.
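As a closing sketch of the idea, assuming Python's open-source cryptography package and a deliberately simplified, hypothetical envelope format: the access policy is bound to the ciphertext so the two travel as one file, and opening the document anywhere requires passing the policy check first. Production systems do this with managed key services and policy servers, not a key handed back to the caller.

```python
import json

from cryptography.fernet import Fernet  # pip install cryptography


def protect(document: bytes, allowed_groups: list[str]) -> tuple[bytes, bytes]:
    """Bind an access policy to the ciphertext so both travel as one file."""
    key = Fernet.generate_key()  # in practice, held by the firm's key service
    envelope = {
        "policy": {"allowed_groups": allowed_groups},
        "ciphertext": Fernet(key).encrypt(document).decode(),
    }
    return json.dumps(envelope).encode(), key


def open_if_permitted(envelope_bytes: bytes, key: bytes, user_groups: set[str]) -> bytes:
    """Evaluate the embedded policy wherever the file lands, then decrypt."""
    envelope = json.loads(envelope_bytes)
    if not user_groups & set(envelope["policy"]["allowed_groups"]):
        raise PermissionError("access policy does not permit this user")
    return Fernet(key).decrypt(envelope["ciphertext"].encode())


# The wall travels with the file: a user outside the deal team cannot open it,
# no matter which AI tool or workflow the file has passed through.
sealed, key = protect(b"privileged deal memo", allowed_groups=["deal-team-a"])
print(open_if_permitted(sealed, key, user_groups={"deal-team-a"}))
```

That's the design choice that matters: the control lives in the file's envelope, not in any one tool, which is why it holds regardless of where a firm starts.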
