Shadow AI. The AI You Don't Control.

Shadow AI threatens confidentiality in law firms. Discover the real risks and how to implement a secure, controlled AI strategy.

Shadow AI in Law Firms: Real Risks and How to Respond

There's a phenomenon unfolding in most law firms that very few executives have on their radar: their professionals are already using artificial intelligence. Not the AI the firm has evaluated, approved, or supervised, but the tools each person has found on their own.

According to Wolters Kluwer's Future Ready Lawyer 2026 report, 92% of legal professionals already use at least one AI tool in their daily work. However, according to Thomson Reuters, only 22% of organizations have a defined AI strategy. The rest operate without visibility into what tools are being used, what data is being entered into them, or what risks they're assuming.

This gap between individual adoption and organizational governance is what's known in the tech world as shadow AI. And in a sector where client confidentiality is a professional obligation, not a preference, the consequences can be severe.

The Scope of the Problem

Data from Litify shows that 66% of legal professionals use ChatGPT, 42% Microsoft Copilot, and 24% Google Gemini. These are generalist tools, not designed for the legal sector, and in most cases they're used without the firm having control over the information that enters or leaves them.

A study by ACC and Everlaw covering 657 in-house professionals in 30 countries revealed that 59% of in-house legal teams don't know whether their external firms are using generative AI on their matters. And here's the most revealing part: 58% of adoption is being driven by in-house teams themselves, while only 1% is led by external firms.

Clients are getting ahead of the curve. And they're starting to demand transparency.

Documented Risks, Not Hypothetical Ones

According to Corporate Compliance Insights, there are more than 600 recorded cases of AI hallucinations in legal contexts, involving 128 lawyers. The best known is Mata v. Avianca, where a lawyer submitted six court cases invented by ChatGPT to a federal court in New York. But it's far from the only one. In Johnson v. Dunn, an Alabama court disqualified an entire firm and referred its lawyers to the professional associations in all their jurisdictions.

Courts have been clear on one point: responsibility falls on the lawyer, not the tool.

Beyond hallucinations, the risk of data leakage is equally serious. When a professional enters confidential client information into a public AI tool, that data can end up on servers outside any corporate control. According to the 8am Legal Industry Report 2026, 46% of professionals cite data security as a barrier and 39% specifically mention the risk to professional privilege.
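
To make the failure mode concrete: the only reliable control is keeping identifiers out of the prompt before it ever reaches an external service. Below is a minimal sketch in Python of such a redaction pass. The patterns, placeholder labels, and example prompt are illustrative assumptions, not a production data-loss-prevention system.

```python
import re

# Illustrative patterns only; a real deployment would use a proper
# DLP or named-entity-recognition layer, not a handful of regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text
    is sent to any external AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = ("Draft a letter to maria.perez@client.com confirming "
              "the transfer to ES9121000418450200051332.")
    print(redact(prompt))
    # -> Draft a letter to [EMAIL REDACTED] confirming the transfer
    #    to [IBAN REDACTED].
```

Whatever a filter like this doesn't catch ends up on someone else's servers, which is exactly the exposure the 8am figures describe.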

Why Banning Doesn't Solve Anything

Many firms have responded with internal bans. The evidence shows they don't work. What they achieve isn't eliminating use, but making it invisible.

Wolters Kluwer's analysis in its Future Ready Lawyer 2026 webinar points in a different direction: the most effective way to reduce shadow AI isn't stricter policy, but better design. When approved and secure tools are directly integrated into the firm's workflows, the incentive to resort to external tools drops significantly.

The Cost of Not Having a Strategy

Thomson Reuters data is compelling. Organizations with a defined AI strategy are twice as likely to see AI-linked revenue growth and are 3.5 times more likely to achieve significant benefits, compared to those without a clear plan.

According to ACC and Everlaw, 64% of in-house legal teams expect to rely less on external firms thanks to AI. That means clients are building their own capabilities and starting to decide which firms to hire based on their technological maturity and transparency.

Concrete Steps to Take

You don't need a 40-page document to get started. Just answer these questions:

  • What AI tools are the firm's professionals using, with or without approval?
  • What client data might be entering uncontrolled tools?
  • Is there an internal policy that defines what's allowed and what's not?
  • Has the firm evaluated AI tools specific to the legal sector that meet confidentiality and GDPR requirements?
  • Do clients know if the firm uses AI in their matters?

The first step is an informal audit. Ask teams, without judgment, what tools they use and for what. From there, define a minimum viable policy and evaluate secure alternatives.
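
If the firm has access to proxy or DNS logs, the informal audit can also be backed with hard numbers. Here is a minimal sketch in Python that tallies requests to well-known AI services; it assumes a CSV export of proxy logs with a host column, and the file name and domain list are illustrative, not exhaustive.

```python
import csv
from collections import Counter

# Illustrative mapping; extend it with the services relevant to your firm.
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "copilot.microsoft.com": "Microsoft Copilot",
    "gemini.google.com": "Google Gemini",
}

def tally_ai_traffic(log_path: str) -> Counter:
    """Count requests to known AI services in a proxy-log CSV
    that has a 'host' column."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_DOMAINS.get((row.get("host") or "").lower())
            if tool:
                hits[tool] += 1
    return hits

if __name__ == "__main__":
    for tool, count in tally_ai_traffic("proxy_export.csv").most_common():
        print(f"{tool}: {count} requests")
```

A rough tally like this turns "we think people use ChatGPT" into "we saw this many requests to ChatGPT last month", which is a far better starting point for the minimum viable policy.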

The gap between individual use and organizational strategy doesn't close by itself. But identifying it is the necessary condition to start managing it.


Sources:

  • Wolters Kluwer, Future Ready Lawyer 2026 — 92% of legal professionals use at least one AI tool.
  • Thomson Reuters, 2025 Future of Professionals Report — Only 22% have a defined AI strategy; 2x revenue growth; 3.5x significant benefits.
  • Litify, State of AI Report 2025 — 66% ChatGPT, 42% Copilot, 24% Gemini.
  • ACC / Everlaw, GenAI's Growing Strategic Value 2025 — 59% of in-house teams don't know if their firms use AI; 64% expect to depend less on external firms.
  • 8am Legal Industry Report 2026 — 46% data security; 39% professional privilege.
  • Corporate Compliance Insights — More than 600 hallucination cases, 128 lawyers involved.
  • Wolters Kluwer, Future Ready Lawyer 2026 Webinar — Secure tool design vs. bans.