AI in law firms: how to protect confidentiality and professional secrecy

The adoption of artificial intelligence in the legal sector is moving fast, and so is the risk. According to the 2025 ACEDS and Secretariat report, 56% of legal professionals cite data privacy as the main barrier to using AI. It is the most frequently mentioned objection, and probably the most reasonable one. The problem, however, is not AI as a technology, but which tools are used and how the data entered into them are handled.
This article examines what data-protection regulations actually require when using AI in a firm, the differences between types of tools, and the technical and organisational measures that make AI use compatible with the duty of professional secrecy.
The risk lies in everyday use, not in massive breaches
The most common scenario is not a sophisticated cyber‑attack. It is a lawyer copying a contract clause into ChatGPT to “get help improving it,” or uploading a PDF file to request a summary. At that moment, information covered by professional secrecy leaves the firm’s control perimeter.
In a recent interview, Sam Altman, co-founder and CEO of OpenAI, confirmed what cybersecurity leaders had already warned about: conversations with the public version of ChatGPT are not protected by legal confidentiality. The data can be used to retrain models, may be reviewed by the provider’s staff during moderation, and could be exposed in the event of a breach.
The Spanish Data Protection Agency (AEPD) has identified artificial intelligence as one of its priority challenges, and AI‑related complaints have risen markedly. The legal sector, which handles highly sensitive information by definition, is in the spotlight.
What happens to your data depending on the tool
AI tools are not equivalent in terms of privacy. In practice, they fall into three scenarios.
Public, free version
Tools such as ChatGPT, Gemini or Copilot in their consumer versions. By default, user-provided data may be used to train the model. This can be turned off in the settings, but the default configuration does not protect the user. Servers are usually located outside the EU, and there is no contractual guarantee of confidentiality. Using these tools with information covered by professional secrecy poses a direct regulatory risk.
Standard enterprise version
ChatGPT Enterprise, Copilot integrated in Microsoft 365, Azure OpenAI Service, Gemini for Workspace. A substantial improvement over the consumer version: data are not used to train models, and data-processing agreements and confidentiality clauses are in place. However, by default both OpenAI and Azure OpenAI retain data for up to 30 days for abuse monitoring, and provider personnel may access them under certain circumstances. A Zero Data Retention option exists but is not activated automatically; it requires a specific approval process and is usually reserved for direct enterprise customers.
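As an illustration only, the sketch below shows how a firm might route requests through an Azure OpenAI deployment provisioned under its own tenant and data-processing agreement, using the Azure client from the official OpenAI Python SDK. The endpoint, API version, deployment name and environment variable are placeholders; the retention and abuse-monitoring guarantees come from the contract and tenant configuration, not from anything in the code.

```python
# Minimal sketch: calling an Azure OpenAI deployment provisioned under the
# firm's own tenant and data-processing agreement. Endpoint, API version,
# deployment name and environment variable are placeholders; retention and
# abuse-monitoring terms are set contractually, not in this code.
import os

from openai import AzureOpenAI  # Azure client from the official OpenAI SDK

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],           # never hard-code keys
    api_version="2024-02-01",                             # example GA version
    azure_endpoint="https://your-firm.openai.azure.com",  # EU-region endpoint
)

response = client.chat.completions.create(
    model="firm-gpt4o",  # the firm's approved deployment name, not a public model
    messages=[
        {"role": "system", "content": "You assist with contract drafting."},
        {"role": "user", "content": "Suggest clearer wording for this clause: ..."},
    ],
)
print(response.choices[0].message.content)
```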
Specialized tool or private deployment
Two approaches offer the highest guarantees. First, legal-sector-specific AI tools hosted in the EU, with robust encryption, an explicit contractual commitment not to use data for model training, and documented compliance with the GDPR and the EU AI Act. Second, private deployments (self-hosted or in the firm’s own cloud) where data never leave an environment the firm controls.
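Under the same assumptions, the private-deployment scenario can reuse the same client code pointed at an internal, OpenAI-compatible endpoint (served, for example, by vLLM or Ollama inside the firm’s network). The URL, key and model name below are placeholders.

```python
# Sketch of the private-deployment variant: the same SDK, pointed at a
# self-hosted, OpenAI-compatible endpoint inside the firm's own network.
# URL, key and model name are placeholders for whatever the firm hosts.
from openai import OpenAI

client = OpenAI(
    base_url="http://ai.internal.firm.example/v1",  # traffic never leaves the firm
    api_key="internal-only",                        # no external provider involved
)

response = client.chat.completions.create(
    model="local-model",  # the model the firm hosts internally
    messages=[{"role": "user", "content": "Summarise this clause: ..."}],
)
print(response.choices[0].message.content)
```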
The gap between the first and the third scenario is huge, but it is not visible from the user experience. Buying an enterprise licence does not automatically place you in the third scenario. Reading contracts and understanding what each plan covers is part of the job.
What the regulations require
The regulatory framework applicable to the use of AI with client data in a firm covers several fronts.
GDPR Art. 5.1.f – Integrity and confidentiality: Personal data must be processed ensuring security. Using a tool that does not provide concrete contractual guarantees may breach this principle.
GDPR Art. 32 – Appropriate technical and organisational measures: Includes encryption, access controls, audit trails and incident‑response procedures.
GDPR Art. 33 – Notification of breaches to the AEPD within 72 hours: A data leak through an AI tool counts as a personal data breach and triggers this obligation.
Sanctions: Up to €20 million or 4% of global annual turnover, whichever is higher, for the most serious infringements. Public AEPD resolutions in 2025 show that fines for professional firms are not hypothetical. Firms have been penalised for seemingly minor errors, such as copying a third party into an email or publishing photos without clear consent.
EU AI Act Art. 4 – AI literacy: Applicable since 2 February 2025, it obliges organisations that deploy AI systems to ensure the people using them are adequately trained. It is a legal duty, not a recommendation.
Statute of the Legal Profession: The duty of secrecy covers all information known by reason of professional activity. Introducing that information into a third‑party system without adequate guarantees can constitute a deontological breach, with disciplinary consequences.
Concrete measures to protect confidentiality
These are actions a firm’s management can drive without technical expertise, ordered from most urgent to most structural.
Internal AI‑use policy. A 3‑5‑page document that defines which tools are authorised, what type of information must never be entered, which corporate accounts must be used and what to do if misuse is detected. Without a written policy, internal sanctions are impossible and diligence cannot be demonstrated to the AEPD.
Audit of actual use. A non-punitive, open conversation with the team to learn which tools are being used today, with which accounts and for which tasks. This exercise almost always reveals “shadow” use that is already happening.
Approved corporate alternatives. Banning tools without offering alternatives drives clandestine use. Provide the team with authorised tools: an enterprise version with a proper contract, a legal‑sector AI tool with GDPR guarantees, or a private deployment for the most sensitive cases.
Anonymisation and minimisation. The GDPR’s minimisation principle requires processing only the data that are necessary. Replacing names with initials, removing identifiers and reducing context to the essentials lowers risk even with authorised tools (see the sketch after this list).
Training and traceability. Article 4 of the AI Act requires AI literacy. A training session with practical cases, attendance records and documented materials satisfies the requirement and creates evidence.
Impact assessment (DPIA). When implementing an AI tool that processes client data at scale, a DPIA is mandatory. It is the documentary proof of a prior risk analysis.
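A minimal Python sketch of the pre-processing step described in the anonymisation point above. The patterns are deliberately simplistic placeholders (email, Spanish DNI, phone); a real workflow needs a reviewed, matter-specific list and human verification of the output, since pattern matching alone will not catch every identifier.

```python
# Minimal sketch: pseudonymise obvious identifiers before any text leaves
# the firm. The patterns below are simplistic examples; a real workflow
# needs a reviewed, matter-specific list and human verification.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "dni": re.compile(r"\b\d{8}[A-Za-z]\b"),           # Spanish national ID
    "phone": re.compile(r"\b(?:\+34[ .-]?)?\d{9}\b"),  # Spanish phone format
}

def pseudonymise(text: str, names: list[str]) -> str:
    """Replace known party names with initials and mask common identifiers."""
    for name in names:
        initials = ".".join(part[0].upper() for part in name.split()) + "."
        text = text.replace(name, initials)
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

clause = "Contact Juan García at juan.garcia@example.com or 600123456. His DNI is 12345678Z."
print(pseudonymise(clause, names=["Juan García"]))
# -> Contact J.G. at [EMAIL] or [PHONE]. His DNI is [DNI].
```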
How to evaluate an AI provider before signing
A provider that claims to be “GDPR‑compatible” must be able to answer these questions in writing:
- Where are the data processed and stored (country and region)?
- Are the data used to train models, even in anonymised form?
- How long are prompts and responses retained?
- Which provider personnel can access the content and under what circumstances?
- Is there a data‑processing agreement (DPA) and an explicit confidentiality clause?
- How is a security breach notified and within what timeframe?
- What certifications does the provider hold (ISO 27001, SOC 2, Spain’s National Security Scheme (ENS)) and has a system impact assessment been performed?
If the provider evades any of these questions or gives vague answers, it is a clear signal that they are not ready to handle information covered by professional secrecy.
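Purely as an illustration, the written answers can be kept as a structured due-diligence record per provider, so the assessment is documented and comparable across vendors. Every field name, value and threshold below is invented for this sketch; each firm would define its own acceptance criteria.

```python
# Illustrative sketch only: recording a provider's written answers to the
# checklist as structured data. All field names and thresholds are invented.
from dataclasses import dataclass, field

@dataclass
class ProviderAssessment:
    provider: str
    processing_region: str         # e.g. "EU (Frankfurt)"
    trains_on_customer_data: bool  # even in anonymised form?
    retention_days: int            # 0 = zero data retention
    staff_access_conditions: str   # who can read content, and when
    has_dpa: bool                  # signed data-processing agreement
    breach_notice_hours: int       # contractual notification deadline
    certifications: list[str] = field(default_factory=list)

    def acceptable_for_privileged_data(self) -> bool:
        """Example gating rule; each firm must set its own thresholds."""
        return (
            self.has_dpa
            and not self.trains_on_customer_data
            and self.processing_region.startswith("EU")
            and self.breach_notice_hours <= 72
        )

vendor = ProviderAssessment(
    provider="ExampleLegalAI",  # hypothetical vendor
    processing_region="EU (Frankfurt)",
    trains_on_customer_data=False,
    retention_days=0,
    staff_access_conditions="None without documented client consent",
    has_dpa=True,
    breach_notice_hours=24,
    certifications=["ISO 27001", "SOC 2"],
)
print(vendor.acceptable_for_privileged_data())  # True
```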
Using AI responsibly is a structural decision
92% of legal professionals already use at least one AI tool, according to Wolters Kluwer’s Future Ready Lawyer 2026 survey. The question is no longer whether to incorporate AI, but how to do it without compromising the client relationship.
The difference between responsible use and risky use is not the technology; it is the decisions made before use. Internal policy, provider selection, team training and deployment architecture separate a useful tool from a regulatory hazard.
If you want to explore which AI deployment model makes sense for your firm (a specialised tool, an enterprise version with reinforced guarantees, or a custom agent with private infrastructure), we can help you analyse it. Write to us and we’ll reply within 24 hours.
Sources
- ACEDS + Secretariat, Legal Industry Report on AI Adoption, 2025.
- Wolters Kluwer, Future Ready Lawyer Survey, 2026.
- European Data Protection Board (EDPB), ChatGPT Taskforce Report, 2024‑2025.
- Spanish Data Protection Agency (AEPD), Memoria 2024 (annual report) and public resolutions, 2025.
- Regulation (EU) 2016/679 (GDPR).
- Regulation (EU) 2024/1689 (EU AI Act).
- Microsoft Azure, documentation on Zero Data Retention and Modified Abuse Monitoring, 2025.
- OpenAI, Enterprise Use Policy, 2025.