AI Regulations in the Legal Sector - What Obligations Does the EU AI Act Bring and How to Prepare
Discover how the EU AI Act affects your law firm: obligations, risk classification, and key steps to comply with AI regulations in the legal sector.

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legislation on artificial intelligence. It entered into force on August 1, 2024, applies in phases, and has direct implications for any law firm using or planning to use AI tools in its processes.
If you manage a law office, advisory firm, or arbitration practice in Europe, this affects you. Not as a future possibility, but as an evolving regulatory obligation.
The Focus Is Risk, Not Prohibition
The regulation doesn't prohibit AI in the legal sector. What it does is establish proportional obligations based on risk. The greater the potential impact of an AI system on fundamental rights, the stricter the rules.
AI systems are classified into four levels (prohibited, high-risk, limited-risk, and minimal-risk). What's relevant for the legal sector is in the second category.
What Annex III Says About the Legal Sector
Annex III of the regulation lists AI uses considered high-risk by default. In the "Administration of Justice and Democratic Processes" section, it expressly includes AI systems designed to assist in preparing judicial decisions, evaluating evidence, or interpreting facts and applicable regulations.
If an AI tool materially influences the outcome of a legal determination, it falls into the high-risk category. It doesn't matter whether it's used by a court or a private law firm. What matters is the function it performs.
There's a gray area that hasn't been fully resolved. Is a system that classifies documents in due diligence high-risk? It depends. If its output directly conditions the legal decision, probably yes. If it acts as a preparatory support tool without influencing the final determination, it might not be. Article 6(3) allows providers to argue that their system, despite being in Annex III, doesn't pose a significant risk, but they must document that assessment before marketing it.
The Timeline Has Changed with the Digital Omnibus
Obligations for high-risk systems were to apply from August 2, 2026. But in November 2025, the European Commission proposed the Digital Omnibus package, which among other things delays those deadlines.
In March 2026, the Council of the EU adopted its negotiating position and set new deadlines. December 2, 2027 for autonomous high-risk systems (Annex III) and August 2, 2028 for those integrated into regulated products.
The reason is that the harmonized technical standards companies need in order to comply still aren't ready. CEN-CENELEC, the body responsible for drafting them, has indicated they might not be available before late 2026. The Council is now negotiating with the European Parliament. There's political will to close quickly, but the final text may change.
What's Already in Force
The Digital Omnibus delay affects specific high-risk obligations. But several parts of the AI Act are already applicable.
Since February 2025, prohibited practices are in force. Subliminal manipulation, exploitation of vulnerabilities, generalized social scoring, and real-time remote biometric identification in public spaces (with limited exceptions for security). It's also been mandatory since that date to ensure an adequate level of AI literacy among staff working with these systems.
Since August 2025, obligations for general-purpose AI models apply to their providers. This doesn't directly affect law firms as users, but it does condition what guarantees they can demand from their technology providers.
Concrete Obligations for Users of High-Risk AI
If your firm deploys an AI system classified as high-risk, the regulation assigns you the role of "deployer" or professional user. These are the main obligations.
Effective human oversight. You must designate trained and competent people who can intervene, modify, or override the system's decisions. This isn't nominal oversight. The regulation demands real intervention capacity.
Use according to instructions. The system must be used according to the provider's instructions. If a law firm modifies the intended use or makes substantial changes, it may assume obligations equivalent to those of a provider, which are considerably more demanding.
Record retention. The logs or records generated by the system must be stored for an adequate period. In case of incident, you must be able to reconstruct what the system did and why.
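To make the reconstruction requirement concrete, here is a minimal sketch of an append-only audit log for AI-assisted steps. The function name, file path, and record schema are illustrative assumptions, not anything prescribed by the regulation; the point is that each record ties the system's input, output, and human reviewer to a timestamp so an incident can be traced later.

```python
import json
import datetime

def log_ai_event(logfile: str, system: str, case_ref: str,
                 input_summary: str, output_summary: str,
                 reviewer: str) -> None:
    """Append one auditable record per AI-assisted step (illustrative schema)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,                  # which AI tool produced the output
        "case_ref": case_ref,              # matter or file the step belongs to
        "input_summary": input_summary,    # what was fed to the system
        "output_summary": output_summary,  # what the system returned
        "human_reviewer": reviewer,        # who exercised oversight
    }
    # One JSON object per line: easy to append, easy to replay in order.
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

In practice the provider's own system logs would be the primary record; a firm-side log like this complements them by capturing who reviewed each output.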
Transparency with affected persons. If the AI system is used to make or assist decisions affecting natural persons, they must be informed. In the legal context, this raises very specific questions. Must you inform the client that AI was used to review their contract?
Impact assessment. In certain cases (especially if dealing with public entities or uses affecting fundamental rights) a prior impact assessment is required before deployment.
Three Steps a Legal Executive Can Take Today
The delay to December 2027 isn't an invitation to wait. It's a window to prepare without pressure.
Inventory.
What AI tools are used in your firm? Who uses them? For what processes? Not just corporate ones, but also those professionals use on their own. According to Litify, 78% of legal professionals already use generative AI personally, but only 34% of firms have organizational adoption.
Classify.
Which of those tools could fall into the high-risk category? Does any materially influence legal decisions? This preliminary classification allows sizing the compliance effort.
Ask the provider.
If you use a legal-specific AI tool, ask them this:
- What documentation do they have prepared to comply with the AI Act?
- Is their system registered as high-risk?
- How do they manage the data you input?
The answers will tell you a lot about their maturity level.
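The inventory and classification steps can be sketched as a simple triage script. Everything here is a hypothetical first-pass structure, not a legal classification: the tool names, fields, and the single question driving the triage (does the output materially influence a legal determination?) mirror the Annex III reasoning above, and any borderline result still needs the Article 6(3) documented assessment.

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    used_by: str                     # team, or individual professionals
    process: str                     # e.g. "due diligence", "brief drafting"
    influences_legal_outcome: bool   # does its output condition the legal decision?
    provider_has_ai_act_docs: bool   # answer from the provider questionnaire

def preliminary_risk(tool: AITool) -> str:
    """Rough first-pass triage, not a legal classification."""
    if tool.influences_legal_outcome:
        return "potentially high-risk: check Annex III and Art. 6(3)"
    return "likely limited/minimal risk: verify with counsel"

# Hypothetical inventory entries for illustration only.
inventory = [
    AITool("DraftAssist", "litigation team", "brief drafting", True, False),
    AITool("DocSorter", "M&A team", "due diligence triage", False, True),
]

for tool in inventory:
    print(f"{tool.name}: {preliminary_risk(tool)}")
```

A spreadsheet does the same job; the value is in forcing one explicit answer per tool to the question that drives the classification.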
The Data That Summarizes the Situation
According to Thomson Reuters, only 22% of legal organizations have a defined AI strategy. The remaining 78% are using AI without a clear framework. The AI Act doesn't require having a strategy, but it does require complying with concrete obligations that, without a strategy, are much harder to implement.
The deadline has moved. The problem hasn't.
Sources
- EU AI Act — Regulation (EU) 2024/1689
- Annex III — High-Risk AI Systems (artificialintelligenceact.eu/annex/3)
- EU Council Position on the Digital Omnibus (March 2026)
- IAPP — Digital Omnibus Analysis
- Litify State of AI Report 2025
- 8am Legal Industry Report 2026
- Thomson Reuters 2026
- DLA Piper — Obligations under the EU AI Act, August 2025